
  Invited Speakers

Foundations of Computer Science

  • Osamu Watanabe (Tokyo Institute of Technology, Japan)

    Computational Complexity for Space Efficient Algorithms

    Space-efficient algorithms have (again) been actively studied, motivated by requirements arising in practical applications. In particular, we consider here algorithms that run in polynomial time and sublinear working space with respect to an input size parameter specified by each problem.
    Investigating the possibility of having such algorithms is also an important and interesting topic in computational complexity theory, in particular for understanding the nature of "computation".
    We survey some recent results and discuss some open questions from the viewpoint of computational complexity theory.
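    As a concrete illustration of the flavour of such algorithms (our own example, not necessarily one discussed in the talk), the Boyer-Moore majority vote algorithm runs in linear time using O(1) working space beyond the read-only input:

```python
def majority_candidate(xs: list):
    """Boyer-Moore majority vote: O(n) time, O(1) extra working space.

    Returns the majority element of xs, assuming one exists (i.e. some
    element occurs more than len(xs) / 2 times); otherwise the result is
    arbitrary and a second verification pass over the input is needed.
    """
    candidate, count = xs[0], 0
    for x in xs:
        if count == 0:          # no live candidate: adopt this element
            candidate = x
        count += 1 if x == candidate else -1
    return candidate

print(majority_candidate([2, 1, 2, 3, 2, 2, 1]))  # 2 (occurs 4 times out of 7)
```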

  • Sanjay Jain (National University of Singapore, Singapore)

    Learning Automatic Families of Languages

    In this talk we will survey learning of automatic families of languages. A family ${\cal L}$ of languages over some fixed alphabet $\Sigma$ is said to be automatic if there exists an indexing $(L_{\alpha})_{\alpha \in J}$ of the languages in ${\cal L}$ such that
    (a) $J$ is a regular set and
    (b) $\{\conv(\alpha,x): x \in L_{\alpha}, \alpha \in J\}$ is regular.

    Here, $\conv(\alpha,x)$ denotes the convolution of $\alpha$ and $x$, obtained by pairing the $i$-th symbol of $\alpha$ with the $i$-th symbol of $x$, where the shorter string is padded at the end with a special symbol $\# \not \in \Sigma$.
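    The convolution can be sketched directly from this definition (a minimal illustration; the function and symbol names are ours):

```python
PAD = "#"  # padding symbol, assumed not to occur in the alphabet Sigma

def conv(alpha: str, x: str) -> list:
    """Pair the i-th symbol of alpha with the i-th symbol of x,
    padding the shorter string with '#' at the end."""
    n = max(len(alpha), len(x))
    return list(zip(alpha.ljust(n, PAD), x.ljust(n, PAD)))

print(conv("01", "0110"))  # [('0', '0'), ('1', '1'), ('#', '1'), ('#', '0')]
```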

    In this talk we will survey several results on the learnability of automatic families by both general and automatic learners. Here an automatic learner maintains a memory and processes the data in cycles: in each cycle it reads one datum, updates its memory and makes a conjecture. The function mapping (old memory, datum) to (new memory, hypothesis) has to be automatic, that is, the convolution of its graph is regular. Investigations on automatic learners aim at understanding the interplay between the memory limitations and the computational power of these learners, and at determining which restrictions are real limitations and which are implied anyway by the set-up of the learning framework.
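    The cycle structure of such a learner can be sketched as follows (a toy illustration: the update rule and hypothesis names are invented, and automaticity of the update function is not checked here):

```python
from typing import Callable, Tuple

# The learner is given as a function (old memory, datum) -> (new memory, hypothesis).
Update = Callable[[str, str], Tuple[str, str]]

def run_learner(update: Update, data: list, memory: str = "") -> list:
    """Run one cycle per datum, collecting the sequence of conjectures."""
    conjectures = []
    for datum in data:
        memory, hypothesis = update(memory, datum)
        conjectures.append(hypothesis)
    return conjectures

# Toy update rule: remember the longest datum seen so far and conjecture
# an (invented) index encoding its length.
def longest_seen(memory: str, datum: str) -> Tuple[str, str]:
    memory = datum if len(datum) > len(memory) else memory
    return memory, "L_" + str(len(memory))

print(run_learner(longest_seen, ["0", "010", "11"]))  # ['L_1', 'L_3', 'L_3']
```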

    The results presented in this talk are based on joint work with John Case, Efim Kinber, Trong Dao Le, Qinglong Luo, Eric Martin, Yuh Shin Ong, Shi Pu, Pavel Semukhin, Frank Stephan and others.

  • Gilles Brassard (University of Montreal, Canada)

    Cryptography in a Quantum World

    Cryptography, although practiced as an art and science for thousands of years, had to wait until the end of the 1940s before Claude Shannon gave it a strong mathematical foundation. However, Shannon's approach was rooted in his own information theory, itself inspired by the classical physics of Newton. But our world is ruled by the laws of quantum mechanics. When quantum-mechanical phenomena are considered, new vistas open up both for cryptographers (code makers) and cryptanalysts (code breakers). Some theorems (including some of Shannon's) remain mathematically correct, but become irrelevant in our quantum world. Most strikingly, it is possible for two people who do not share ahead of time a long secret key to communicate in perfect secrecy under the nose of an eavesdropper with unlimited computing power and whose technology is limited only by the known laws of physics. Conversely, quantum mechanics provides powerful tools to threaten the mechanisms that are currently used on the Internet to protect electronic transactions. Furthermore, it seems (though it is not yet proven) that quantum mechanics provides more benefits to cryptanalysts than cryptographers if the latter are restricted to using only classical communication channels. So, in the end, is quantum mechanics a blessing or a curse to the protection of privacy? The jury is still out. No prior knowledge of quantum mechanics or cryptography will be expected from the audience.

Software Engineering: Methods, Tools, Applications

  • Michael Goedicke (University of Duisburg-Essen, Germany)

    SEMAT – A Fresh Founding for Software Engineering

    SEMAT is an acronym for Software Engineering Method And Theory; it is an association and a group of people that aims to refound Software Engineering in a systematic and future-proof way by building on established results and findings. This is done by identifying a core and extension mechanisms to support the description and management of complex software processes. We present the first important results, with emphasis on the work of the SEMAT theory track.

  • Dániel Varró (Budapest University of Technology and Economics, Hungary)

    Incremental Queries and Transformations: From Concepts to Industrial Applications

    Model-driven engineering (MDE) is widely used nowadays in the design of embedded systems, especially in the automotive, avionics and telecommunication domains. Behind the scenes, design and verification tools in these domains frequently exploit advanced model query and transformation techniques to support various rich tool features. The rapid increase in the size and complexity of system models has drawn significant attention to incremental model query and transformation approaches, which enable fast, incremental reactions to model changes caused by systems engineers or automated design steps.
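    The core idea behind incremental queries can be sketched in a few lines (a deliberately minimal illustration, not the API of any actual MDE tool): the query result is computed once, cached, and then maintained from model-change deltas instead of being recomputed from scratch.

```python
class IncrementalTypeQuery:
    """Incrementally maintained result set for a trivial query:
    'all model elements of a given type'. The model is represented
    as (element, type) pairs for illustration only."""

    def __init__(self, model: list, wanted_type: str):
        self.wanted = wanted_type
        # Initial (one-time) evaluation over the whole model.
        self.result = {obj for obj, t in model if t == wanted_type}

    def on_add(self, obj: str, t: str) -> None:
        # React to a change delta instead of re-running the query.
        if t == self.wanted:
            self.result.add(obj)

    def on_remove(self, obj: str, t: str) -> None:
        self.result.discard(obj)

q = IncrementalTypeQuery([("e1", "Engine"), ("w1", "Wheel")], "Engine")
q.on_add("e2", "Engine")     # delta: element created by an engineer
q.on_remove("e1", "Engine")  # delta: element deleted
print(sorted(q.result))      # ['e2']
```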

Data, Information and Knowledge Engineering

  • Norman Paton (University of Manchester, United Kingdom)

    Pay-as-you-go Data Integration: War Stories and Recurring Themes

    Data integration typically seeks to provide the illusion that data from multiple distributed sources comes from a single, well managed source. Providing this illusion in practice tends to involve the design of a global schema that captures the user's data requirements, followed by manual (albeit tool-supported) construction of mappings from the sources to the global schema. This overall approach can provide high quality integrations, but at high cost, and tends to be unsuitable for areas with large numbers of rapidly changing sources, where users may be willing to cope with a less than perfect integration. Pay-as-you-go data integration has been proposed to overcome the need for costly manual data integration. It tends to involve two steps: initialisation, the automatic creation of mappings (generally of poor quality) between sources; and improvement, the obtaining of feedback on some aspect of the integration and the application of this feedback to revise the integration.
    This talk reviews some experiences with pay-as-you-go data integration, with a view to understanding what works well (or not), and identifying recurring themes.
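    The two steps can be sketched abstractly (a toy model: the mapping names and scoring scheme are invented, and real systems use far richer mapping representations and feedback types):

```python
def initialise(candidates: list) -> dict:
    """Step 1, initialisation: mappings are generated automatically,
    so every candidate starts with the same middling confidence."""
    return {m: 0.5 for m in candidates}

def improve(scores: dict, feedback: dict) -> dict:
    """Step 2, improvement: apply (mapping -> correct?) user feedback
    to promote confirmed mappings and demote rejected ones."""
    for mapping, correct in feedback.items():
        delta = 0.25 if correct else -0.25
        scores[mapping] = min(1.0, max(0.0, scores[mapping] + delta))
    return scores

# Hypothetical mapping names, for illustration only.
scores = initialise(["src.name->global.title", "src.id->global.title"])
scores = improve(scores, {"src.name->global.title": True,
                          "src.id->global.title": False})
print(scores)  # {'src.name->global.title': 0.75, 'src.id->global.title': 0.25}
```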

  • Themis Palpanas (Paris Descartes University, France)

    Big Sequence Management

    There is an increasingly pressing need, in several applications from diverse domains, for techniques able to index and mine very large collections of sequences, or data series. Examples of such applications come from biology, astronomy, entomology, the web, and other domains. It is not unusual for these applications to involve hundreds of millions to billions of data series, which oftentimes cannot be analyzed in their full detail due to their sheer size.
    In this talk, we describe recent efforts in designing techniques for indexing and mining truly massive collections of data series that will enable scientists to easily analyze their data. We show that the main bottleneck in mining such massive datasets is the time taken to build the index, and we introduce solutions to this problem. Furthermore, we discuss novel techniques that adaptively create data series indexes, allowing queries to be answered correctly before the indexing task is finished. We also show how our methods make it possible to mine datasets that would otherwise be completely untenable, including the first published experiments using one billion data series.
    Finally, we present our vision for the future in big sequence management research.
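    The scheduling idea behind adaptive index creation can be sketched in miniature (an illustration of the idea only, not the published data series indexing algorithms; real systems answer similarity queries over raw series, not key lookups):

```python
class AdaptiveIndex:
    """Answer queries correctly while the index is still being built:
    a query falls back to the raw data when needed, and whatever it
    touches is moved into the index as a side effect, so later queries
    scan less and less raw data."""

    def __init__(self, series: dict):
        self.unindexed = dict(series)  # raw data, not yet indexed
        self.index = {}                # built incrementally, query by query

    def query(self, key: str):
        if key in self.index:          # fast path: already indexed
            return self.index[key]
        if key in self.unindexed:      # slow path: fetch from raw data,
            self.index[key] = self.unindexed.pop(key)  # then index it
            return self.index[key]
        return None

idx = AdaptiveIndex({"s1": [1.0, 2.0, 3.0], "s2": [3.0, 1.0, 2.0]})
print(idx.query("s1"))    # [1.0, 2.0, 3.0] -- correct answer before indexing is done
print(sorted(idx.index))  # ['s1'] -- only the touched series has been indexed
```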
