SOFSEM 2012 – Invited Speakers
Foundations of Computer Science
SPECIAL EVENT: SESSION ON TURING MACHINES.
- Felipe Cucker (City University of Hong Kong, Hong Kong)
The Legacy of Turing in Numerical Analysis
Alan Mathison Turing is revered among computer scientists for laying down the foundations of theoretical computer science via the introduction of the Turing machine, an abstract model of computation upon which an elegant notion of cost and a theory of complexity can be developed. In this paper we argue that the contribution of Turing to "the other side of computer science", namely the domain of numerical computations as pioneered by Newton, Gauss, ..., and carried out today in the name of numerical analysis, is of an equally foundational nature.
- Peter van Emde Boas (University of Amsterdam, The Netherlands)
Turing Machines for Dummies
Regardless of the precise formal model of Turing Machines used by a given author, all authors agree that a full configuration arising during the computation of such a device must consist of three parts: 1) a finite section of tape with the symbols currently printed at each cell within this segment (all tape cells outside this segment being implicitly blank), 2) the current state of the finite control of the machine, and 3) the position of the reading head within the sequence of tape cells.
For mathematicians it is irrelevant how these configurations are formally written down as a sequence of symbols. One can simply write a triple of objects, or (alternatively) one can write down the sequence of tape symbols, marking the currently scanned symbol by printing the state symbol above, below, or in front of it (called the concise notation hereafter). Both styles of notation have been used throughout the literature (in fact the concise notation is mentioned already in Turing's original paper). Only much later, during the early days of Complexity Theory, was it realized that the concise notation has advantages when Turing machines play their part in Complexity Theory: proving lower bounds or impossibility results.
To our knowledge, Larry Stockmeyer was presumably the first author to clearly bring forward these advantages. In my talk I will present an overview of some applications where the concise notation has been found useful.
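The concise notation described above can be made concrete with a small sketch. The following toy code (an illustration, not taken from the talk; the particular state names and transition table are invented for the example) represents a configuration as a single string with the state symbol written immediately in front of the scanned cell, and applies one transition step directly on that string:

```python
# Concise TM notation: the tape as a string, with the state symbol
# printed immediately in front of the scanned cell.
# Example: "01q310" means tape 0110, head on the third cell, state q3.
# We assume state symbols never occur as substrings of the tape contents,
# so the split of a configuration string is unambiguous.

def make_config(left, state, right):
    """Build a concise configuration from tape-left, state, tape-right."""
    return left + state + right

def parse_config(config, states):
    """Split a concise configuration back into (left, state, right)."""
    for s in sorted(states, key=len, reverse=True):
        i = config.find(s)
        if i != -1:
            return config[:i], s, config[i + len(s):]
    raise ValueError("no state symbol found in configuration")

def step(config, states, delta, blank="_"):
    """Apply one transition. delta maps (state, scanned_symbol) to
    (new_state, written_symbol, move), with move in {'L', 'R'}."""
    left, state, right = parse_config(config, states)
    scanned = right[0] if right else blank
    new_state, written, move = delta[(state, scanned)]
    rest = right[1:] if right else ""
    if move == "R":
        return left + written + new_state + rest
    # move == 'L': the head moves onto the last cell of the left part
    if not left:
        left = blank  # extend the tape with a blank on the left
    return left[:-1] + new_state + left[-1] + written + rest
```

A configuration and its successor are both plain strings over one alphabet, which is exactly what makes this notation convenient for lower-bound arguments: a computation becomes a sequence of strings related by purely local rewriting.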
- Jiří Wiedermann (Institute of Computer Science, Academy of Sciences, Czech Republic)
Towards Computational Models of Artificial Cognitive Systems that Can, in Principle, Pass the Turing Test
We will give plausible arguments in favor of the claim that we already have sufficient knowledge to understand the working of interesting artificial minds attaining high-level cognition, consciousness included. Achieving higher-level AI seems to be a matter not of a fundamental scientific breakthrough, but of exploiting our best theories of artificial minds and our most advanced data-processing technologies. We list the theories we have in mind and illustrate their role and place using the example of a high-level architecture of a conscious cognitive agent with the potential to pass the Turing test.
- Yuri Gurevich (University of Michigan and Microsoft Research, USA)
What’s an Algorithm?
We attempt to put the title problem and the Church-Turing thesis into a proper perspective and to clarify some common misconceptions related to Turing's analysis of computation. We examine two approaches to the title problem, one well-known among philosophers and another among logicians.
- Giuseppe F. Italiano (University of Rome "Tor Vergata", Italy)
Strong Bridges and Strong Articulation Points of Directed Graphs
Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. Strong articulation points and strong bridges are related to the notion of 2-vertex and 2-edge connectivity of directed graphs, which surprisingly seems to have been overlooked in the past. In this talk, we survey some very recent work in this area, both from the theoretical and the practical viewpoint.
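The definitions in the abstract can be checked directly from first principles. The brute-force sketch below (illustrative only; the talk surveys much faster algorithms for these problems) removes each edge or vertex in turn and recounts strongly connected components with Kosaraju's algorithm:

```python
def scc_count(n, edges):
    """Number of strongly connected components (Kosaraju's algorithm)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # First pass: record finishing order with an iterative DFS.
    seen, order = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(adj[u]):
                stack.append((u, i + 1))
                v = adj[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)
    # Second pass: sweep the reverse graph in reverse finishing order.
    seen, count = [False] * n, 0
    for s in reversed(order):
        if seen[s]:
            continue
        count += 1
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
    return count

def strong_bridges(n, edges):
    """Edges whose removal increases the SCC count (simple graphs)."""
    base = scc_count(n, edges)
    return [e for e in edges
            if scc_count(n, [f for f in edges if f != e]) > base]

def strong_articulation_points(n, edges):
    """Vertices whose removal increases the SCC count."""
    base = scc_count(n, edges)
    result = []
    for v in range(n):
        label = {u: i for i, u in enumerate(u for u in range(n) if u != v)}
        sub = [(label[a], label[b]) for a, b in edges if v not in (a, b)]
        if scc_count(n - 1, sub) > base:
            result.append(v)
    return result
```

This check costs a full SCC computation per candidate, so it is only a way to make the definitions concrete; the point of the recent work surveyed in the talk is precisely to do much better.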
Software & Web Engineering
- Paul De Bra (Eindhoven University of Technology, The Netherlands)
A Fully Generic Approach for Realizing the Adaptive Web
The GRAPPLE (EU FP7) project aimed at integrating Learning Management Systems with Adaptive Learning Environments (ALE) in order to support life-long learning. But instead of developing a dedicated Web-based ALE we developed an architecture containing a fully generic adaptive Web server, a distributed User Modeling Framework, and a generic browser-based authoring environment for Domain Models and Conceptual Adaptation Models. The GRAPPLE architecture can be used for creating and serving any type of adaptive Web-based application. It supports content, link, and presentation (layout) adaptation based (in any programmable way) on any desired user model information.
- Pavel Zezula (Masaryk University in Brno, Czech Republic)
Multi Feature Indexing Network (MUFIN) – Similarity Search Platform for Many Applications
Due to extensive digitization and the use of computers connected in networks, search in many data collections is based on specific similarity metrics. The Multi Feature Indexing Network (MUFIN) represents a scalable and extensible similarity search platform for many applications. Its extensibility is achieved by adopting the metric space model of similarity – the technology works for any metric distance measure and can serve applications as diverse as biology, security, geography, multimedia, data cleaning and integration, etc. In order to scale to billions of objects searched on-line by hundreds of queries, structured peer-to-peer (P2P) similarity search networks are applied. In order to tune performance, MUFIN keeps a clear separation between the logical P2P structure and the physical hardware infrastructure, which is used as a service. MUFIN's capabilities will be illustrated on a collection of 100 million images searched by an aggregation of five MPEG-7 color and texture descriptors. We will also demonstrate an application in digital news publishing and cooperation with existing text-based image retrieval systems. Finally, self-organizing search networks of the future will be briefly outlined.
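The metric space model mentioned in the abstract can be illustrated with a toy example. The sketch below is not MUFIN itself (MUFIN is a distributed P2P system); it only shows the core principle that makes indexing possible for any metric distance: precomputed distances to a few pivot objects yield, via the triangle inequality, a lower bound on the query-to-object distance that can filter objects without computing their actual distance. All names and the choice of Euclidean distance are illustrative assumptions.

```python
import math

def euclidean(a, b):
    """Any metric distance works here; Euclidean is used for illustration."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PivotIndex:
    """Toy metric-space index: store each object's distances to a few
    pivots, and use the triangle-inequality lower bound
    |d(q, p) - d(o, p)| <= d(q, o) to skip objects that cannot lie
    within the query radius."""

    def __init__(self, objects, pivots, dist=euclidean):
        self.objects = objects
        self.pivots = pivots
        self.dist = dist
        # Precomputed object-to-pivot distance table.
        self.table = [[dist(o, p) for p in pivots] for o in objects]

    def range_query(self, q, radius):
        dq = [self.dist(q, p) for p in self.pivots]
        hits = []
        for o, row in zip(self.objects, self.table):
            # Best lower bound on d(q, o) over all pivots.
            if max(abs(a - b) for a, b in zip(dq, row)) > radius:
                continue  # filtered without computing d(q, o)
            if self.dist(q, o) <= radius:
                hits.append(o)
        return hits
```

Because the filter uses only metric axioms, the same index works unchanged for edit distance on strings, descriptor distances on images, or any other metric, which is the extensibility property the platform builds on.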
Cryptography, Security, and Verification
- Orna Kupferman (Hebrew University in Jerusalem, Israel)
Recent Challenges and Ideas in Temporal Synthesis
In automated synthesis, we transform a specification into a system that is guaranteed to satisfy the specification against all environments. While model-checking theory has led to industrial development and use of formal-verification tools, the integration of synthesis in industry is slow. This has to do with theoretical limitations, such as the complexity of the problem; methodological reasons, such as the lack of satisfactory compositional synthesis algorithms; and practical reasons: current algorithms produce systems that satisfy the specification, but may do so in a peculiar way and may be larger or less well-structured than systems constructed manually.
The research community has managed to suggest some solutions to these limitations, and bring synthesis algorithms closer to practice. Significant barriers, however, remain. Moreover, the integration of synthesis in real applications has taught us that the traditional setting of synthesis is too simplified and has brought with it new algorithmic challenges.
The talk will introduce the synthesis problem and algorithms for solving it, and will survey recent promising ideas for making temporal synthesis useful in practice.
- Krzysztof Pietrzak (Cryptology Research Group, IST Vienna, Austria)
Efficient Cryptography from Hard Learning Problems
The learning parity with noise (LPN) problem, also known as the problem of decoding random linear codes, is a well-studied problem from coding theory. It has attracted much attention from the cryptographic community as a tool to construct cryptosystems that are so simple and efficient that they can be implemented on very weak devices such as RFID tags. In this talk I will present some very recent constructions, including secret-key and public-key authentication protocols.
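The LPN problem itself is simple to state, which is part of its appeal for weak devices. A sample is a random bit vector a together with the inner product of a with a fixed secret s modulo 2, flipped with some noise probability tau; the problem is to recover s from many such samples. The sketch below (an illustration of the problem, not of the talk's protocols; all names are chosen for the example) generates LPN samples and measures how well a candidate secret explains them:

```python
import random

def lpn_samples(secret, m, tau, rng=random):
    """Generate m LPN samples (a, <a, s> + e mod 2) with noise rate tau.
    secret is a list of bits; each a is uniform; e = 1 with probability tau."""
    n = len(secret)
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        noiseless = sum(x & s for x, s in zip(a, secret)) % 2
        e = 1 if rng.random() < tau else 0
        samples.append((a, noiseless ^ e))
    return samples

def agreement(candidate, samples):
    """Fraction of samples consistent with a candidate secret.
    The true secret agrees on about a (1 - tau) fraction; a wrong
    candidate agrees on about half."""
    ok = sum(1 for a, b in samples
             if sum(x & s for x, s in zip(a, candidate)) % 2 == b)
    return ok / len(samples)
```

With tau = 0 the secret can be recovered by Gaussian elimination; with noise, the problem is exactly decoding a random linear code, which is believed hard, and noisy parity evaluations require only bit operations a cheap tag can afford.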
SPECIAL EVENT: SESSION ON TURING MACHINES.
- Roberto Navigli (Sapienza University of Rome, Italy)
Don’t Take Shortcuts! Computational Lexical Semantics and the Turing Test
Do machines really need to understand text (like we do) in order to pass the Turing Test? If they do not, is the test robust enough? If they do, however, is it realistic to expect machines ever to be capable of grasping the meaning of what we write? An important task in computational lexical semantics is called Word Sense Disambiguation (WSD). WSD consists of automatically associating meaning with words in context. It is a long-standing problem in the field of computational linguistics and there can be no doubt it is a tough one. All too frequently researchers have obtained disappointing results, not only in terms of disambiguation quality, but also when their WSD systems have been plugged into applications such as Information Retrieval and Machine Translation. Nevertheless, this pessimistic scenario has been progressively changing over the last decade, to the point that high disambiguation performance has been reported in recent work on the topic, indicating that WSD is a core component in enabling automatic text understanding and, hopefully, in achieving an authentic passing of the Turing Test.
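To make the WSD task concrete, here is a minimal sketch in the style of the classic Lesk gloss-overlap heuristic (a simplified baseline, not the state-of-the-art systems the abstract refers to; the sense inventory and glosses below are invented for the example). The sense chosen for an ambiguous word is the one whose dictionary gloss shares the most words with the surrounding context:

```python
def lesk(context, glosses):
    """Simplified Lesk: pick the sense whose gloss shares the most
    words with the context. glosses maps sense ids to gloss strings."""
    ctx = set(context.lower().split())

    def overlap(sense):
        return len(ctx & set(glosses[sense].lower().split()))

    return max(glosses, key=overlap)

# Hypothetical mini sense inventory for the ambiguous word "bank":
BANK_GLOSSES = {
    "bank.finance": "an institution that accepts deposits and lends money",
    "bank.river": "the sloping land alongside a river or stream",
}
```

Real disambiguation needs far richer knowledge than raw word overlap (which is precisely why the problem is so hard), but the sketch shows the shape of the task: mapping a word in context to one sense from an inventory.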
- Kevin Warwick (University of Reading, United Kingdom)
Not Another Look at the Turing Test!
Practical application of the Turing Test throws up all sorts of questions regarding the nature of intelligence in both machines and humans. For example: can machines tell original jokes? What would it mean to a machine if it did so? It has been found that, when acting as interrogators, even top philosophers can be fooled into thinking a machine is human and/or a human is a machine – why is this? Is it that the machine is performing well, or is it that the philosopher is performing badly? All these questions, and more, will be considered. Just what does the Turing test tell us about machines and humans? Actual transcripts will be considered, with startling results.