Invited Speakers

Major Achievements in Software Studies

  • Wolfgang Paul (Saarland University, Saarbrücken, Germany)

    Theory of Multicore Hypervisor Verification

    In the years 2007 to 2010, researchers from Microsoft and the German Verisoft XT project attempted the complete formal verification of a hypervisor for multicore processors, which is part of Windows 7. The project succeeded in developing a tool called VCC, which permits the formal verification of concurrent C code with quite satisfactory productivity, provided one knows what theorems to prove. When the project ended in 2010, crucial and tricky portions of the hypervisor product had been formally verified, but an overall theory of multicore hypervisor correctness was still far from complete, even on paper. Since 2010, most of this theory has been worked out. In this talk we survey this surprisingly rich theory, give references to the portions that have been worked out, and identify the work that still needs to be done.
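
    As a hint at what formal verification of concurrent C code with VCC looks like in practice, the toy function below is a hypothetical example (not code from the hypervisor) showing the style of contract annotations that VCC checks; the actual concurrent proofs additionally rely on ownership and claim annotations that are omitted here.

        #include <vcc.h>
        #include <limits.h>

        /* VCC reads the _(...) annotations as a specification and proves
           that every execution of the function body satisfies them. */
        int safe_increment(int x)
          _(requires x < INT_MAX)       /* precondition: no signed overflow */
          _(ensures \result == x + 1)   /* postcondition: exact result      */
        {
          return x + 1;
        }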

  • František Plášil (Charles University, Prague, Czech Republic)

    Software Components in Computer Assisted Living?

    Component-based software engineering has developed mature techniques for modeling software by composition of components. They facilitate modeling of many kinds of systems, ranging from enterprise systems to embedded control systems. The common denominator of these systems is that their architecture is relatively static (i.e. the systems do not significantly evolve at runtime). This is, however, in strong contrast to the characteristics of modern ubiquitous systems that aim at assisting humans in their lives (e.g. systems for smart transportation, smart energy, and eldercare services) and that are one of the key priorities of EU R&D programs (e.g. FP7 ICT, ITEA2, ARTEMIS). Such systems are typically open-ended and need to dynamically evolve their architecture in response to changes in the physical world. In this talk, we investigate these future systems and outline challenges and ways of addressing their development via components.

Foundations of Computer Science

  • Peter Sanders (KIT, Karlsruhe, Germany)

    Engineering Algorithms for Large Data Sets

    For many applications, the data sets to be processed grow much faster than can be handled with the traditionally available algorithms. We therefore have to come up with new, dramatically more scalable approaches. In order to do that, we have to bring together know-how from the application, techniques from traditional algorithm theory, and low-level aspects like parallelism, memory hierarchies, energy efficiency, and fault tolerance. The methodology of algorithm engineering, with its emphasis on realistic models and its cycle of design, analysis, implementation, and experimental evaluation, can serve as the glue between these requirements. The talk outlines the general challenges and gives examples from my work, such as sorting, full-text indexing, graph algorithms, and database engines.
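
    As background on the memory-hierarchy aspect (a standard result from the external-memory model, added here for context and not taken from the talk): with internal memory of size M and block transfers of size B, sorting N items requires

        \mathrm{Sort}(N) = \Theta\!\left( \frac{N}{B} \log_{M/B} \frac{N}{B} \right) \text{ I/Os},

    which is exactly the kind of realistic cost model that engineering algorithms for large data sets has to respect.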

  • Gerhard J. Woeginger (Eindhoven University of Technology, The Netherlands)

    Coalition Formation in Hedonic Games

    In many economic, social, and political situations, individuals carry out activities in groups (coalitions) rather than on their own. Examples range from households and sports clubs to research networks, political parties, and trade unions. The underlying game-theoretic framework is known as "coalition formation". The talk discusses the central concepts and algorithmic approaches in the area, provides many examples, and poses a number of open problems.
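
    As formal background (a standard definition, not taken from the abstract itself), the framework can be sketched as follows:

        A hedonic game is a pair $(N, (\succeq_i)_{i \in N})$: a finite set $N$ of players, each equipped with
        a preference order $\succeq_i$ over the coalitions $S \subseteq N$ that contain $i$. An outcome is a
        partition $\pi$ of $N$ into coalitions, and $\pi$ is core stable if no nonempty $S \subseteq N$
        satisfies $S \succ_i \pi(i)$ for every $i \in S$, where $\pi(i)$ denotes the coalition of $i$ in $\pi$.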

Software & Web Engineering

  • Sjaak Brinkkemper (Utrecht University, The Netherlands)

    Software Production: Research Challenges of the Software Industry

    Increasingly, software products are being offered in an online mode, also called software-as-a-service. Serving millions of users, possibly spread across the world, from a central software-producing organization brings about many challenges and will require several innovations from the software engineering domain. In this keynote we will introduce the notion of software production, which unifies the whole range of software development activities with the continuous operations of hosting and updating the online software products. We will present an overview of recent results in the areas of software product management, software implementation, software operation knowledge, and software ecosystems, which make up some of the research areas of software production. This overview is accompanied by a series of intriguing challenges for the research community.

  • Dirk Riehle (Friedrich-Alexander-University of Erlangen-Nuremberg, Germany)

    Best of (our) Empirical Open Source Research

    Open source software is publicly developed software. Thus, for the first time, we can broadly analyse in data-driven detail how people program, how bugs come about, and how we could improve our tools. In this talk, I'll review six years of our empirical (data-driven) open source research and highlight the most interesting insights, including how different (or not) open source programming is from closed source programming.

Data, Information and Knowledge Engineering

  • Fabien Gandon (INRIA, Sophia Antipolis, France)

    ISICIL: Semantics and Social Networks for Business Intelligence

    The ISICIL initiative (Information Semantic Integration through Communities of Intelligence onLine) mixes viral new web applications with formal semantic web representations and processes in order to integrate them into corporate practices for technological watch, business intelligence, and scientific monitoring. The resulting open-source platform offers three functionalities: (1) a semantic social bookmarking platform monitored by semantic social network analysis tools, (2) a system for semantically enriching folksonomies and linking them to corporate terminologies, and (3) semantically augmented user interfaces, activity monitoring, and reporting tools for business intelligence.

  • Aldo Gangemi (Semantic Technology Lab, Rome, Italy)

    Discovering, Recognizing, and Using Knowledge Patterns

    The symbolic level of information is currently quite well understood in terms of "pattern science" for tasks such as discovery and recognition (cf. Keith Devlin's and Ulf Grenander's work, as applied to data, images, text, and graphs). Far less understood is how to perform those tasks at the knowledge level, i.e. on interpreted symbols (whatever that interpretation is considered to consist of), on which a machine can reason. At STLab we are trying to fill this gap by studying ontology design patterns, as well as by performing empirical research on knowledge patterns from data, lexicons, text, web links, etc. An overview will be given, and some past and future experiments will be presented or proposed to the community.

Social Computing and Human Factors

  • Michael Beetz (University of Bremen, Germany)

    Cognition-Enabled Autonomous Robot Control for the Realization of Home Chore Task Intelligence

    This talk gives an overview of cognition-enabled robot control, a computational model for controlling autonomous service robots to achieve home chore task intelligence. For the realization of task intelligence, this computational model puts forth three core principles, which essentially involve combining reactive behavior specifications, represented as semantically interpretable plans, with inference mechanisms that enable flexible decision making. The representation of behavior specifications as plans enables the robot not only to execute them but also to reason about them and alter them during execution. I will provide a description of a complete system for cognition-enabled robot control that implements the three core principles, demonstrating the feasibility of our approach.

  • Arnold Smeulders (University of Amsterdam, The Netherlands)

    Searching Things in Large Sets of Images

    In this presentation we discuss the challenges of computer vision in general search through images and videos. The point of departure is the digital content of the images, a set of 50 to 100 examples, and smart features, machine learning, and computational tools to find the images that best answer a one-word specification. This problem is far from being solved, yet the progress recorded in the yearly TRECvid competitions for image search engines is impressive. We will discuss the difference between the "where" and the "what" in images, the need for invariance, fast computational approaches, and the state of the art in image search engines.
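
    One common instantiation of the examples-plus-machine-learning setup (a generic sketch, not necessarily the pipeline presented in the talk) is to train a classifier on the 50 to 100 labeled examples over an invariant feature representation $\phi(x)$ of each image $x$, for instance a bag-of-visual-words histogram, and to rank the whole collection by the decision score

        f(x) = w^{\top} \phi(x) + b,

    returning the highest-scoring images as answers to the one-word query.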