Invited Speakers

Keynote Talk

  • Jan van Leeuwen (Utrecht University, The Netherlands)
    Jiří Wiedermann (Institute of Computer Science, Academy of Sciences, Czech Republic)

    “What is Computation: An Epistemic Approach”

    Traditionally, computations are seen as processes that transform information. Definitions of computation subsequently concentrate on a description of the mechanisms that lead to such processes. The bottleneck of this approach is twofold. First, it leads to a definition of computation that is too broad and that precludes a separation of entities that, according to prevailing opinions, do perform computation from those that do not. Secondly, it also leads to a `machine-dependent' notion of computation, complicating the identification of computational processes. We present an alternative view of computation, viz. that of a knowledge-generating process. From this viewpoint, computations create knowledge within the framework of `more or less' formalized epistemic theories. This new perception of computation allows one to concentrate upon the meaning of computations – what they do for their designers or users. It also enables one to see the existing development of computers and information technologies in a completely new perspective. It permits the extrapolation of the future of computing towards knowledge generation and accumulation, and the creative exploitation thereof in all areas of life and science. The flux of our ideas on computation brings challenging new problems to the respective research, with wide connotations in the field of artificial intelligence, in the cognitive sciences, and in the philosophy, epistemology and methodology of science.

Foundations of Computer Science

  • Magnús M. Halldórsson (Reykjavik University, Iceland)

    Progress (and Lack Thereof) for Graph Coloring Approximation Problems

    We examine the known approximation algorithms for the classic graph coloring problem in general graphs, with the aim of extracting and restating the core ideas. We also explore a recent edge-weighted generalization motivated by the modeling of interference in wireless networks. Besides examining the current state of the art and the key open questions, we indicate how results for the classical coloring problem can be transferred to the approximation of general edge-weighted graphs.
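    The usual baseline against which coloring approximations are measured is the first-fit greedy, which never uses more than Δ+1 colors on a graph of maximum degree Δ. A minimal sketch (the example graph and vertex ordering are illustrative, not taken from the talk):

```python
def greedy_coloring(adj):
    """First-fit greedy: color vertices in the given order with the smallest
    color not used by an already-colored neighbor; uses at most Delta+1 colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 5-cycle: chromatic number 3; this ordering happens to use exactly 3 colors.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors = greedy_coloring(adj)
```

The hardness results surveyed in the talk concern how far above the chromatic number such polynomial-time heuristics must land in the worst case.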

  • Jared Saia (University of New Mexico, USA)

    “Truth, Lies and Random Bits”

    Random bits are used in computer science to break symmetry and deadlock, ensure load-balancing, find a representative sample, maximize utility, and foil an adversary. Unfortunately, true randomness is difficult to guarantee, especially in a distributed setting where some agents may not be trustworthy. What happens if a hidden cabal is generating bits that are not random? Can we detect and neutralize such behavior?
    In this talk, we address this question in the context of a classic problem in distributed computing: Byzantine agreement. In Byzantine agreement, n agents, each with a private input, must agree on a single common output that is equal to some agent's input. Randomization is provably necessary to solve this problem, but past randomized algorithms required expected exponential time. We describe a new spectral algorithm that requires expected polynomial time. Our algorithm is designed so that in order to thwart it, corrupted agents must engage in statistically deviant behavior that is detectable by examining the top eigenvector of a certain matrix. This suggests a new paradigm for reliable distributed computing: the design of algorithms that force an attacker into behavior that is statistically deviant and computationally detectable.
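    The spectral detection idea can be illustrated with a toy sketch; the score matrix, the cabal, and the use of plain power iteration here are illustrative assumptions, not the algorithm from the talk. Agents whose published "random" bits are mutually correlated show up as the large components of the top eigenvector of a pairwise-correlation matrix:

```python
def top_eigenvector(M, iters=200):
    """Power iteration: approximate top eigenvector of a symmetric
    nonnegative matrix M, given as a list of rows."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy score matrix: entry (i, j) measures how correlated agents i and j's
# published bits were.  Honest pairs score near 0; a colluding cabal
# {0, 1, 2} scores near 1 with each other.
n, cabal = 8, {0, 1, 2}
M = [[1.0 if (i in cabal and j in cabal) else 0.05 for j in range(n)]
     for i in range(n)]

v = top_eigenvector(M)
suspects = sorted(range(n), key=lambda i: -abs(v[i]))[:len(cabal)]
```

The eigenvector mass concentrates on the correlated block, so sorting agents by their component recovers the cabal in this toy instance.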

  • Piotr Sankowski (University of Warsaw, Poland)

    Online Bipartite Matching in Offline Time

    I will present our (with Bartłomiej Bosek, Dariusz Leniowski and Anna Zych) recent results on the problem of maintaining maximum-size matchings in incremental bipartite graphs (FOCS '14). In this problem a bipartite graph G between n clients and n servers is revealed online. The clients arrive in an arbitrary order and request to be matched to a subset of servers. In our model we allow the clients to switch between servers and want to maximize the matching size between them, i.e., after a client arrives we find an augmenting path from the client to a free server. Our goals in this model are twofold. First, we want to minimize the number of times clients are reallocated between the servers. Second, we want to give fast algorithms that recompute such reallocations.
    As for the number of changes, we propose a greedy algorithm that chooses an augmenting path π that minimizes the maximum number of times each server in π was used by augmenting paths so far. We show that in this algorithm each server has its client reassigned O(√n) times. This gives an O(n√n) bound on the total number of changes, which is progress towards the main open question raised by Chaudhuri et al. (INFOCOM '09), who asked for a proof of an O(n log n) upper bound. Next, we argue that the same bound holds in the decremental case. Moreover, we show incremental and decremental algorithms that maintain a (1−ε)-approximate matching with a total of O(ε⁻¹n) reallocations, for any ε > 0.
    Finally, we address the question of how to efficiently compute the paths given by this greedy algorithm. We show that by introducing proper amortization we can obtain an incremental algorithm that maintains the maximum-size matching in O(√n·m) total time. This matches the running time of one of the fastest static maximum matching algorithms, given by Hopcroft and Karp (SIAM J. Comput. '73). We extend our result to the decremental case, where we give the same total bound on the running time. Additionally, we show O(ε⁻¹m)-time incremental and decremental algorithms that maintain a (1−ε)-approximate matching for any ε > 0. Observe that this bound matches the running time of the fastest approximate static solution as well.
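    The augmenting-path step described above can be sketched in a few lines; this is plain DFS augmentation, and it omits the talk's key rule of choosing, among all augmenting paths, one that minimizes the maximum server usage so far:

```python
def find_augmenting(client, adj, match_server, match_client, seen):
    """DFS for an augmenting path from `client`; flips matched edges along
    the path on success, possibly reallocating earlier clients."""
    for s in adj[client]:
        if s in seen:
            continue
        seen.add(s)
        if match_server.get(s) is None or find_augmenting(
                match_server[s], adj, match_server, match_client, seen):
            match_server[s] = client
            match_client[client] = s
            return True
    return False

# Clients arrive online (hypothetical instance): client 0 can use servers
# {0, 1}, client 1 only server 0.  When client 1 arrives, client 0 is
# reallocated from server 0 to server 1 along the augmenting path.
adj = {0: [0, 1], 1: [0]}
match_server, match_client = {}, {}
for c in [0, 1]:
    find_augmenting(c, adj, match_server, match_client, set())
```

Every reassignment of an already-matched client in this loop is one of the "changes" whose total number the greedy path-selection rule bounds by O(n√n).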

Software & Web Engineering

  • Jiří Barnat (Masaryk University in Brno, Czech Republic)

    Quo Vadis Explicit-State Model Checking

    Model checking has been at the center of interest of the formal verification research community for more than 25 years. Focusing primarily on the well-known state space explosion problem, a considerable number of techniques and methods have been discovered and applied to push further the frontier of systems verifiable with a model checker. Still, the technique as such has not yet matured enough to become a common part of the software development process, and its penetration into the software industry is actually much slower than expected. In this paper we take a closer look at so-called explicit-state model checking, briefly recapitulate recent research achievements in the field, and report on practical experience obtained from using our explicit-state model checker DIVINE. Our goal is to help the reader understand the current position of explicit-state model checking in general practice and the strengths and weaknesses of the explicit-state approach after almost three decades of research. Finally, we suggest some research directions to pursue that could shed some light on the future of this formal verification technique.
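    At its core, explicit-state model checking of a safety property is exhaustive graph search over the state space, storing every state ever visited, which is exactly where the state space explosion bites. A minimal sketch (the toy two-counter system is an illustrative assumption, not related to DIVINE's input languages):

```python
from collections import deque

def explicit_reachability(init, successors, is_error):
    """Core loop of explicit-state safety checking: BFS over the state
    space; the `seen` set of all visited states is the memory bottleneck."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if is_error(s):
            return s                      # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None                           # error state unreachable: property holds

# Toy system: two counters, each incrementable mod 4; "error" if both hit 3.
succ = lambda st: [((st[0] + 1) % 4, st[1]), (st[0], (st[1] + 1) % 4)]
bad = explicit_reachability((0, 0), succ, lambda st: st == (3, 3))
```

The techniques the paper surveys (reduction, compression, parallel and distributed exploration) all attack the size of the `seen` set or the rate at which it is traversed.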

  • Brian Fitzgerald (University of Limerick, Ireland)

    Two’s Company, Three’s a Crowd: A Case Study of Crowdsourcing Software Development

    Crowdsourcing is an emerging and promising approach which involves delegating a variety of tasks to an unknown workforce – a crowd. Crowdsourcing has been applied quite successfully in various contexts – from basic tasks on Amazon Mechanical Turk to solving complex industry problems, e.g. InnoCentive. Companies are increasingly using crowdsourcing to accomplish specific software development tasks. However, very little research exists on this specific topic. Our case study in a global ICT multinational company highlights a number of challenges that arise when crowdsourcing software development. For example, the crowdsourcing development process is essentially a waterfall process, and this eventually has to be integrated with the agile approach used by the company. Crowdsourcing works better for specific software development tasks – those that do not have complex interdependencies. The development cost was much greater than originally expected, the overhead in terms of company effort to prepare specifications and answer crowdsourcing community queries was much greater, and the time-scale to complete contests, review submissions and resolve quality issues was significant. Finally, quality issues were pushed later in the lifecycle, given the lengthy process necessary to identify quality issues and have them resolved by the community. Given the emphasis in software engineering on identifying bugs as early as possible, this is quite problematic.

Data, Information and Knowledge Engineering

  • Rainer Manthey (University of Bonn, Germany)

    Back to the Future – Should SQL surrender to SPARQL?

    We will take a closer look at the essential differences between two of the most prominent database query languages today, SPARQL and SQL, and at their underlying data models, RDF and the relational model (RM), respectively. There is an enormous “hype” around SPARQL/RDF at the moment, claiming all kinds of advantages of these “newcomers” over the long-established SQL/RM setting. We discover that many of these claims are not justified, at least not as far as data representation and querying are concerned. Our conclusion will be that SQL/RM is well able to serve the same purpose as SPARQL/RDF if treated fairly and presented properly. We omit all aspects of navigation over distributed or federated data resources, though, as SQL isn’t (yet) made for this task.
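    The core of the claim is easy to demonstrate: an RDF basic graph pattern maps to self-joins over a generic relational triple table. A minimal sketch using Python's built-in sqlite3 (the schema, the data, and the SPARQL query in the comment are illustrative assumptions, not examples from the talk):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triple (s TEXT, p TEXT, o TEXT)")
con.executemany("INSERT INTO triple VALUES (?, ?, ?)", [
    ("alice", "knows", "bob"),
    ("bob",   "knows", "carol"),
    ("alice", "age",   "30"),
])

# SPARQL:  SELECT ?x ?z WHERE { ?x :knows ?y . ?y :knows ?z }
# SQL over a triple table: one self-join per triple pattern, joined on the
# shared variable ?y (t1.o = t2.s).
rows = con.execute("""
    SELECT t1.s, t2.o
    FROM triple t1 JOIN triple t2
      ON t1.o = t2.s
    WHERE t1.p = 'knows' AND t2.p = 'knows'
""").fetchall()
```

Each additional triple pattern in the SPARQL query adds one more join over the same table, which is precisely the translation many SPARQL engines perform internally.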

  • Barbara Catania (University of Genoa, Italy)

    Adaptively Approximate Techniques in Distributed Architectures

    The wealth of information generated by users interacting with the network and its applications is often under-utilized, due to complications in accessing heterogeneous and dynamic data and in retrieving relevant information from sources with possibly unknown formats and structures. Processing complex requests on such information sources is thus costly, while still not guaranteeing user satisfaction. In such environments, requests are often relaxed and query processing is forced to be adaptive and approximate, either to cope with limited processing resources (QoS-oriented techniques), possibly at the price of sacrificing result quality, or to cope with limited data knowledge and data heterogeneity (QoD-oriented techniques), with the aim of improving the quality of results. While both kinds of approximation techniques have been proposed, most adaptive solutions are QoS-oriented. Additionally, techniques which apply a QoD-oriented approximation in a QoD-oriented adaptive way (called adaptively approximate techniques), though demonstrably useful for finding the right compromise between precise and approximate computations, have been largely neglected. In this talk, after presenting and classifying several approximate and/or adaptive query processing approaches proposed for different distributed architectures, we show, with some concrete examples, the benefits of using adaptively approximate techniques. We then present the results of our ongoing research in the context of data stream and geo-social data management.
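    The flavor of a QoS-oriented adaptive approximation can be conveyed with a toy sketch; the aggregation operator, the cost model, and the sampling fallback below are illustrative assumptions, not techniques from the talk. The operator processes tuples exactly while a per-window resource budget lasts, then adapts by switching to sampling and scaling the estimate:

```python
import random

def adaptive_query(stream, budget, exact_cost=1.0, sample_rate=0.1):
    """Toy adaptively-approximate SUM: exact processing while the budget
    lasts, then a sampling fallback (QoS-style approximation) whose
    contribution is scaled by 1/sample_rate to stay unbiased."""
    total, spent = 0.0, 0.0
    for x in stream:
        if spent + exact_cost <= budget:
            total += x              # exact path: full per-tuple cost
            spent += exact_cost
        elif random.random() < sample_rate:
            total += x / sample_rate  # approximate path: unbiased estimate
    return total
```

A QoD-oriented counterpart would instead adapt what counts as a match (e.g. relaxing predicates over heterogeneous sources); the adaptively approximate techniques of the talk decide at runtime when such relaxation pays off.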
