We present a unified view of sequential algorithms for many pattern matching problems, using a bit-wise simulation of a non-deterministic finite automaton (NFA) built from the pattern, which takes the text as input. This approach yields very fast practical algorithms with good complexity for small patterns on a RAM machine with word length O(log n).
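To make the idea concrete, here is a minimal sketch (not the authors' exact formulation) of one well-known instance of this approach, the Shift-And variant: each bit of a machine word marks an active NFA state, i.e. a prefix of the pattern matched so far, so a single word operation advances all states at once.

    # Bit-parallel simulation of the pattern NFA (Shift-And variant).
    def shift_and_search(pattern, text):
        # Return the starting positions of all occurrences.
        m = len(pattern)
        B = {}                        # B[c]: bit i set iff pattern[i] == c
        for i, c in enumerate(pattern):
            B[c] = B.get(c, 0) | (1 << i)
        state, accept, hits = 0, 1 << (m - 1), []
        for pos, c in enumerate(text):
            # Advance every active state, re-activate the initial state,
            # and keep only the states whose transition matches c.
            state = ((state << 1) | 1) & B.get(c, 0)
            if state & accept:
                hits.append(pos - m + 1)
        return hits

    print(shift_and_search("ana", "banana"))    # [1, 3]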
Ricardo Baeza-Yates received his Ph.D. in CS from the U. of Waterloo, Canada, in 1989. During 1993, he received the Organization of American States award for young researchers in exact sciences. Currently he is a full professor at the CS department of the University of Chile. His research interests include algorithms and data structures, text retrieval, graphical interfaces and visualization applied to databases.
Contributions from professional organisations, like IFIP, the International Federation for Information Processing, to the advances in information technology.
In this talk we consider algorithms that search game trees, defined by full-information zero-sum games like chess, checkers and the like. In such a tree the nodes are board positions, and there is an edge between two nodes if a move from the position corresponding to the source node results in the position corresponding to the target node. Almost every game tree search algorithm computes the so-called minimax value of the root position, which is defined as the value of the end position that the first player will obtain under optimal play by both adversaries.
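As a concrete reference point, the plain minimax computation can be sketched in a few lines (a toy tree representation of our own, not any particular algorithm from the talk): a node is either a leaf holding the value of an end position, or a list of child nodes.

    def minimax(node, maximizing=True):
        if not isinstance(node, list):            # leaf: an end position
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # The first player is guaranteed the value of the root under
    # optimal play by both sides.
    tree = [[3, 5], [2, [9, 1]]]                  # hypothetical game tree
    print(minimax(tree))                          # 3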
Arie de Bruin graduated in 1977 at the Free University in Amsterdam in Mathematics and Computer Science. He has been PhD student at the Mathematical Centre (nowadays called CWI) in Amsterdam from that time until 1981, at which time he became employed at the Erasmus University in Rotterdam, nowadays as a full professor. His current research interests are, besides semantics, combinatorial optimization, game tree search, and language design.
This presentation will begin by reviewing recent research into flexible system software with an emphasis on systems that have employed object-oriented approaches such as the use of frameworks or meta-object protocols. We will then describe how these techniques have been employed in the Tigger project at Trinity College to develop a family of object-support operating systems intended for use in a variety of distributed applications ranging from embedded soft real-time systems to concurrent engineering environments. Some of the novel features of Tigger, including its use of framework technology, a distributed shared memory (DSM) system supporting a new approach to consistency maintenance, and the use of meta-object protocols to allow system behaviour to be altered at run-time, will be described.
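As an illustration of the general meta-object-protocol idea (this is not Tigger's actual interface, just a minimal sketch of our own): a metaobject intercepts operations on a base object, so replacing the metaobject at run time alters behaviour without touching the object's code.

    class MetaObject:
        # Default metaobject: dispatch method calls unchanged.
        def invoke(self, obj, name, *args):
            return getattr(type(obj), name)(obj, *args)

    class TracingMeta(MetaObject):
        # A metaobject installed at run time to add tracing.
        def invoke(self, obj, name, *args):
            print("calling", name, args)
            return super().invoke(obj, name, *args)

    class Account:
        meta = MetaObject()
        def __init__(self):
            self.balance = 0
        def deposit(self, amount):
            self.balance += amount

    def call(obj, name, *args):                  # all calls go via the MOP
        return obj.meta.invoke(obj, name, *args)

    acct = Account()
    call(acct, "deposit", 10)                    # default behaviour
    Account.meta = TracingMeta()                 # alter behaviour at run time
    call(acct, "deposit", 5)                     # now traced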
Vinny Cahill is a lecturer in Computer Science at Trinity College. As a member of the Distributed Systems Group, he has been conducting research into system and language support for O-O distributed programming for a number of years. He is currently applying this research to areas such as the development of distributed virtual worlds and concurrent engineering environments.
One of the strengths of structured methods in software engineering is their use of diagrams to visualize the structure of a system. By contrast, formal notations do not provide a good medium for communication with the non-mathematician. The aim of the work reported here is to combine formality with visual appeal, to alleviate the communication problem with the non-specialist, and to accelerate the uptake of formal methods into industry. The inadequacies of traditional Entity-Relationship, Data-Flow and State-Transition diagrams are discussed in relation to VDM, and two new kinds of diagram are proposed as more appropriate to the underlying expressiveness of VDM.
With undergraduate and postgraduate degrees in Computing Science from Imperial College, London, Jeremy Dick has been involved in academic and industrial research into mathematically formal methods for 15 years. He has recently spent 4 years working for Bull Information Systems in Paris and London, managing various development projects, and now works for Oxford-based B-Core (UK) Ltd., a formal methods CASE tool supplier. Convinced of the benefits of formal methods, he is keenly interested in promoting their uptake in industry.
The challenge of a new, fundamental approach to building visualization systems currently finds much resonance in the computer science community. The rendering pipeline used as a fundamental concept in computer graphics systems is found to be inadequate for visualization systems. The need for optimizing cost factors, such as storage size (large search spaces are needed) or the time needed for human decision-making based on large, complex information structures, has to be recognized. Partial answers are being found in the use of new paradigms, such as the paradigm of information workspaces; in the use of intelligent agents to apply constraints to the mapping process; or in the formulation of hardware requirements to support adequate user interaction.
Gitta Kienegger-Domik is Professor at the University of Paderborn, Department of Mathematics and Computer Science. Her M.Sc. at the Graz University of Technology (Austria) in 1981 was followed by a Ph.D. at the same university in 1985. Her research interests are in the area of visualization concepts and techniques, user interface design, and scientific data analysis.
The goal of the Arias project is to provide extensions to Unix that support distributed environments. These extensions should at the same time allow distributed applications to share data, allow protection of shared data, and allow data to be stored permanently (i.e., on secondary storage). The Arias system builds on a single shared virtual address space, in which each object is named by the same virtual address on all the machines in the system. Data is replicated on all sites that share it, and coherency is guaranteed on the basis of zones (consecutive ranges of bytes), according to a coherency protocol. Protection is based on the notion of capabilities and protection domains.
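The following toy sketch (not the actual Arias protocol; the names and simplifications are ours) illustrates one common style of zone-based coherency, write-invalidate: a write invalidates the other replicas of a zone, and a stale reader re-fetches from a valid copy.

    class Zone:
        def __init__(self, size):
            self.data, self.valid = bytearray(size), True

    class Site:
        def __init__(self, name):
            self.name, self.zones = name, {}

    class Directory:
        # Tracks which sites replicate each zone; keeps replicas coherent.
        def __init__(self):
            self.sites = {}                      # zone id -> set of sites

        def attach(self, site, zid, size=16):
            self.sites.setdefault(zid, set()).add(site)
            site.zones[zid] = Zone(size)

        def write(self, writer, zid, offset, data):
            for s in self.sites[zid]:            # invalidate other replicas
                if s is not writer:
                    s.zones[zid].valid = False
            z = writer.zones[zid]
            z.data[offset:offset + len(data)] = data
            z.valid = True

        def read(self, reader, zid):
            z = reader.zones[zid]
            if not z.valid:                      # re-fetch a valid replica
                owner = next(s for s in self.sites[zid] if s.zones[zid].valid)
                z.data[:] = owner.zones[zid].data
                z.valid = True
            return bytes(z.data)

    d, a, b = Directory(), Site("A"), Site("B")
    d.attach(a, 1); d.attach(b, 1)
    d.write(a, 1, 0, b"hello")
    print(d.read(b, 1)[:5])                      # b'hello'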
Daniel Hagimont is ``Charge de Recherche'' (research scientist) at INRIA Rhone-Alpes (Grenoble). He received his PhD from Institut National Polytechnique de Grenoble in 1993. In 1993-94, he held a post-doctoral position at the University of British Columbia (Vancouver, Canada). He is currently a member of the Sirac project at INRIA, Grenoble, where he leads a group working on distributed shared memory.
This will be an informal talk about the graphical specification language of statecharts, and the related STATEMATE system. Some of the issues that arose in developing the language and the tool will be discussed from both personal and technical points of view. A number of research areas that have evolved from this work in the 12 years since the language was proposed will be discussed. Time permitting, recent work on a version of statecharts tailored for object-oriented development will be described too.
David Harel is the William Sussman Professor of Mathematics at the Weizmann Institute in Israel, and was chairman of its Dept. of Applied Math. and Computer Science from 1989 to 1995. He received his PhD from MIT in 1978. His research interests are in computability and complexity theory, logics of programs, theory of databases, automata theory, visual languages and systems engineering.
Advances in the use of automated decision-theoretic reasoning to enhance the human-computer interface will be presented. Following a brief survey of research on Bayesian networks and influence diagrams, principles of embedded decision-theoretic agents will be discussed. Key concepts will be highlighted within the contexts of work on the Vista and Lumiere projects. The Vista project centered on building an intelligent interface to assist NASA Mission Control flight engineers with managing the complexity of information about Space Shuttle propulsion systems in time-critical contexts. Moving from aerospace to personal computing, related work on the Lumiere project at Microsoft Research will be presented. The goal of Lumiere is to develop more responsive interfaces for personal computers.
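As a minimal, self-contained illustration of the underlying machinery (the numbers are made up; this is not the Vista or Lumiere model), here is the simplest possible Bayesian-network inference, computing the posterior probability of a fault given an alarm by Bayes' rule:

    p_fault = 0.01                 # prior probability of a fault
    p_alarm_given_fault = 0.95     # sensor sensitivity
    p_alarm_given_ok = 0.05        # false-alarm rate

    p_alarm = (p_alarm_given_fault * p_fault
               + p_alarm_given_ok * (1 - p_fault))
    p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
    print(round(p_fault_given_alarm, 3))         # 0.161

An influence diagram extends such a network with decision and utility nodes, so an embedded agent can choose the action with the highest expected utility rather than merely report a posterior.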
Eric Horvitz is a Senior Researcher at Microsoft Research and an Affiliate Associate Professor at the University of Washington. He received a PhD and MD at Stanford University. His research interests include decision-theoretic approaches to problem solving, particularly for inference and action under limited time or memory, and applications of probability and utility in operating systems, user interfaces, information retrieval, and medicine.
Maintenance, migration and renovation are becoming the predominant concerns of large users of software in business and industry. How can problems in these areas be addressed from a software engineering perspective? First, the need for system renovation will be explained and an outline of the problems and opportunities involved will be given. Next, an incremental renovation strategy that is based on two complementary technologies will be sketched: abstract datatypes as a basis for generic program manipulation tools and processes as the foundation for coordination languages.
Paul Klint heads the Programming Environments research group at CWI, and is Professor in Computer Science at the University of Amsterdam. He received a PhD from Eindhoven University of Technology (1982). His current research interests include software coordination architectures, system renovation, language prototyping, and the application of formal specification techniques to industry-sized problems.
In many common protocols, faults are fixed GLOBALLY, i.e., involving the entire system. Clearly, using global measures to detect and correct errors is becoming less and less realistic in today's fast-growing networks. However, the large internal diversity of each such network (for example, the Internet) makes faults ever more unavoidable. Thus, it is essential to develop scalable fault-handling tools. We discuss research efforts aimed at taking advantage of the observation that faults are very often of an extremely LOCAL nature, and involve only a small number of hosts. We mention results about Self Stabilization, Local Checking, Fault Local Mending, and other paradigms.
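A classical concrete instance of local fault handling is Dijkstra's self-stabilizing token ring (1974), sketched below: n machines hold counters modulo K > n, and from ANY corrupted starting state the ring converges to a legitimate state with exactly one privilege (the "token"), with no global reset. (At least one machine is always privileged, so the protocol never deadlocks.)

    import random

    def privileged(S, i):                        # does machine i hold a token?
        return S[0] == S[-1] if i == 0 else S[i] != S[i - 1]

    def move(S, i, K):
        if i == 0:
            S[0] = (S[0] + 1) % K                # bottom machine advances
        else:
            S[i] = S[i - 1]                      # others copy their predecessor

    n, K = 5, 7                                  # K > n guarantees convergence
    S = [random.randrange(K) for _ in range(n)]  # arbitrary corrupted state
    for _ in range(200):
        movers = [i for i in range(n) if privileged(S, i)]
        move(S, random.choice(movers), K)        # a privileged machine moves

    print(sum(privileged(S, i) for i in range(n)))   # 1: exactly one token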
Shay Kutten (PhD in CS 1987, The Technion, Israel) is now on leave of absence at the Technion. He has been a researcher at the IBM T.J. Watson Research Center, where he managed the Networks Architecture and Algorithms group and led the network security project. For his contributions to IBM and its products in the areas of Broad Band Networks and Networks Security, he twice received the IBM Outstanding Innovation Award. He is an editor of ACM journals, and has served on program committees for several conferences.
The lecture will give an overall presentation of scientific computing, with focus on the following aspects: (i) mathematical and algorithmic foundations, and the complexity of the problems solved; (ii) software engineering issues, especially code optimization and fine tuning, support for very long program development cycles, portability and efficiency, and debugging and validation (the last two issues are of utmost importance when programs are used as a replacement for experiments in the application domain); (iii) distributed and parallel computing requirements. All the above-mentioned aspects of HPC will be presented using examples from computational chemistry.
Ludek Matyska currently serves as Research Director of the Institute of Computer Science at Masaryk University, Brno. His research interests are in the area of constraint logic programming and recently in distributed high performance computing with special emphasis on fault tolerance and robustness.
The talk covers two basic communication issues in parallel computing: routing in networks, and data organization in distributed memory machines. The first part, Routing in Networks, surveys the current research on efficient routing protocols for store-and-forward and wormhole routing. In particular, universal routing protocols that work well for wide classes of networks are considered. The second part, Data Organization in Distributed Memory Machines, surveys hashing techniques for simulating shared memory, using redundant representation of the data.
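As a toy illustration of the second part (our own example, not a specific result from the talk): shared memory addresses are distributed over p memory modules by hash functions, here redundantly with two copies per address, so each access can go to the currently less loaded module. A cryptographic hash stands in for the universal hash families analyzed in the literature.

    import hashlib

    def module(addr, copy, p):
        # Hash an address (plus copy index) to one of p memory modules.
        h = hashlib.sha256(f"{addr}:{copy}".encode()).digest()
        return int.from_bytes(h[:8], "big") % p

    p, load = 8, [0] * 8
    for addr in range(1000):                     # simulate 1000 accesses
        m0, m1 = module(addr, 0, p), module(addr, 1, p)
        target = m0 if load[m0] <= load[m1] else m1
        load[target] += 1                        # pick the less loaded copy

    print(load)    # accesses spread nearly evenly over the modules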
Friedhelm Meyer auf der Heide finished his PhD in 1981 in Bielefeld, and his Habilitation in 1986 in Frankfurt. From 1980 to 1986 he was an assistant at the CS department in Frankfurt, with one year on leave at the IBM Almaden Laboratory in San Jose. From 1986 to 1989 he was a professor at the CS department in Dortmund; since then he has been a full professor at the CS department in Paderborn. His research interests include parallel algorithms and architectures, randomized algorithms, and lower bound techniques.
In recent years many different approaches to motion planning have been suggested. In this talk we describe a new, probabilistic paradigm for solving the motion planning problem, which proves to be very time-efficient for a great variety of robots. An advantage over existing methods is its generality: there are only a few components that are robot-specific, and these are, as will be pointed out, easy to define and implement. Furthermore, it is a learning approach; that is, it builds data structures that, once constructed, can be used for retrieving arbitrary paths quasi-instantaneously. The power of the method will be demonstrated by both theoretical and experimental results.
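One well-known realization of such a probabilistic, learning paradigm is the probabilistic roadmap: sample collision-free configurations and connect nearby ones with a simple local planner (learning phase), then answer path queries on the resulting graph (query phase). A compact sketch for a point robot among disc obstacles (our toy setting; only free and local_path_free are robot-specific):

    import math, random

    obstacles = [((5, 5), 2.0), ((2, 7), 1.0)]    # discs: (center, radius)

    def free(q):                                  # robot-specific test
        return all(math.dist(q, c) > r for c, r in obstacles)

    def local_path_free(a, b, steps=40):          # robot-specific planner
        return all(free((a[0] + (b[0] - a[0]) * t / steps,
                         a[1] + (b[1] - a[1]) * t / steps))
                   for t in range(steps + 1))

    # Learning phase: sample free configurations, connect nearby pairs.
    nodes = []
    while len(nodes) < 200:
        q = (random.uniform(0, 10), random.uniform(0, 10))
        if free(q):
            nodes.append(q)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) < 1.5 and \
               local_path_free(nodes[i], nodes[j]):
                edges[i].append(j); edges[j].append(i)

    # Query phase: link start and goal into the roadmap, then search.
    def reachable(start, goal):
        nodes.extend([start, goal])
        s, g = len(nodes) - 2, len(nodes) - 1
        for k in (s, g):
            edges[k] = [i for i in range(s)
                        if math.dist(nodes[k], nodes[i]) < 1.5
                        and local_path_free(nodes[k], nodes[i])]
            for i in edges[k]:
                edges[i].append(k)
        frontier, seen = [s], {s}
        while frontier:                           # breadth-first search
            u = frontier.pop(0)
            if u == g:
                return True
            for v in edges[u]:
                if v not in seen:
                    seen.add(v); frontier.append(v)
        return False

    print(reachable((0.5, 0.5), (9.5, 9.5)))      # True, with high probability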
Mark Overmars received his PhD in 1983 from the Department of Computer Science, Utrecht University. He is currently a full professor at the same department. His early work was on dynamic data structures and computational geometry. His current research still falls in the area of computational geometry but with more focus on applications in such areas as computer graphics, robotics and computer vision.
The main intention of this contribution is to provide readers with fundamental knowledge of the Common Object Request Broker Architecture (CORBA), with an emphasis upon one of its basic components: the CORBA Object Services (COSS). A brief history of CORBA development from CORBA 1.1 to CORBA 2.0, which supports interoperability features, is given. Key concepts specified in CORBA 1.1 and CORBA 2.0 are reviewed: the Interface Definition Language (IDL), client stubs, server skeletons, the Basic Object Adapter (BOA), the Internet Inter-ORB Protocol (IIOP), request-level bridging and inline bridging. Based on the OMG series of Requests for Proposals (RFPs), the evolution of the growing family of CORBA Object Services is analyzed and commented upon.
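To fix the division of labour between stub and skeleton, here is a deliberately naive sketch (not the OMG-specified interfaces; a JSON string stands in for IIOP on the wire): the stub marshals an invocation, the skeleton unmarshals it and dispatches to the servant.

    import json

    class CalculatorImpl:                        # the servant (server side)
        def add(self, a, b):
            return a + b

    class CalculatorSkeleton:
        # Unmarshals requests and dispatches to the implementation.
        def __init__(self, servant):
            self.servant = servant
        def handle(self, wire):
            req = json.loads(wire)
            result = getattr(self.servant, req["op"])(*req["args"])
            return json.dumps({"result": result})

    class CalculatorStub:
        # Marshals invocations; makes the remote object look local.
        def __init__(self, transport):
            self.transport = transport
        def add(self, a, b):
            wire = json.dumps({"op": "add", "args": [a, b]})
            return json.loads(self.transport(wire))["result"]

    skeleton = CalculatorSkeleton(CalculatorImpl())
    stub = CalculatorStub(skeleton.handle)       # in-process stand-in for IIOP
    print(stub.add(2, 3))                        # 5

In CORBA, both sides would be generated from a common IDL interface declaration rather than written by hand.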
František Plášil is an associate professor of Computer Science and vice-chair of the Department of Software Engineering at Charles University, Prague. From 1989 to 1991 he was a visiting professor in the Department of Mathematics and Computer Science, University of Denver. His recent research interests include programming languages and compiler construction.
Jan Kleindienst works at the Institute of Computer Science, Czech Academy of Sciences, Prague, and is pursuing his PhD at Charles University, Prague.
Petr Tůma is a PhD student in the Department of Software Engineering at Charles University in Prague.
The notion of attribute is presented as a universal construct for describing conceptual and database models. Unifying tools based on the typed lambda calculus make it possible to define appropriate notions for expressing the equivalence of schemes in different models. Moreover, the expressiveness of conceptual and data models is defined and compared to recent approaches. A taxonomy of conceptual and database models based on these notions is proposed, and particular models are compared with respect to their expressiveness. Finally, a method for integrating heterogeneous environments is designed. Practical consequences of the theory developed could help in constructing so-called co-operative information systems, as well as in transforming user requirements between two different kinds of information environment.
Jaroslav Pokorný received the Ph.D. degree in theoretical cybernetics from the Charles University, Prague, Czechoslovakia, in 1984. Currently he is an associate professor of the Faculty of Mathematics and Physics at Charles University and head of its Department of Software Engineering. His research interests include database design, information retrieval and software engineering methods.
Modern businesses are increasingly under pressure to improve competitiveness. In recent years, it has been realised that a significant way to improve effectiveness is to identify and optimise the key processes that organisations undertake to achieve their business objectives. One element that is often important is IT support, and one important aspect of modern IT support is business case processing (often called workflow). This talk will cover: why process is important to organisations; a survey of available business case processing/workflow systems, highlighting the most significant features; and a description of the results of a recent European project to develop a state-of-the-art business case processing system.
Ken Robinson runs the User Interface Design Group at the Rutherford Appleton Laboratory in the UK. Originally trained as a physicist, he soon gravitated to computing and has worked in a number of areas. For the last 15 years, he has been working on user interfaces for highly interactive systems, with an emphasis on real-world applications. Recent projects have included both formal and informal business case processing/workflow systems.
While the central role of learning in cognition is acknowledged by many, most lines of research nevertheless study reasoning phenomena separately from learning phenomena. We present a new framework for the study of reasoning, in which a learning component has a principal role. The Learning (in order) to Reason approach combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In particular, we show cases where, simultaneously, (1) reasoning from a given representation of the world is intractable, (2) learning representations of the world is intractable, but (3) directly learning to reason about the world is feasible.
Dan Roth is interested in theories of learning and inference, and in particular in developing an understanding of how learning supports other high-level cognitive tasks. He received his Ph.D. in Computer Science from Harvard University in 1994. From 1981 until 1990 he was a software engineer and researcher for the Israeli Defense Forces R&D Unit, and he is now on the faculty at the Weizmann Institute of Science, Israel.
State-of-the-art data mining technology offers a toolbox of algorithms and techniques developed for precisely defined applications. Using data mining technology therefore requires intimate knowledge of this field. The KESO project will develop a data mining system that can be applied to a wide variety of data mining problems by domain experts rather than data mining specialists. In the first part I will present an overview of the current state of the art in data mining. The second part presents an overview of the KESO project: what are the concepts underlying its ``inductive query language'', how is it related to current algorithms, and how can it be implemented efficiently?
Arno Siebes received his Ph.D. in Computer Science in 1990 from Twente University. His research interests are the semantics of object-oriented databases, the integration of database schemas, active databases, and data mining. Currently he leads both the data mining project at CWI and the ESPRIT-IV project KESO.
Kolmogorov complexity is a modern notion of randomness dealing with the quantity of information in individual objects. It was proposed by A.N. Kolmogorov to quantify the randomness of individual objects in an objective and absolute manner; accordingly, the Kolmogorov complexity of an object is a measure of the absolute information content of that individual object. Out of it a new mathematical proof technique was developed, now known as the `incompressibility method': a basic general technique on a par with the `pigeonhole' argument, the `counting method' or the `probabilistic method'. Applications of Kolmogorov complexity by us and others have been given in a plethora of areas.
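In the standard formulation (sketched here; U is a fixed universal machine and |p| the length of a program p):

    % Kolmogorov complexity of a string x:
    K(x) = \min \{\, |p| : U(p) = x \,\}

    % The counting argument behind the incompressibility method:
    % there are 2^n binary strings of length n, but fewer than
    % 2^{n-c} programs of length less than n - c, so a fraction of
    % at least 1 - 2^{-c} of all strings of length n satisfies
    K(x) \ge n - c .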
Paul Vitanyi heads the Algorithms and Complexity research group at CWI, and is Professor of Computer Science at the University of Amsterdam. He is a member of the Dutch Institute for Programming research and Algorithmics (IPA), the Institute for Logic, Language and Computation (ILLC), and the Dutch Graduate School in Logic (OzsL).
Iterative solution methods are very attractive for classes of very large linear systems for which direct approaches are impractical because of CPU and memory requirements. We will present an overview of a number of related modern iterative methods for the solution of unsymmetric linear systems of equations. Some more attention will be given to modern hybrid methods, such as Bi-CGSTAB, Bi-CGstab(l), and GMRESR. Iterative methods are often used in combination with so-called preconditioning operators (approximations to the inverse of the operator of the system to be solved) in order to speed up convergence. In our lectures we will also discuss general implementation aspects of iterative methods.
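Bi-CGSTAB itself is too long to reproduce here, but the role of a preconditioner can be shown on the simplest possible iteration (a sketch on a made-up system, not one of the hybrid methods above): Richardson iteration x <- x + M^{-1}(b - Ax) with the Jacobi preconditioner M = diag(A).

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [2.0, 5.0, 1.0],
                  [0.0, 1.0, 3.0]])        # small unsymmetric system
    b = np.array([1.0, 2.0, 3.0])

    M_inv = 1.0 / np.diag(A)               # Jacobi preconditioner
    x = np.zeros(3)
    for k in range(100):
        r = b - A @ x                       # residual
        if np.linalg.norm(r) < 1e-10:
            break
        x = x + M_inv * r                   # preconditioned update

    print(k, np.linalg.norm(b - A @ x))     # converges in a few dozen steps

Without the preconditioner (M = I) this simple iteration diverges for this A, since all eigenvalues of A exceed 2; a good approximation to the inverse is what makes the difference.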
Dr. Henk A. van der Vorst is a professor in numerical analysis and computational science in the Mathematics Department of Utrecht University in the Netherlands. He received a Ph.D. in applied mathematics from the University of Utrecht in 1982. His current research interests include iterative solvers for linear systems, large sparse eigenproblems, overdetermined systems, and the design of algorithms for parallel computations.