SOFSEM 2017 Tutorials
Andrew Butterfield (University of Dublin, Ireland)
Andrew Butterfield is a Professor at Trinity College Dublin, in the School of Computer Science and Statistics, with a Ph.D. in Computer Science on the topic of Silicon Compilation (TCD 1990). His research interests switched from algorithms for hardware generation to the exploration of formal techniques for hardware and software modelling. Since 2003 he has been an active researcher in UTP, giving formal semantics to hardware compilation languages like Handel-C, and exploring timed variants of Circus, a language that fuses CSP and Z, with a UTP semantics. His recent work looks at unifying various techniques for reasoning about concurrency in the presence of shared global mutable state (Owicki-Gries, Separation Logic, Rely-Guarantee, ...). He is an active member of the UTP community, having served on the program committee of every UTP symposium to date, and chaired the 2nd (UTP 2008).
Unifying Theories of Programming: Principles, Theories and Tools
Unifying Theories of Programming (UTP) is a framework for unifying the formal semantics of a wide range of programming, specification and modelling languages. It is based on an encoding of the relational calculus in predicate logic, coupled with a number of organising principles: programs-as-predicates; observational variables; pre/post-state relations; refinement; healthiness conditions; and Galois connections to link theories. This tutorial will introduce these principles, illustrating them with descriptions of some key theories that have been constructed using UTP, covering both sequential and concurrent systems. We will also introduce some of the tool support being developed, spanning the spectrum from rapid "theory-prototyping" to full-blown mechanised formal proof.
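The programs-as-predicates and refinement principles can be illustrated with a minimal sketch (this is not UTP tooling; the state space, programs and names are hypothetical): a program is a relation between before- and after-observations, and one predicate refines another when the implication holds for all observation pairs.

```python
# Illustrative sketch: programs as relations over (before, after)
# observations, refinement as universally quantified implication,
# checked exhaustively over a tiny finite state space.

STATES = range(4)  # a single observed variable x ranging over {0,1,2,3}

def spec(x, x_):
    """Specification predicate: x must not decrease."""
    return x_ >= x

def prog(x, x_):
    """Candidate program: increment x, clipped at the maximum value."""
    return x_ == min(x + 1, 3)

def refines(p, s):
    """p refines s iff for all x, x': p(x, x') implies s(x, x')."""
    return all(s(x, x_) for x in STATES for x_ in STATES if p(x, x_))

print(refines(prog, spec))  # the increment program satisfies the spec
```

A decrementing program (e.g. `x' = max(x - 1, 0)`) would fail this check, since it relates states where `x' < x`.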
Axel Legay (Rennes University, INRIA, France) and
Louis-Marie Traonouez (Rennes University, INRIA, France)
Axel Legay is a permanent researcher at Inria Rennes (France) and an Industry Professor at Aalborg University (Denmark). He obtained his Ph.D. from the University of Liège (Belgium, 2007) and a habilitation from the University of Rennes (France, 2015). He has received several awards, including the IBM prize for the best thesis in Computer Science (2008) and the Velux Prize (2014). Legay is the co-author of more than 250 publications in premier conferences and journals. He leads the Inria efforts on security and Systems of Systems. His main research interests are in developing formal specification and verification techniques for Software Engineering. His original contributions are in the verification of infinite-state systems, for which he offered one of the first generic approaches. He has also made significant contributions in the areas of modal transition systems and product line engineering. Legay is best known for his contributions to Statistical Model Checking, for which he proposed algorithmic extensions for rare events and non-determinism. His tool Plasma is used by both academics and engineers.
Louis-Marie Traonouez obtained his Ph.D. in 2009 from the University of Nantes. He has been a postdoc at the University of Florence and in the Danish MT-LAB research project. Since 2012 he has been working at Inria Rennes in the TAMIS group. His research interests concern formal verification and statistical model checking. He has worked in particular on parameter synthesis for time Petri nets, robustness of timed automata, and statistical model checking of rare events and non-deterministic systems. He contributes to the development of Plasma Lab, a platform for statistical model checking.
Plasma Lab Statistical Model Checker: Architecture, Usage and Extension
Plasma Lab is a modular statistical model checking (SMC) platform that supports multiple SMC algorithms and multiple modelling and query languages. Plasma Lab may be used as a stand-alone tool with a graphical development environment, or invoked from the command line for high-performance scripting applications. This tutorial first presents an overview of Plasma Lab's architecture, modelling languages and algorithms. We then demonstrate how to use the tool to design models and perform various experiments. Finally, we show how to develop a new plugin for Plasma Lab.
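The core idea behind SMC can be conveyed with a minimal Monte Carlo sketch (this is an illustration only, not the Plasma Lab API): simulate the stochastic model many times, check a property on each trace, and estimate the probability from the hit rate. The random-walk model, threshold and sample count below are all hypothetical.

```python
# Illustrative Monte Carlo sketch of the core SMC idea: estimate the
# probability that a bounded random walk ever reaches a threshold,
# from N independent simulation runs.
import random

def simulate(steps=20, threshold=5, seed=None):
    """One simulation run; returns True iff the property
    'eventually pos >= threshold' holds on this trace."""
    rng = random.Random(seed)
    pos = 0
    for _ in range(steps):
        pos += rng.choice([-1, 1])
        if pos >= threshold:
            return True
    return False

def estimate(n=10_000):
    """Crude point estimator of the satisfaction probability."""
    hits = sum(simulate(seed=i) for i in range(n))
    return hits / n

p = estimate()
print(f"estimated probability: {p:.2f}")
```

A real SMC tool such as Plasma Lab adds, on top of this scheme, confidence bounds on the estimate, hypothesis testing, and specialised algorithms for rare events and non-determinism.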
Cristina Seceleanu (Mälardalen University, Sweden)
Cristina Seceleanu is Associate Professor at Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems Division, Västerås, Sweden, and leader of the Formal Modeling and Analysis of Embedded Systems research group. She holds an M.Sc. in Electronics (Polytechnic University of Bucharest, Romania, 1993) and a Ph.D. in Computer Science (Turku Centre for Computer Science, Finland, 2005). Her research focuses on developing formal models and verification techniques for predictable real-time and adaptive systems. She has served as organizer, co-organizer, and chair of relevant conferences and workshops in computer engineering, and is a member of the Editorial Boards of Frontiers in ICT: Formal Methods and the International Journal of Embedded and Real-Time Communication Systems.
Verification and Test-case Generation from Architectural Models of Automotive Systems
Architectural description languages, e.g. EAST-ADL, provide a comprehensive approach to describing complex embedded systems as standardized models that encapsulate structural, functional, and extra-functional information. The aim of such models is to enable the system's documentation, design, early verification, and even code implementation. In this tutorial, we show how such models can be taken one step further, as the basis of an analysis methodology that leads to eventual code verification for embedded systems. Our proposed methodology relies on automated model-based formal verification and test-case generation for both functional and timing requirements, taking the EAST-ADL architectural model of the embedded system as input.
Stefan Naujokat (TU Dortmund University, Germany),
Johannes Neubauer (TU Dortmund University, Germany) and
Bernhard Steffen (TU Dortmund University, Germany)
CINCO: A Language Workbench for Domain-Specific Graphical Modeling Environments
Domain-specific tool support is key to opening development responsibility to a wider public without dedicated programming knowledge. This is important, for example, in areas like business process modelling and scientific workflows. The intention behind the tools and methods presented in this lecture is to provide domain experts with appropriate (non-programming) tools, so that they can solve problems within their domain in a non-technical way and even construct domain-specific software products.
The concepts will be explained along the Cinco framework, a meta tooling suite for developing domain-specific graphical development tools in terms of fully operative, open source Eclipse products. It can be regarded as a specialized instance of generic metamodeling frameworks, such as Eclipse EMF or JetBrains MPS, which radically eases their use through its domain-specific focus and overall generative approach: Cinco produces a ready-to-run modeling tool fully automatically from simple specifications. We will illustrate the ease of this approach in our hands-on tutorial part where every participant is expected to create her own graphical modeling tool.
Student Research Forum
Fayola Peters (University of Limerick, Ireland)
Fayola Peters is a post-doctoral researcher at Lero – The Irish Software Research Centre, exploring adaptive data privacy, adaptive data sharing and data mining for users of software applications. She conducted her Ph.D. research (2011 – 2014) on data sharing for cross-project defect prediction in software engineering at West Virginia University, USA. Her current research focuses on users of software applications who need to consider the tradeoff between the benefits of data sharing and potential privacy breaches. These users can be individuals as well as data collectors with the responsibility of protecting the data of others. Dr. Peters' work aims to help users make this tradeoff with adaptive "data-based" sharing decisions that require minimal user input.
She has published at top software engineering venues like ICSE, IEEE TSE, and ESEM, and is a co-author of “Sharing Data and Models in Software Engineering”.
Becoming Goldilocks: Privacy and Data Sharing in “Just Right” Conditions for Software Engineering
Using the tools of quantitative data science, software engineers can predict and share useful information on new projects based on past projects. Such useful information can come from the software engineering process as well as from users of the software. A major impediment to such information analysis and sharing is the privacy concerns of software development organisations and software users. In this tutorial we share valuable lessons learned from years of data privacy research based on past software projects, and show how we discovered these lessons via three privacy algorithms applied to software defect data and user location data from a social network. The lessons are: 1) small changes in your data can offer privacy while maintaining utility, 2) don't share all your data, and 3) don't share what others have already shared. These lessons can be considered when developing or using privacy algorithms for different types of software data, including data generated by users.
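Lesson 1 can be illustrated with a minimal sketch (this is a generic perturbation example, not the tutorial's actual privacy algorithms; the data values and noise scale are hypothetical): adding small random noise to numeric attributes obscures exact values while keeping an aggregate statistic, here the mean, roughly intact.

```python
# Generic illustration of "small changes offer privacy and maintain
# utility": perturb numeric values with bounded uniform noise and
# compare an aggregate statistic before and after.
import random

def perturb(values, scale=0.5, seed=42):
    """Return a copy of values with uniform noise in [-scale, scale]."""
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

data = [10.0, 12.0, 11.5, 9.8, 10.7]   # hypothetical numeric attribute
noisy = perturb(data)

orig_mean = sum(data) / len(data)
noisy_mean = sum(noisy) / len(noisy)
print(f"mean before: {orig_mean:.2f}, after: {noisy_mean:.2f}")
```

Each individual value is changed, yet the mean shifts by at most the noise bound, so aggregate analyses built on the shared data remain useful.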