SOFSEM 2004
Ralf Schweimeier and Michael Schroeder: "Rules and the Semantic Web"

Abstract:
Much of the information currently online is intended for humans: Web pages, for example, contain extensive rendering information beside their content and do not formally describe its structure. As a result, it is difficult to process and integrate this information automatically. The semantic web aims to address this problem. It provides an infrastructure to define local and global ontologies, which create a common reference of meaning for data and meta-data and make it possible to reason over them, thus facilitating automatic and intelligent information processing across distributed data sources.

Rules are an important aspect of the semantic web. They can define relationships between data, integrity constraints, policies, reactive behaviour, workflows, etc. The RuleML effort aims to standardise the syntax of such rules. The talk briefly outlines RuleML and raises a number of issues that remain to be solved.

1) Although a rule syntax may be standardised, semantics can vary greatly. Argumentation is a well-studied metaphor for defining the semantics of logic programs. In the talk I will define an argumentation framework and relate a number of credulous and sceptical semantics that arise from it. Some are well known, such as Dung's semantics, Prakken and Sartor's semantics, and the well-founded semantics WFSX, while others are novel. The main point is that the framework characterises and relates these semantics by a subset relationship, thus working towards greater transparency, which is essential for reasoning on the web.
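To give a flavour of the sceptical end of this spectrum, the following is a minimal sketch of a Dung-style argumentation framework; the function names and the fixpoint formulation are illustrative assumptions, not the talk's actual formalisation. An argument is defended by a set S if S attacks every attacker of it, and the grounded (sceptical) extension is the least fixpoint of that defence operator.

```python
# Illustrative sketch (not the talk's formalisation): a Dung-style
# argumentation framework given as a set of arguments and an attack relation.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension.

    attacks: set of (attacker, attacked) pairs.
    An argument is defended by S if every one of its attackers is
    itself attacked by some member of S; the grounded extension is
    the least fixpoint of this defence operator, starting from {}.
    """
    def defended(arg, s):
        return all(any((d, b) in attacks for d in s)
                   for (b, a) in attacks if a == arg)

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# a attacks b, b attacks c: a is unattacked, so it is in; a defends c.
extension = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
print(extension)
```

Credulous semantics accept more arguments than this least fixpoint, which is what makes a subset hierarchy between the semantics possible in the first place.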

2) Rules will, in the end, be used by humans. The knowledge will therefore sometimes be fuzzy, or queries will contain typos and thus fail to match perfectly. I will show how to extend the above framework to deal with fuzzy reasoning and introduce fuzzy unification, which is based on the edit distance between strings. Fuzzy unification conservatively extends classical unification with a degree of unification based on the similarity of the strings representing the predicates to be unified.
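The core idea can be sketched as follows; the particular similarity formula (one minus the normalised Levenshtein distance) is an illustrative assumption, not necessarily the definition used in the talk. Identical predicate names unify with degree 1.0, so classical unification is the special case of a perfect match.

```python
# Sketch of edit-distance-based fuzzy unification of predicate names.
# The normalisation formula below is an assumption for illustration.

def edit_distance(s, t):
    """Classical Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

def unification_degree(p, q):
    """Degree 1.0 for identical names (the classical case),
    decreasing as the names diverge."""
    if not p and not q:
        return 1.0
    return 1.0 - edit_distance(p, q) / max(len(p), len(q))

print(unification_degree("interacts", "interacts"))  # classical case: 1.0
print(unification_degree("interacts", "interakts"))  # one typo, high degree
```

A query with a typo, such as `interakts(X, Y)`, would then still match facts about `interacts/2`, only with a degree slightly below 1.0, which the fuzzy reasoning machinery can propagate through a derivation.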

3) Besides deduction rules, reaction rules are important for defining reactive behaviour. I will define vivid agents, whose core is a knowledge base as described above and whose behaviour is defined by action and reaction rules. I will review the syntax and semantics of vivid agents and discuss their implementation in PVM-Prolog, a Prolog interface to the Parallel Virtual Machine, and in Mandarax, a light-weight rule engine implemented in Java.
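The difference between deduction and reaction rules can be illustrated with a toy event-driven agent; the class, the rule shape (event trigger plus action), and the dispatch loop below are all illustrative assumptions rather than the vivid-agent formalism itself.

```python
# Toy sketch of reaction rules: an agent holds a belief store and a list of
# (event trigger, action) pairs; incoming events fire matching actions.
# Names and structure are illustrative, not the vivid-agent formalism.

class ReactiveAgent:
    def __init__(self):
        self.beliefs = set()       # the agent's knowledge base
        self.rules = []            # reaction rules: (event_type, action)

    def on(self, event_type, action):
        """Register a reaction rule for a given event type."""
        self.rules.append((event_type, action))

    def react(self, event_type, payload):
        """Fire every reaction rule whose trigger matches the event."""
        return [action(self, payload)
                for trigger, action in self.rules
                if trigger == event_type]

agent = ReactiveAgent()
# Reaction rule: on a 'tell' event, assert the fact into the belief store.
agent.on("tell", lambda ag, fact: ag.beliefs.add(fact))
agent.react("tell", "interacts(p1, p2)")
print(agent.beliefs)
```

Deduction rules would then query the belief store, while reaction rules of this kind keep it up to date as messages and events arrive, which is exactly the division of labour sketched for vivid agents.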

Finally, I will demonstrate how such a rule engine can be used to build a semantic web for biologists. Biology could play the role for the semantic web that physics played for the current web: the biological community is scattered, large data repositories and tools are accessible online, it is common practice to annotate data, and large ontologies are already in use to provide the semantic glue between the different data and tools. Against this background I show how to realise two sample applications in biology: an ontology-based literature search engine and a tool for protein interaction that integrates data from PDB, the protein data bank, and SCOP, the structural classification of proteins.

(All 6 sections below would take roughly 15 minutes each. The material will be based on existing talks, where appropriate.

Intro on rules and the web
Knowledge representation, syntax, and expressiveness

Towards a hierarchy of semantics
Fuzzy reasoning and querying
Vivid Agents
The real world: Applications in bioinformatics)
