RW 2020
The 16th Reasoning Web Summer School
24-26 June 2020, Virtual
Declarative AI 2020 goes virtual, keeping the planned dates and extending the submission deadlines!
Registration is free of charge.
Program
(in Central European Time)
DAY 1 - June 24
10.00 Opening
10.30 Introduction to Probabilistic Ontologies (by R. Peñaloza)
There is no doubt about it: an accurate representation of a real knowledge domain must be able to capture uncertainty. As the best-known formalism for handling uncertainty, probability theory is often called upon for this task, giving rise to probabilistic ontologies. Unfortunately, things are not as simple as they might appear, and the design choices made can deeply affect the semantics and computational properties of probabilistic ontology languages. In this tutorial, we explore the main design choices available, and the situations in which they may or may not be meaningful. We then dive deeper into a specific family of probabilistic ontology languages that can express logical and probabilistic dependencies between axioms.
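As a taste of one prominent design choice (a background sketch, not the tutorial's own notation): under distribution-style semantics, each axiom is annotated with a probability and the axioms are assumed independent, so the probability of an entailment is the total weight of the axiom subsets that entail it.

```latex
% Sketch of one common choice: annotate each axiom \alpha_i of the
% ontology \mathcal{O} with a probability p_i, assume independence,
% and define the probability of entailing \varphi as
P(\mathcal{O} \models \varphi) \;=\;
  \sum_{W \subseteq \mathcal{O},\; W \models \varphi}
  \;\prod_{\alpha_i \in W} p_i
  \prod_{\alpha_j \in \mathcal{O} \setminus W} (1 - p_j)
```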
12.45 Lunch break
14.00 On the Complexity of Learning Description Logic Ontologies (by A. Ozaki)
Computational logic and machine learning are the pillars of artificial intelligence. The former stems from syllogisms, while the latter may apply inductive reasoning. Logic provides methods for deriving valid specific conclusions from a general theory asserted to be true. On the other hand, learning methods can be applied to generalize specific observations. These disciplines naturally complement each other. In this lecture, we will revisit the classical exact and probably approximately correct (PAC) learning models and study how algorithms designed for such models can draw a general theory, formulated in description logic, from specific observations. We will also recall from the literature other approaches that have been proposed for learning description logic ontologies.
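To fix intuitions about the PAC setting (a minimal toy sketch, not the lecture's material: the target, feature count, and sample size below are made up), here is the classic algorithm for PAC-learning a monotone conjunction of Boolean features from random labelled examples.

```python
import random

# Toy PAC-learning sketch (illustrative only): learn a monotone conjunction
# over n Boolean features. Hypothetical target: features 0 and 3 must hold.
n = 5
target = {0, 3}

def label(x):
    return all(x[i] for i in target)

def sample():
    x = [random.random() < 0.5 for _ in range(n)]
    return x, label(x)

# Start with the most specific hypothesis (all features) and drop any
# feature that is false in some positive example; the sample size governs
# the (epsilon, delta) PAC guarantee.
hypothesis = set(range(n))
for _ in range(200):
    x, y = sample()
    if y:
        hypothesis &= {i for i in range(n) if x[i]}

print(sorted(hypothesis))  # with high probability: [0, 3]
```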
16.15 Break
16.45 Explanation via Machine Arguing (by F. Toni, O. Cocarascu and A. Rago)
As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic, but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this lecture we will overview recent approaches showing how these challenges can be addressed by utilising various forms of machine arguing from KR as the scaffolding underpinning the explanations that are delivered to users. Argumentation is uniquely well placed as a conduit for information exchange between AI systems and users due to its natural use in debates. The capability of arguing is pervasive in human affairs and core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in three different ways: by building explainable systems from scratch with argumentative foundations, by extracting argumentative reasoning from general AI systems, or by extracting it from the data underlying such systems. Overall, we will provide a comprehensive review of the methods in the literature for extracting argumentative explanations.
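As background on the argumentation machinery (a minimal sketch with a hypothetical three-argument framework, not the lecturers' code): the grounded extension of a Dung-style abstract argumentation framework can be computed by iterating the characteristic function to its least fixpoint.

```python
# Minimal sketch (not the lecturers' code): grounded extension of an
# abstract argumentation framework, via the characteristic function
# F(S) = {a | every attacker of a is attacked by some member of S}.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # hypothetical example: a attacks b, b attacks c

def defended(s):
    return {a for a in args
            if all(any((d, b) in attacks for d in s)
                   for (b, t) in attacks if t == a)}

grounded, prev = set(), None
while grounded != prev:
    prev, grounded = grounded, defended(grounded)

print(sorted(grounded))  # ['a', 'c']: 'a' is unattacked and defends 'c' from 'b'
```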
19.00 End of Day 1
DAY 2 - June 25
10.30 Stream Reasoning: From Theory to Practice (by E. Della Valle, E. Falzone and R. Tommasini)
Stream Reasoning is set at the confluence of Artificial Intelligence and Stream Processing, with the ambitious goal of reasoning on rapidly changing flows of information. The goals of the lecture are threefold: 1) introducing students to the state of the art in Stream Reasoning, 2) deep diving into RDF Stream Processing by outlining how to design, develop and deploy a stream reasoning application, and 3) discussing together the limits of the state of the art and understanding the current challenges. (For further details, please visit http://streamreasoning.org/events/rw2020.)
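To make the windowing idea at the heart of RDF Stream Processing concrete (a toy sketch with made-up data, not the tutorial's RSP engine): a time-based sliding window repeatedly selects the recent portion of a stream of timestamped triples, and a continuous query is re-evaluated on each window.

```python
# Toy sketch (not the tutorial's RSP engine): a time-based sliding window
# over a stream of timestamped RDF-like triples, re-evaluating a simple
# count query every time the window slides. All data is hypothetical.
WIDTH, SLIDE = 10, 5  # window width and slide, in seconds

stream = [  # (timestamp, subject, predicate, object)
    (1, ":s1", ":observes", ":high"),
    (4, ":s2", ":observes", ":low"),
    (8, ":s1", ":observes", ":high"),
    (13, ":s3", ":observes", ":high"),
]

for close in range(WIDTH, 21, SLIDE):  # windows (0,10], (5,15], (10,20]
    window = [t for t in stream if close - WIDTH < t[0] <= close]
    highs = sum(1 for (_, s, p, o) in window if o == ":high")
    print(f"window ({close - WIDTH}, {close}]: {highs} high observations")
```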
12.45 Lunch break
14.00 First-Order Rewritability of Temporal Ontology-Mediated Queries (by M. Zakharyaschev, V. Ryzhikov and P. Wałęga)
In this lecture, we first discuss typical scenarios of querying and analysing timestamped data stored in a database. Then we introduce the paradigm of ontology-based data access with the OWL 2 QL and OWL 2 EL profiles of the Web Ontology Language OWL 2 and the underlying description logics. We also give a brief introduction to various temporal logics such as LTL, MTL and HS. We discuss the existing approaches to designing temporalised extensions of OWL 2 QL and OWL 2 EL that are suitable for querying temporal data efficiently. On the technical side, we focus on the computational complexity of answering temporal ontology-mediated queries.
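As a flavour of first-order rewritability (an illustrative sketch, not the lecture's exact notation): just as atemporal ontology-based data access rewrites ontology-mediated queries into first-order (SQL-like) queries over the data, a temporal operator unfolds into a first-order condition over the timestamps.

```latex
% Illustrative sketch: the past-time operator "P held at some earlier
% moment" unfolds into a first-order query over timestamped data:
(\Diamond^{-} P)(x, t) \;\equiv\; \exists t' \, \big( t' < t \,\wedge\, P(x, t') \big)
```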
16.15 Break
16.45 Student talks
Design Pattern-Aware Knowledge Graph Generation with Dependency Graph Embeddings (by Riley Capshaw)
Abstract. Ideas from Natural Language Processing (NLP) and computational linguistics have been used extensively in machine reading, Knowledge Graph (KG) generation, and other tasks which seek to extract machine-usable information from raw text. Conversely, NLP has only mildly benefitted from KG advances, seeing mostly innovation in specific downstream tasks like relation classification. In this talk, I will present my PhD project, which seeks to improve the generation of KGs from raw text by applying KG techniques such as translational relation embedding models to the structured NLP components of end-to-end neural KG generation pipelines. We will also use (ontology) design patterns and requirements to tailor the resulting KG to a particular domain, as well as to provide feedback to the learning process via high-level reasoning. So far, we have shown through structural probing experiments that the task of Semantic Dependency Parsing is compatible with a translational relation model, bringing us closer to harmonizing the techniques and representations used throughout a full pipeline.
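For readers unfamiliar with translational relation models (a minimal sketch with made-up three-dimensional vectors, not the project's pipeline), TransE-style models score a triple (head, relation, tail) by how close head + relation lands to tail in the embedding space.

```python
import numpy as np

# Minimal TransE-style sketch with hypothetical embeddings (not the
# project's code): a triple is plausible when head + relation ~ tail.
emb = {
    "nsubj": np.array([0.9, 0.0, 0.1]),  # hypothetical relation vector
    "ate":   np.array([0.1, 0.5, 0.2]),  # hypothetical head token
    "cat":   np.array([1.0, 0.5, 0.3]),  # hypothetical tail token
}

def score(head, rel, tail):
    # Higher (closer to 0) is better: negative L2 translation error.
    return -np.linalg.norm(emb[head] + emb[rel] - emb[tail])

print(score("ate", "nsubj", "cat"))  # 0.0 => a plausible dependency edge
```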
Combining Existential Rules with Network Diffusion Processes for Automated Generation of Hypotheses (by José Paredes)
Abstract. Malicious behavior in social media has many faces, appearing for instance in the form of bots, sock puppets, the creation and dissemination of fake news, Sybil attacks, and actors hiding behind multiple identities. To address these issues, we propose the NetDER architecture, which takes its name from its two main modules: Network Diffusion and Ontological Reasoning based on existential rules. This initial proposal is meant to serve as a roadmap for the research and development of tools against malicious behavior in social media, guiding the implementation of software in this domain rather than prescribing a specific solution. Our working hypothesis is that these problems, and many others, can be effectively tackled by (i) combining multiple data sources that are constantly being updated, (ii) maintaining a knowledge base using logic-based formalisms capable of value invention to support generating hypotheses based on available data, and (iii) maintaining a related knowledge base with information regarding how actors are connected and how information flows across their network. We show how these three basic tenets give rise to a general model that has the further capability of addressing multiple problems at once.
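For readers new to existential rules (a generic illustration with invented predicates, not NetDER's actual vocabulary): value invention means a rule may assert the existence of objects that do not occur in the data, which is what supports hypothesis generation.

```latex
% Generic existential rule (illustrative predicates, not NetDER's own):
% every post exhibiting coordinated behaviour must have some, possibly
% yet unknown, controlling actor - an invented value the reasoner can
% later try to identify.
\forall x \, \big( \mathit{coordinated}(x) \rightarrow \exists y \, \mathit{controls}(y, x) \big)
```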
Learning Possibilistic Logic Theories (by Cosimo Damiano Persia)
Abstract. Learning logic theories is an appealing research goal. In many applications, the learned model has to handle contradictory or incomplete knowledge, which is abundant in the real world. Possibilistic logic is a formalism capable of coping with partial inconsistency and uncertain knowledge. In this presentation, we will introduce possibilistic logic and Angluin’s exact learning model. We will show how to represent some types of uncertainty in possibilistic logic and discuss learnability results for possibilistic theories in the exact model. We then investigate whether polynomial-time learnability results for classical settings transfer to the respective possibilistic extensions, and vice versa. Since polynomial-time learnability results in the exact model transfer to the probably approximately correct model extended with membership queries, some of our results carry over to this model as well.
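As standard background (a textbook sketch of possibilistic logic, not the talk's own results): formulas are weighted by lower bounds on their necessity degree, and inference propagates the weakest weight along a derivation.

```latex
% A possibilistic formula (\varphi, \alpha) states N(\varphi) \ge \alpha,
% i.e. \varphi is certain at least to degree \alpha. The classical
% weighted modus ponens propagates the minimum of the two weights:
\frac{(\varphi, \alpha) \qquad (\varphi \rightarrow \psi, \beta)}
     {(\psi, \min(\alpha, \beta))}
```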
Translating temporal formulas into alternating automata using Answer Set Programming (by Susana Hahn)
Abstract. Temporal and dynamic extensions of Answer Set Programming (ASP) have played an important role in dealing with planning problems, as they allow for the use of temporal operators to reason with dynamic scenarios in a very efficient way. In this project, we exploit the relationship between linear temporal/dynamic logic on finite traces and automata theory in order to represent temporal constraints from ASP in terms of alternating automata. Our automata-based approach generates a declarative representation of the alternating automaton, which enables two different reasoning tasks: generating traces that satisfy a constraint and checking whether a given trace satisfies it. Regarding practical applications, the proposed implementation is currently being tested in different planning domains, such as intra-logistics and automation among multiple robots, providing a very concise way of filtering plans that enforce temporal goals.
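To fix intuitions about the underlying construction (standard textbook clauses, not the project's ASP encoding): the alternating automaton for a temporal formula has one state per subformula, and its transition function returns positive Boolean combinations of states.

```latex
% Standard clauses of the LTLf-to-alternating-automaton translation
% (textbook form, not the project's encoding), where \delta(q_\varphi, a)
% is a positive Boolean formula over states:
\delta(q_{\varphi \wedge \psi}, a) = \delta(q_\varphi, a) \wedge \delta(q_\psi, a)
\qquad
\delta(q_{\mathbf{X}\varphi}, a) = q_\varphi
\qquad
\delta(q_{\varphi \,\mathbf{U}\, \psi}, a) =
  \delta(q_\psi, a) \vee \big( \delta(q_\varphi, a) \wedge q_{\varphi \mathbf{U} \psi} \big)
```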
Ontology-Based RDF Integration of Heterogeneous Data (by Maxime Buron)
Abstract. The proliferation of heterogeneous data sources in many application contexts brings an urgent need for expressive and efficient data integration mechanisms. There are strong advantages to using RDF graphs as the integration format: being schemaless, they allow for flexible integration of data from heterogeneous sources; RDF graphs can be interpreted with the help of an ontology, describing application semantics; last but not least, RDF enables joint querying of the data and the ontology. To address this need, we formalize RDF Integration Systems (RIS), Ontology-Based Data Access mediators that go beyond the state of the art in the ability to expose, integrate and flexibly query data from heterogeneous sources through GLAV (global-local-as-view) mappings. We devise several query answering strategies, based on an innovative integration of LAV view-based rewriting and a form of mapping saturation. Our experiments show that one of these strategies brings strong performance advantages, resulting from a balanced use of mapping saturation and query reformulation.
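To illustrate the shape of a GLAV mapping (a schematic example with an invented source table and RDF vocabulary, not one of the paper's mappings): a query over the source is related to a query over the target, possibly with existential variables standing for target resources absent from the source.

```latex
% Schematic GLAV mapping (illustrative, not from the paper): rows of a
% relational source table Employee(id, name) are exposed as RDF triples
% of the integration graph, with z an invented resource for the employee.
\forall x \forall y \, \Big( \mathit{Employee}(x, y) \rightarrow
  \exists z \, \big( \mathit{triple}(z, \mathit{:name}, y) \wedge
                     \mathit{triple}(z, \mathit{:worksFor}, \mathit{:acme}) \big) \Big)
```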
Logic as secretarial assistant (by Ali Farjami)
Abstract. Makinson and van der Torre introduced input/output logic as a “secretarial assistant to an arbitrary process transforming propositional inputs into propositional outputs”. The only input/output logics investigated in the literature so far are built on top of classical propositional logic and intuitionistic propositional logic. Still, any base logic may act as a secretarial assistant. In this talk, we build the non-adjunctive input/output version of any abstract logic.
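For orientation (the standard definition from the input/output logic literature, not the talk's new construction): the simplest output operation feeds the logically closed input through the rules and closes the result under consequence again.

```latex
% Makinson and van der Torre's "simple-minded" output: given rules G and
% input set A, with Cn classical consequence and
% G(X) = \{ y \mid (x, y) \in G,\ x \in X \}:
\mathit{out}_1(G, A) \;=\; Cn\big( G(Cn(A)) \big)
```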
Building a Space mission design ontology: automatic terminology and concepts extraction (by Audrey Berquand)
Abstract. Expert Systems, computer programs able to capture human expertise and mimic experts’ reasoning, can support the design of future space missions by assimilating and facilitating access to accumulated knowledge. Such a virtual assistant first needs to understand the concepts characterising space systems engineering; in other words, it needs an ontology of space systems. Unfortunately, there is currently no official European space systems ontology. Developing an ontology is a lengthy and tedious process, involving several human domain experts, and is therefore prone to human error and subjectivity. Could the foundations of an ontology instead be semi-automatically extracted from unstructured data related to space systems engineering? I will present how, based on the Ontology Learning Layer Cake methodology, we semi-automatically extracted terms and concepts from a large "space systems" corpus, relying on statistical and word-embedding methods.
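To make the statistical term-extraction step concrete (a toy sketch with a made-up three-sentence corpus, not the project's pipeline): TF-IDF is one such statistic, ranking highest the candidate terms that are frequent in a document but rare across the corpus.

```python
import math
from collections import Counter

# Toy TF-IDF term ranking (made-up mini-corpus, not the project's
# pipeline): domain terms outrank words shared by every document.
docs = [
    "the propulsion subsystem feeds the thruster",
    "the thermal subsystem radiates heat",
    "the ground segment commands the spacecraft",
]
tokenized = [d.split() for d in docs]
df = Counter(w for doc in tokenized for w in set(doc))  # document frequency

def tfidf(doc):
    tf = Counter(doc)
    return {w: (tf[w] / len(doc)) * math.log(len(docs) / df[w]) for w in tf}

for w, s in sorted(tfidf(tokenized[0]).items(), key=lambda kv: -kv[1])[:3]:
    print(f"{w}: {s:.3f}")  # "the" scores 0; "propulsion", "thruster" rank high
```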
Employing Hybrid Reasoning to Support Clinical Decision-Making (by Sabbir Rashid)
Abstract. Computable biomedical knowledge is typically designed to support deductive forms of reasoning. Through think-aloud studies of clinical use cases in diabetes care, we have found that physicians employ additional types of reasoning, including abduction and induction. In particular, physicians interweave various forms of logic into dynamic reasoning strategies that mirror the Select-Test (S-T) model proposed by Stefanelli and Ramoni. To support a physician's cognitive models, we have chosen to design a clinical knowledge base using the cyclic set of steps in the S-T model. We leverage Semantic Web technologies to encode each step of the S-T model as an AI task that uses a distinct form of reasoning, such as abduction. We then compose the AI tasks into a hybrid-reasoning architecture that can support a particular clinical reasoning strategy, such as differential diagnosis. In doing so, we are constructing various types of novel reasoners that are compatible with description logics. We plan to evaluate the reasoning system and rule representation through testing of clinical use cases that will determine whether system-generated insights align with what domain experts would conclude. This work is particularly relevant to physicians who employ various clinical reasoning strategies and to researchers interested in computational reasoning approaches.
18.45 End of Day 2
DAY 3 - June 26
10.30 An Introduction to Answer Set Programming and Some of Its Extensions (by W. Faber)
Answer Set Programming (ASP) is a rule-based language rooted in traditional Logic Programming, Databases, Knowledge Representation, and Nonmonotonic Reasoning. It offers a flexible language for declarative problem solving, supported by efficient general-purpose solvers and reasoners. The inclusion of aggregates in ASP (and Logic Programming at large) has long been motivated; however, there are semantic issues to be addressed, in particular when aggregates occur in recursive definitions. Very similar considerations arise when coupling ASP with other formalisms, which we collectively refer to as "generalized atoms". An overview of these semantic challenges and of proposals for addressing them is provided, along with an overview of complexity results and system support.
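To ground the discussion (a minimal sketch assuming the clingo Python package is installed; the encoding is illustrative, not from the lecture): a recursive reachability program with a #count aggregate evaluated on top of it.

```python
import clingo  # assumes the clingo Python package is installed

# Minimal ASP sketch (illustrative, not from the lecture): recursive
# reachability plus a #count aggregate over its result.
program = """
edge(1,2). edge(2,3). edge(2,4).
reach(1).
reach(Y) :- reach(X), edge(X,Y).
n_reached(N) :- N = #count { X : reach(X) }.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))
# Prints the single answer set, including n_reached(4).
```

Note that here the aggregate sits outside the recursion, so its semantics is uncontroversial; the subtleties the lecture addresses arise when an aggregate such as #count feeds back into the definition of reach itself.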
12.45 Lunch break
14.00 Declarative Data Analysis using Limit Datalog Programs (by E. V. Kostylev)
Currently, data analysis tasks are often solved using code written in standard imperative programming languages such as Java and Scala. However, in recent years there has been a significant shift towards declarative solutions, where the definition of the task is clearly separated from its implementation, and users describe what the desired output is, rather than how to compute it. For example, instead of computing shortest paths in a graph by a concrete algorithm, one first describes what a path length is and then selects only paths of minimum length. Such a specification is independent of evaluation details, allowing analysts to focus on the task at hand rather than on implementation details. In this lecture we will give an overview of Limit Datalog, a recent declarative query language for data analysis. This language extends usual Datalog with integer arithmetic and aggregation to naturally capture data analytics tasks, but at the same time carefully restricts the interaction of recursion and arithmetic to preserve decidability of reasoning. Besides the full language, we will also discuss several fragments of Limit Datalog, with varying complexity and expressivity.
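To make the shortest-path example concrete (a toy bottom-up evaluation in the spirit of limit Datalog over a made-up graph, not an actual limit Datalog engine): keeping, for each pair of nodes, only the minimal derivable path length is exactly the effect of declaring the predicate as a min-limit predicate, and it also keeps the fixpoint finite.

```python
# Toy bottom-up evaluation in the spirit of limit Datalog (not an actual
# engine). The rules
#   len(X, Y, D)     :- edge(X, Y, D).
#   len(X, Z, D1+D2) :- len(X, Y, D1), edge(Y, Z, D2).
# define path lengths; declaring len a min-limit predicate means only the
# least D per (X, Y) matters, which we enforce directly below.
edges = {("a", "b", 1), ("b", "c", 2), ("a", "c", 5)}  # hypothetical graph

best = {(x, y): d for (x, y, d) in edges}  # first rule
changed = True
while changed:
    changed = False
    for (x, y), d1 in list(best.items()):
        for (y2, z, d2) in edges:
            if y2 == y and d1 + d2 < best.get((x, z), float("inf")):
                best[(x, z)] = d1 + d2  # second rule, keeping only the minimum
                changed = True

print(best[("a", "c")])  # 3: the minimum-length path a -> b -> c beats a -> c
```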
16.15 Break
16.45 Knowledge Graphs: Research Directions (by A. Hogan)
The core idea behind knowledge graphs is to collect, represent and expose knowledge using a graph abstraction, which allows data from diverse sources to be integrated in a flexible manner. This core idea is not new, having been explored for decades in works relating to graph databases, ontologies, data integration, graph algorithms, information extraction, and more besides. However, as a research topic, knowledge graphs are increasingly becoming a confluence of these related areas, further attracting attention from new areas, in particular machine learning. From this confluence emerge a variety of open questions relating to how techniques traditionally addressed by different communities can be combined. In this talk, we will first look at how knowledge graphs are being used in practice, and at why they are receiving so much attention. We will discuss various graph models that can be used to represent them and query languages used to interrogate them. We will discuss the importance of schemata for imposing structure on knowledge graphs, and of ontologies and rules for defining their semantics. We will address the importance of context, and how it can be modelled and reasoned about. We will then look at the role of inductive methods for knowledge graphs, starting with graph analysis frameworks, before turning to machine learning techniques such as graph embeddings, graph neural networks and inductive logic programming. Having established the different research trends in knowledge graphs, we will then motivate and discuss some open questions relating to how these diverse techniques can be combined in a principled way.
19.00 Closing
19.30 End of the school
Thanks to our partners and sponsors