RW 2020
The 16th Reasoning Web Summer School
24-26 June 2020, Virtual
Declarative AI 2020 goes virtual, keeping the planned dates and extending the submission deadlines!
Registration is free of charge.
Lectures
Stream Reasoning: From Theory to Practice
Stream Reasoning sits at the confluence of Artificial Intelligence and Stream Processing, with the ambitious goal of reasoning on rapidly changing flows of information. The goals of the lecture are threefold: 1) introducing students to the state of the art in Stream Reasoning, 2) deep diving into RDF Stream Processing by outlining how to design, develop and deploy a stream reasoning application, and 3) discussing the limits of the state of the art and understanding the current challenges. (For further details, please visit http://streamreasoning.org/events/rw2020.)
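To give a flavour of the RDF Stream Processing model before the lecture, here is a minimal Python sketch (no particular RSP engine is assumed; the stream and the query are invented for illustration) of a continuous query evaluated over a sliding time window of timestamped triples:

from collections import deque

WINDOW = 10  # window width in seconds

window = deque()  # holds (timestamp, (subject, predicate, object)) pairs

def on_triple(ts, triple):
    """Register an incoming triple and re-evaluate the continuous query."""
    window.append((ts, triple))
    # evict triples that have fallen out of the time window
    while window and window[0][0] <= ts - WINDOW:
        window.popleft()
    # continuous query: how many distinct sensors reported a value above 30 in the window?
    hot = {s for (_, (s, p, o)) in window if p == "hasTemperature" and o > 30}
    print(f"t={ts}: {len(hot)} sensor(s) above 30")

# hypothetical incoming stream
for ts, triple in [(1, ("room1", "hasTemperature", 32)),
                   (4, ("room2", "hasTemperature", 28)),
                   (12, ("room1", "hasTemperature", 35))]:
    on_triple(ts, triple)

Actual RSP engines express the same window and pattern declaratively in SPARQL-like continuous query languages rather than in hand-written code.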
Emanuele Della Valle holds a PhD in Computer Science from the Vrije Universiteit Amsterdam and a Master's degree in Computer Science and Engineering from Politecnico di Milano. He is an assistant professor at the Department of Electronics, Information and Bioengineering of Politecnico di Milano. In 20 years of research, his interests have covered Big Data, Stream Processing, Semantic Technologies, Data Science, Web Information Retrieval, and Service-Oriented Architectures. He started the Stream Reasoning research field, positioning it at the intersection of Stream Processing and Artificial Intelligence. His work on Stream Reasoning has been applied to analysing Social Media, Mobile Telecom and IoT data streams in collaboration with Telecom Italia, IBM, Siemens, Oracle, Indra, and Statoil. With the experience he gained, he started two companies to create data-centric products and services. He has co-authored 22 journal articles, 33 papers in major conferences, 3 books, and more than 70 other manuscripts, including minor conference papers, book chapters, workshop papers and posters. He is a member of the editorial board of the Journal of Web Semantics.
Emanuele Falzone has been a Ph.D. student at the Department of Electronics, Information and Bioengineering of Politecnico di Milano, under the supervision of Prof. Emanuele Della Valle, since November 2019. His research interests mainly concern Stream Processing. He received his M.Sc. in Computer Science from Politecnico di Milano in December 2018 and his B.Sc. in Computer Engineering from Politecnico di Milano in September 2016.
Riccardo Tommasini is an assistant professor at the University of Tartu, Estonia. Riccardo did his PhD at the Department of Electronics, Information and Bioengineering of Politecnico di Milano. His thesis, titled "Velocity on the Web", investigates the velocity aspects that concern the variety of information populating the Web. His research interests span Stream Processing, Knowledge Graphs, Logics and Programming Languages. Riccardo's tutorial activities comprise Stream Reasoning tutorials at ISWC 2017, ICWE 2018, ESWC 2019, TheWebConf 2019, and DEBS 2019.
An Introduction to Answer Set Programming and Some of Its Extensions
Answer Set Programming (ASP) is a rule-based language rooted in traditional Logic Programming, Databases, Knowledge Representation, and Nonmonotonic Reasoning. It offers a flexible language for declarative problem solving, supported by efficient general-purpose solvers and reasoners. The inclusion of aggregates in ASP (and in Logic Programming at large) has long been motivated; however, some semantic issues need to be addressed, in particular when aggregates occur in recursive definitions. Very similar considerations arise when coupling ASP with other formalisms, which we collectively refer to as "generalized atoms". The lecture provides an overview of these semantic challenges and of proposals for addressing them, along with an overview of complexity results and system support.
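For readers new to ASP, the following minimal sketch shows a toy program with a choice rule and a #count aggregate, run through the clingo Python API (this assumes the clingo package is installed; the program itself is invented for illustration):

import clingo  # pip install clingo

program = """
item(a). item(b). item(c).
% guess an arbitrary subset of the items
{ chosen(X) : item(X) }.
% require at least two chosen items, via a #count aggregate
:- #count { X : chosen(X) } < 2.
"""

ctl = clingo.Control(["0"])            # "0" asks for all answer sets
ctl.add("base", [], program)           # load the program
ctl.ground([("base", [])])             # ground it
ctl.solve(on_model=lambda m: print("Answer set:", m))

Here the aggregate appears only in a constraint; the semantic subtleties discussed in the lecture arise once aggregates occur inside recursive definitions.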
Wolfgang Faber serves as Professor of Semantic Systems at the University of Klagenfurt (Austria). Before that, he was a Professor at the University of Huddersfield (UK), an Associate Professor at the University of Calabria (Italy), and an Assistant Professor at the Vienna University of Technology (Austria), where he also obtained his PhD in 2002. From 2004 to 2006 he was on an APART grant of the Austrian Academy of Sciences. His general research interests are in knowledge representation, logic programming, nonmonotonic reasoning, planning, and knowledge-based agents. He has published more than 100 refereed articles in major journals, collections, and conference proceedings. He was one of the architects of DLV, a system for computing answer sets of disjunctive deductive databases, which was and still is used all over the world. He has acted as a chair for several workshops and conferences, has been on the program committees of many of the major conferences of his research areas, and has served on the editorial board and as a reviewer for many journals and conferences on Artificial Intelligence, Knowledge Representation, and Logic Programming.
Knowledge Graphs: Research Directions
The core idea behind knowledge graphs is to collect, represent and expose knowledge using a graph abstraction, which makes it possible to integrate data from diverse sources in a flexible manner. This core idea is not new, having been explored for decades in work on graph databases, ontologies, data integration, graph algorithms, information extraction, and more besides. However, as a research topic, knowledge graphs are increasingly becoming a confluence of these related areas, further attracting attention from new areas, in particular machine learning. From this confluence emerge a variety of open questions relating to how techniques traditionally addressed by different communities can be combined. In this talk, we will first look at how knowledge graphs are being used in practice, and at why they are receiving so much attention. We will discuss various graph models that can be used to represent them and query languages used to interrogate them. We will discuss the importance of schemata for imposing structure on knowledge graphs, and of ontologies and rules for defining their semantics. We will address the importance of context, and how it can be modelled and reasoned about. We will then look at the role of inductive methods for knowledge graphs, starting with graph analysis frameworks, before turning to machine learning techniques such as graph embeddings, graph neural networks and inductive logic programming. Having established the different research trends in knowledge graphs, we will then motivate and discuss some open questions relating to how these diverse techniques can be combined in a principled way.
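To make the graph abstraction concrete, a tiny knowledge graph can be built and interrogated in a few lines of Python (this sketch assumes the rdflib library; the namespace and data are invented):

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Santiago, EX.capitalOf, EX.Chile))
g.add((EX.Wellington, EX.capitalOf, EX.NewZealand))

# SPARQL query: which entities are capitals, and of which country?
q = """
PREFIX ex: <http://example.org/>
SELECT ?city ?country WHERE { ?city ex:capitalOf ?country . }
"""
for row in g.query(q):
    print(row.city, row.country)

Schemata, ontologies, rules and inductive techniques, as discussed in the lecture, then add structure, semantics and learned knowledge on top of such a graph.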
Aidan Hogan is currently an Assistant Professor at the Department of Computer Science, University of Chile, and an Associate Researcher at the Millennium Institute for Foundational Research on Data (IMFD). He received his PhD in 2011 from the National University of Ireland, Galway. His research interests relate primarily to the Semantic Web, Databases and Information Extraction. He has been invited as a lecturer to six summer schools and has co-organised three summer schools, most recently a summer school in Cuba on Knowledge Graphs. For further information, see his homepage: http://aidanhogan.com/
Declarative Data Analysis using Limit Datalog Programs
Currently, data analysis tasks are often solved using code written in standard imperative programming languages such as Java and Scala. However, in recent years there has been a significant shift towards declarative solutions, where the definition of the task is clearly separated from its implementation, and users describe what the desired output is rather than how to compute it. For example, instead of computing shortest paths in a graph by a concrete algorithm, one first describes what a path length is and then selects only paths of minimum length. Such a specification is independent of evaluation details, allowing analysts to focus on the task at hand rather than on how it is computed. In this lecture we will give an overview of Limit Datalog, a recent declarative query language for data analysis. This language extends usual Datalog with integer arithmetic and aggregation to naturally capture data analytics tasks, but at the same time carefully restricts the interaction of recursion and arithmetic to preserve decidability of reasoning. Besides the full language, we will also discuss several fragments of Limit Datalog with varying complexity and expressivity.
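The shortest-path example can be made concrete with a small Python sketch of the underlying idea (plain Python rather than an actual Datalog engine; the graph is invented): rules derive path lengths, and the limit semantics keeps only the minimum length derived for each pair of nodes.

# toy weighted graph as edge facts edge(x, y, w)
edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]

dist = {}          # dist[(x, y)] = minimum path length derived so far
changed = True
while changed:     # naive least-fixpoint iteration
    changed = False
    # rule 1: every edge is a path
    facts = [((x, y), w) for (x, y, w) in edges]
    # rule 2: extend a known path by one edge
    facts += [((x, z), d + w) for ((x, y), d) in dist.items()
              for (y2, z, w) in edges if y2 == y]
    for key, d in facts:
        if key not in dist or d < dist[key]:   # keep only the minimum ("limit") value
            dist[key] = d
            changed = True

print(dist)   # {('a', 'b'): 1, ('b', 'c'): 2, ('a', 'c'): 3}

In Limit Datalog, the two rules and the minimisation are stated declaratively, and it is the language's restrictions on how recursion and arithmetic interact that keep reasoning decidable.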
Egor V. Kostylev is a departmental lecturer and a member of the Information Systems group at the Department of Computer Science, University of Oxford. He obtained his PhD in Physics and Mathematics in 2009 from Lomonosov Moscow State University. From 2010 to 2013, when he took up his post at Oxford, he was a researcher in the database group at the School of Informatics, University of Edinburgh. His research interests lie at the intersection of Knowledge Representation and Reasoning, Databases, and the Semantic Web. He has published more than 50 papers in top conferences and journals of these fields, two of which have been recognised by the community with best paper awards at leading conferences: the International Conference on Database Theory (ICDT) in 2016 and the International Joint Conference on Artificial Intelligence (IJCAI) in 2017.
On the Complexity of Learning Description Logic Ontologies
Computational logic and machine learning are the pillars of artificial intelligence. The former stems from syllogisms while the latter may apply inductive reasoning. Logic provides methods for deriving valid specific conclusions based on a general theory asserted to be true. On the other hand, learning methods can be applied to generalize specific observations. These disciplines naturally complement each other. In this lecture, we will revisit the classical exact and probably approximately correct (PAC) learning models and study how algorithms designed for such models can draw a general theory, formulated in description logic, from specific observations. We will also recall from the literature other approaches that have been proposed for learning description logic ontologies.
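As a concrete reference point, the classical PAC bound for a finite hypothesis class in the realizable case states that roughly (1/ε)(ln|H| + ln(1/δ)) labelled examples suffice to learn a hypothesis with error at most ε with probability at least 1 − δ. A minimal Python illustration (the numbers are invented):

import math

def pac_sample_bound(hypothesis_space_size, eps, delta):
    """Sufficient number of examples for a finite hypothesis class (realizable case)."""
    return math.ceil((math.log(hypothesis_space_size) + math.log(1 / delta)) / eps)

# e.g. a million candidate hypotheses, error at most 5%, confidence 99%
print(pac_sample_bound(10**6, eps=0.05, delta=0.01))   # 369

The lecture examines what such learning models look like when the hypotheses to be learned are description logic ontologies.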
Ana Ozaki is an associate professor at the University of Bergen. She has been working on formalisms for knowledge representation and automated reasoning and on learning models from computational learning theory. Before working in Norway, she was an assistant professor at the Free University of Bozen-Bolzano and a postdoc at Technische Universität Dresden. She completed her Ph.D. at the University of Liverpool.
Introduction to Probabilistic Ontologies
There is no doubt about it: an accurate representation of a real knowledge domain must be able to capture uncertainty. As the best-known formalism for handling uncertainty, probability theory is often called upon for this task, giving rise to probabilistic ontologies. Unfortunately, things are not as simple as they might appear, and the different choices made can deeply affect the semantics and computational properties of probabilistic ontology languages. In this tutorial, we explore the main design choices available, and the situations in which they may or may not be meaningful. We then dive deeper into a specific family of probabilistic ontology languages that can express logical and probabilistic dependencies between axioms.
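One common design choice, in which each axiom is annotated with an independent probability and the probability of a conclusion is the total probability of the "worlds" (subsets of axioms) that entail it, can be illustrated with a toy Python sketch (the axioms and the hard-coded entailment check are invented for illustration):

from itertools import product

# axioms annotated with independent probabilities
axioms = {"Bird SubClassOf Flies": 0.9, "Penguin SubClassOf Bird": 1.0}

def entails_penguin_flies(world):
    # stand-in for a real reasoner: this conclusion needs both axioms
    return "Bird SubClassOf Flies" in world and "Penguin SubClassOf Bird" in world

prob = 0.0
for choice in product([True, False], repeat=len(axioms)):
    world = {ax for ax, keep in zip(axioms, choice) if keep}
    p = 1.0
    for (ax, pr), keep in zip(axioms.items(), choice):
        p *= pr if keep else (1 - pr)
    if entails_penguin_flies(world):
        prob += p

print(prob)   # 0.9

Choices like these, and their effect on semantics and computation, are among the design decisions the tutorial explores.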
Rafael Peñaloza is an Associate Professor at the University of Milano-Bicocca, Italy. After obtaining his PhD (Dr. rer. nat.) from TU Dresden, Germany, he started working on formalisms for representing imperfect knowledge; specifically, on the properties of Fuzzy Description Logics (FDL), and later on probabilistic extensions of DLs. His main interest in this direction is the use of probabilities to model uncertainty in expert knowledge through probabilistic ontologies, and the way this uncertainty propagates to the conclusions derivable from them. He has over 100 publications in highly ranked international conferences and journals. He also likes to read and write, and take long walks.
Explanation via Machine Arguing
As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic, but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations grow in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this lecture we will overview recent approaches showing how these challenges can be addressed by utilising various forms of machine arguing from KR as the scaffolding underpinning explanations that are delivered to users. Argumentation is uniquely well placed as a conduit for information exchange between AI systems and users due to its natural use in debates. The capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in three different ways: by building explainable systems from scratch with argumentative foundations, by extracting argumentative reasoning from general AI systems, or by extracting it from the data underlying such systems. Overall, we will provide a comprehensive review of the methods in the literature for extracting argumentative explanations.
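To fix intuitions about the argumentative machinery that can underpin such explanations, the following toy Python sketch computes the grounded extension of a tiny abstract argumentation framework (the arguments and attacks are invented) by iterating the defence condition:

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # (attacker, attacked)

def defended(arg, s):
    # arg is defended by s if every attacker of arg is itself attacked by some member of s
    return all(any((d, attacker) in attacks for d in s)
               for (attacker, target) in attacks if target == arg)

grounded = set()
while True:                           # least fixpoint of the characteristic function
    new = {a for a in arguments if defended(a, grounded)}
    if new == grounded:
        break
    grounded = new

print(grounded)   # the grounded extension: {'a', 'c'} ("a" is unattacked and reinstates "c")

The accepted arguments, together with the attacks between them, provide the kind of structured material from which user-facing explanations can be assembled.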
Francesca Toni is Professor in Computational Logic and Deputy Head of the Department of Computing, Imperial College London, UK, and the founder and leader of the CLArg (Computational Logic and Argumentation) research group. Her research interests lie within the broad area of KR and Explainable AI, and in particular include Argumentation, Argument Mining, Logic-Based Multi-Agent Systems, Logic Programming, and Non-monotonic/Default/Defeasible Reasoning. She graduated, summa cum laude, in Computing at the University of Pisa, Italy, in 1990, and received her PhD in Computing in 1995 from Imperial College London. She has coordinated two EU projects, received funding from EPSRC (in the UK) and the EU, and been awarded a Senior Research Fellowship from The Royal Academy of Engineering and the Leverhulme Trust. She is currently Technical Director of the ROAD2H EPSRC-funded project (www.road2h.org/) and co-Director of the Centres for Doctoral Training in Safe and Trusted AI and in AI for Healthcare. She has co-chaired ICLP 2015 (the 31st International Conference on Logic Programming) and KR 2018 (the 16th Conference on Principles of Knowledge Representation and Reasoning). She is a member of the steering committees of KR, Inc. and of AT (Agreement Technologies), corner editor on Argumentation for the Journal of Logic and Computation, and a member of the editorial boards of the Argument and Computation journal and the AI journal.
Oana Cocarascu is a Research Associate and a Teaching Fellow at Imperial College London. She received her PhD from Imperial College in 2019 where she worked at the intersection of Natural Language Processing, Machine Learning, and Argument Mining. Her thesis explored the generation of argumentation frameworks from data that can be used to provide the backbone for explanations in various settings, from explaining review aggregations to explaining classifications. She is one of the co-organisers of the Fact Extraction and Verification (FEVER) workshops. She has served as a reviewer for IJCAI, AAAI, ECAI, and several ACL-affiliated conferences.
Antonio Rago is a postdoctoral researcher on an EPSRC Doctoral Prize Fellowship at Imperial College London. He was awarded a PhD in Computing from Imperial on 1 January 2019 for his thesis "Gradual Evaluation in Argumentation Frameworks: Methods, Properties and Applications". He has published papers at KR, AAAI, IJCAI and AAMAS and in IJAR (among others), and has reviewed papers for AAAI, AIJ, ECAI, etc. He organised the explAIn@Imperial workshop on explainability in AI, organises the XAI seminar series at Imperial, and has given talks at numerous events such as the Imperial AI Fringe, the Institut Français Night of Ideas, the London Argumentation Forum and the Toulouse e-Democracy Summer School, along with the aforementioned conferences and others.
First-Order Rewritability of Temporal Ontology-Mediated Queries
In this lecture, we first discuss typical scenarios of querying and analysing timestamped data stored in a database. Then we introduce the paradigm of ontology-based data access with the OWL 2 QL and OWL 2 EL profiles of the Web Ontology Language OWL 2 and the underlying description logics. We also give a brief introduction to various temporal logics such as LTL, MTL and HS. We discuss the existing approaches to designing temporalised extensions of OWL 2 QL and OWL 2 EL that are suitable for querying temporal data efficiently. On the technical side, we focus on the computational complexity of answering temporal ontology-mediated queries.
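To illustrate first-order rewritability in its simplest, atemporal form: under the OWL 2 QL axiom stating that every professor teaches something, the ontology-mediated query "who teaches something?" can be rewritten into a plain first-order (SQL-like) query over the data alone. A toy Python sketch (the data are invented):

# data
professor = {"alice"}
teaches = {("bob", "logic")}

# rewritten query: teaches(x, y) OR Professor(x) -- no ontology needed at query time
answers = {x for (x, _) in teaches} | professor
print(sorted(answers))   # ['alice', 'bob']

The lecture studies when such rewritings still exist, and at what computational cost, once the data carry timestamps and the queries involve temporal operators such as those of LTL, MTL and HS.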
Michael Zakharyaschev obtained his BSc and MSc from Moscow State University, and his PhD and Habilitation in Mathematics from Novosibirsk University. He was a research scientist at the Keldysh Institute of Applied Mathematics (Russian Academy of Sciences), an Alexander von Humboldt Foundation fellow (FU Berlin), and Professor of Logic and Computation at King's College London (2001-2005). His research interests are knowledge representation and reasoning (spatial, temporal, etc.) and logic in computer science, in particular modal and description logics.
Vladislav Ryzhikov is a lecturer in computer science at Birkbeck, University of London, UK. Previously, he was a researcher at the Free University of Bozen-Bolzano, Italy. His research interests lie in the areas of Knowledge Representation and Reasoning, Temporal and Spatial Data, the Semantic Web, and Description Logics.
Przemysław Wałęga is a postdoctoral researcher in the Department of Computer Science, University of Oxford, working in the Knowledge Representation and Reasoning Group. His main research interests include applications of logic to AI; in particular, he is interested in temporal and spatial logics, the computational complexity of reasoning, and logic programming.
Thanks to our partners and sponsors