INVITED SPEAKERS


Gabriele Kern-Isberner, Technische Universität Dortmund, Germany


gabriele.kern-isberner@cs.uni-dortmund.de



TITLE: The Relevance of Formal Logics for Cognitive Logics, and Vice Versa


Abstract

Classical logics like propositional or predicate logic have long been considered the gold standard for rational human reasoning, and hence a solid, desirable norm on which, ideally, all human knowledge and decision making should be based. For instance, Boolean logic was set up as a kind of algebraic framework intended to make rational reasoning computable in an objective way, similar to the arithmetic of numbers. Computer scientists adopted this view to (literally) implement objective knowledge and rational deduction, in particular for AI applications. Psychologists have used classical logics as norms to assess the rationality of human commonsense reasoning. However, both disciplines could not ignore the severe limitations of classical logics, e.g., computational complexity and undecidability, failures of logic-based AI systems in practice, and numerous psychological paradoxes. Many of these problems are caused by the inability of classical logics to deal adequately with uncertainty. Both disciplines have used probabilities as a way out of this dilemma, hoping that numbers and the Kolmogorov axioms can (somehow) do the job. However, psychologists have observed many paradoxes here as well (maybe even more).

So then, are humans hopelessly irrational? Is human reasoning incompatible with formal, axiomatic logics? In the end, should computer-based knowledge and information processing be considered as superior in terms of objectivity and rationality?

Cognitive logics aim to overcome the limitations of classical logics and to resolve the observed paradoxes by proposing logic-based approaches that can model human reasoning consistently and coherently on benchmark examples. The basic idea is to reverse the normative direction: instead of assessing human reasoning against logics or probabilities, typical human reasoning patterns serve as norms for assessing the cognitive quality of logics. Cognitive logics explore the broad field of logic-based approaches between the extreme points marked by classical logics and probability theory, with the goal of finding more suitable logics for AI applications, on the one hand, and gaining more insights into the structures of human rationality, on the other. In particular, the talk features conditionals and preferential nonmonotonic reasoning as a powerful framework for exploring characteristics of human rational reasoning.
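
As a hedged illustration of the preferential style of reasoning mentioned here (an editorial sketch, not taken from the talk), the following Python snippet implements nonmonotonic inference over a small hand-made ranking function (an ordinal conditional function in the style of Spohn) on the classic penguin example; the world rankings are illustrative assumptions.

from itertools import product

# Atoms: p = penguin, b = bird, f = flies.
ATOMS = ("p", "b", "f")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=3)]

# A ranking function (OCF) assigns each world a degree of implausibility;
# rank 0 is maximally plausible. These ranks are hand-made assumptions
# encoding "birds fly", "penguins are birds", "penguins don't fly".
def rank(w):
    if w["p"] and not w["b"]:
        return 3          # penguins that are not birds: out of the question
    if w["p"] and w["f"]:
        return 2          # flying penguins: very implausible
    if w["p"] or (w["b"] and not w["f"]):
        return 1          # normal penguins / non-flying birds: exceptional
    return 0              # everything else is unremarkable

def rank_of(phi):
    """Rank of a formula: the rank of its most plausible model."""
    ranks = [rank(w) for w in WORLDS if phi(w)]
    return min(ranks) if ranks else float("inf")

def entails(ante, cons):
    """Preferential entailment: in the most plausible antecedent worlds,
    the consequent holds (A & C strictly more plausible than A & ~C)."""
    return (rank_of(lambda w: ante(w) and cons(w))
            < rank_of(lambda w: ante(w) and not cons(w)))

bird = lambda w: w["b"]
penguin = lambda w: w["p"]
flies = lambda w: w["f"]
print(entails(bird, flies))                          # True: birds normally fly
print(entails(penguin, lambda w: not w["f"]))        # True: penguins normally don't
print(entails(lambda w: w["b"] and w["p"], flies))   # False: more specific
                                                     # information retracts "flies"

The last line shows the nonmonotonicity: learning that a bird is a penguin withdraws the earlier conclusion that it flies, which no classical consequence relation can do.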



Esra Erdem, Sabanci University, Turkey


esra.erdem@sabanciuniv.edu



TITLE: Explanation Generation in Applications of Answer Set Programming


Abstract

As the definition of AI shifts towards building rational agents that are provably beneficial for humans, answer set programming (ASP) plays an important role in addressing the user-oriented challenges that arise in applications during this shift, such as generality, flexibility, provability, hybridity, bi-directional interactions, and explainability. In this talk, I will focus on explainability and present different methods for explanation generation in three applications of ASP: complex biomedical queries for drug discovery, multi-modal multi-agent path finding with resource consumption, and robotic plan execution monitoring.
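
As a hedged, minimal illustration of the ASP paradigm underlying these applications (an editorial sketch, not taken from the talk), the following Python snippet uses the clingo Python API to solve a toy graph-coloring problem; the program, graph, and predicate names are illustrative assumptions.

import clingo  # pip install clingo

# A toy ASP program (illustrative): 3-coloring a 4-cycle. The choice
# rule assigns exactly one color per node; the constraint forbids
# adjacent nodes from sharing a color.
PROGRAM = """
node(1..4).
edge(1,2). edge(2,3). edge(3,4). edge(4,1).
color(red; green; blue).
{ assign(N, C) : color(C) } = 1 :- node(N).
:- edge(N, M), assign(N, C), assign(M, C).
#show assign/2.
"""

ctl = clingo.Control(["0"])      # "0" = enumerate all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

with ctl.solve(yield_=True) as handle:
    for model in handle:
        print("Answer set:", [str(atom) for atom in model.symbols(shown=True)])

Each printed answer set is one admissible coloring; explanation generation, in this style of application, then asks why a given atom does or does not appear in such a set.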



Leon van der Torre, University of Luxembourg, Luxembourg


leon.vandertorre@uni.lu



TITLE: Advanced Intelligent Systems and Reasoning


Abstract

We offer a perspective on advanced intelligent systems and reasoning, taking as an example the morally decisive robots proposed in machine ethics. Given that norms often conflict, formal methods are needed to resolve these conflicts in order to reach morally acceptable or optimal decisions. Current algorithms build on foundations ranging from logical representation and reasoning to machine learning. Our vision is demonstrated using the argumentation-based Jiminy moral advisor. We also hint at future work that situates ‘real-world’ dialogue exchanges as the forum for discussing moral decisions at the Zhejiang University – University of Luxembourg Joint Lab on Advanced Intelligent Systems and REasoning (ZLAIRE). This presentation is based on joint work and was prepared with the help of Beishui Liao, Pere Pardo, Maria Slavkovik and Liuwen Yu.
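
As a hedged sketch of the abstract-argumentation machinery behind advisors of this kind (a generic Dung-style illustration, not the Jiminy implementation; the argument names and attacks are made up), the following Python snippet computes the grounded extension of a small argumentation framework encoding a norm conflict.

# Arguments and attacks (illustrative): an emergency and an explicit
# override both attack the norm "keep your promise".
ARGUMENTS = {"keep_promise", "save_life", "promise_overridden"}
ATTACKS = {
    ("save_life", "keep_promise"),
    ("promise_overridden", "keep_promise"),
}

def attackers(arg):
    return {a for (a, b) in ATTACKS if b == arg}

def defended(candidate_set):
    """Arguments all of whose attackers are attacked by the candidate set."""
    return {
        arg for arg in ARGUMENTS
        if all(any((d, att) in ATTACKS for d in candidate_set)
               for att in attackers(arg))
    }

# Grounded extension: least fixpoint of the defense operator,
# starting from the empty set.
extension = set()
while True:
    nxt = defended(extension)
    if nxt == extension:
        break
    extension = nxt

print(sorted(extension))  # ['promise_overridden', 'save_life']

The unattacked arguments survive and jointly defeat the promise norm, which is one formal way of resolving the conflict the abstract alludes to.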


João Leite, Universidade NOVA de Lisboa, Portugal


TITLE: Towards Counterfactual Reasoning within Neural Networks


Abstract

Counterfactual reasoning has been shown to be an important tool for comprehending a complex system for which we do not have an interpretable specification. By reasoning about hypothetical scenarios and their consequences, we can gain important insights, not only to understand such systems but also to debug them, ultimately leading to an increased level of trust. Artificial neural networks have been quite successful at performing a myriad of tasks, but they belong to this class of systems without an interpretable specification of their models, given that they are based on real-valued tensors without any associated declarative meaning. They are exactly the kind of system that could greatly benefit from being the object of counterfactual reasoning. In this talk, we describe two recently proposed methods, one to modify a neural network's perception regarding human-defined concepts and another to map a model's activations into human-defined concepts, and show how to combine them to generate counterfactual samples and perform counterfactual reasoning within a neural network model. Through preliminary empirical evaluation, we show that the generated counterfactuals are well interpreted by artificial neural networks, and we validate the soundness of the model's counterfactual reasoning.
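
As a hedged, generic sketch of the kind of concept-level intervention involved (this uses a linear concept direction on hidden activations, an editorial stand-in rather than the methods described in the talk; all names, dimensions, and the random probe are assumptions), the following Python snippet nudges a hidden representation along a concept direction and re-runs the rest of the network.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative two-part network: an encoder producing hidden activations
# and a head mapping activations to outputs (dimensions are made up).
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 2)

# A linear "concept probe": a direction in activation space associated
# with a human-defined concept. In practice it would be trained on
# labeled concept examples; here it is random, purely for illustration.
concept_direction = torch.randn(32)
concept_direction = concept_direction / concept_direction.norm()

def concept_score(h):
    """How strongly the hidden activation expresses the concept."""
    return h @ concept_direction

def counterfactual_output(x, strength):
    """'What would the model output if the concept were more present?'
    Shift the activation along the concept direction, keep everything
    else fixed, and re-run the head."""
    h = encoder(x)
    h_cf = h + strength * concept_direction
    return head(h_cf)

x = torch.randn(1, 16)
h = encoder(x)
print("concept score:        ", concept_score(h).item())
print("factual output:       ", head(h).detach())
print("counterfactual output:", counterfactual_output(x, strength=3.0).detach())

Comparing the factual and counterfactual outputs is the basic move of counterfactual reasoning over a model's internals: the input is unchanged, only the hypothesized concept presence differs.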


Hans van Ditmarsch, IRIT, University of Toulouse, France


TITLE: Distributed Knowledge Revisited


Abstract

We review the history and some recent work on what has been known since the 1990s as distributed knowledge. Such epistemic group notions are currently receiving more and more attention, both from the modal logic community and from distributed computing, in various settings with communicating processes or agents. The typical intuition is that if a knows p, and b knows that p implies q, then together they know q: they have distributed knowledge of q. In order to get to know q, they need to share their knowledge. We will discuss: (i) the complete axiomatization; (ii) why not everything that is distributed knowledge can become common knowledge; (iii) the notion of collective bisimulation; (iv) distributed knowledge for infinitely many agents; (v) the novel update called resolving distributed knowledge and some variations (and its update expressivity, which is incomparable to that of action models); (vi) distributed knowledge that is stronger than the sum of individual knowledge (where the relation for the group of agents is strictly contained in the intersection of the individual relations); (vii) common distributed knowledge and its topological interpretations; (viii) dynamic distributed knowledge, a version of the semantics ensuring that what is distributed knowledge becomes common knowledge.
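
As a hedged, minimal sketch of the semantics (the standard textbook definition, not the talk's new results; the two-agent model is made up), distributed knowledge in a Kripke model is evaluated over the intersection of the agents' indistinguishability relations. The Python snippet below reproduces exactly the intuition from the abstract: a knows p, b knows that p implies q, and together they have distributed knowledge of q.

from itertools import product

# Worlds are valuations (p, q) of two atoms.
WORLDS = list(product([True, False], repeat=2))

# Indistinguishability relations: agent a can tell worlds apart only by
# the value of p; agent b only by the truth of "p implies q".
def indist_a(w, v):
    return w[0] == v[0]

def indist_b(w, v):
    return (not w[0] or w[1]) == (not v[0] or v[1])

def knows(indist, world, prop):
    """Individual knowledge: prop holds in all indistinguishable worlds."""
    return all(prop(v) for v in WORLDS if indist(world, v))

def distributed_knows(indists, world, prop):
    """Distributed knowledge: prop holds in all worlds indistinguishable
    for EVERY agent, i.e. over the intersection of the relations."""
    return all(prop(v) for v in WORLDS
               if all(ind(world, v) for ind in indists))

w = (True, True)               # actual world: p and q both true
q_prop = lambda v: v[1]
print(knows(indist_a, w, q_prop))                          # False: a alone doesn't know q
print(knows(indist_b, w, q_prop))                          # False: b alone doesn't know q
print(distributed_knows([indist_a, indist_b], w, q_prop))  # True: together they do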



Tomi Janhunen, Tampere University, Finland


TITLE: Explaining with Short Boolean Formulas in Practice


Abstract

In this work, we investigate explainability in terms of short Boolean formulas in the context of data models based on unary relations. An explanation is a Boolean formula (of length k) that minimizes error with respect to a target attribute being explained. On the theoretical side, we provide novel quantitative bounds on the expected error in this scenario. We also demonstrate explanations in practice by studying three concrete data sets. For each set, we discover explanation formulas of different lengths using an encoding in Answer Set Programming. The most accurate formulas achieve errors similar to those of other machine learning methods on the same data sets. Due to potential overfitting, however, these formulas are not ideal as explanations, so we use a cross-validation scheme to determine a suitable length for explanations. By limiting ourselves to shorter formulas, we can avoid overfitting while the respective explanations remain reasonably accurate and, most importantly, human-interpretable by nature. This talk is based on joint work with Reijo Jaakkola, Antti Kuusisto, Masood F. Rankooh, and Miikka Vilander.
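
As a hedged toy illustration of the underlying idea (a brute-force Python search over two-literal formulas rather than the ASP encoding used in the work; the data set and attribute names are made up), the snippet below finds a short Boolean formula over unary attributes that minimizes error against a target attribute.

from itertools import product

# Toy data set (made up): rows are Boolean unary attributes; we try to
# explain the target attribute "t" with the others.
DATA = [
    {"a": 1, "b": 0, "c": 1, "t": 1},
    {"a": 1, "b": 1, "c": 0, "t": 1},
    {"a": 0, "b": 1, "c": 1, "t": 0},
    {"a": 0, "b": 0, "c": 0, "t": 0},
    {"a": 1, "b": 0, "c": 0, "t": 1},
    {"a": 0, "b": 1, "c": 0, "t": 1},  # noise: breaks the exact rule "t = a"
]

ATTRS = ["a", "b", "c"]

# Candidate explanations: literals, plus conjunctions/disjunctions of two
# literals -- a crude stand-in for "all formulas up to length k".
def literals():
    for x in ATTRS:
        yield (x, lambda r, x=x: bool(r[x]))
        yield (f"~{x}", lambda r, x=x: not r[x])

def candidates():
    yield from literals()
    for (n1, f1), (n2, f2) in product(list(literals()), repeat=2):
        yield (f"({n1} & {n2})", lambda r, f1=f1, f2=f2: f1(r) and f2(r))
        yield (f"({n1} | {n2})", lambda r, f1=f1, f2=f2: f1(r) or f2(r))

def error(formula):
    """Number of rows where the formula disagrees with the target."""
    return sum(formula(r) != bool(r["t"]) for r in DATA)

best_name, best_f = min(candidates(), key=lambda c: error(c[1]))
print(best_name, "error:", error(best_f))  # here the literal "a" wins, error 1

Because of the noisy row, no short formula achieves error 0; accepting the short, slightly imperfect explanation "a" is exactly the overfitting-versus-interpretability trade-off the abstract describes.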
