INVITED SPEAKERS
Gabriele Kern-Isberner, Technische Universität Dortmund, Germany
gabriele.kern-isberner@cs.uni-dortmund.de
TITLE: The Relevance of Formal Logics for Cognitive Logics, and Vice Versa
Abstract
Classical logics such as propositional and predicate logic have long been considered the gold standard for rational human reasoning, and hence a solid, desirable norm on which, ideally, all human knowledge and decision making should be based. Boolean logic, for instance, was conceived as an algebraic framework that would make rational reasoning computable in an objective way, much like the arithmetic of numbers. Computer scientists adopted this view to (literally) implement objective knowledge and rational deduction, in particular for AI applications. Psychologists have used classical logics as norms to assess the rationality of human commonsense reasoning. However, neither discipline could ignore the severe limitations of classical logics, e.g., computational complexity and undecidability, failures of logic-based AI systems in practice, and numerous psychological paradoxes. Many of these problems stem from the inability of classical logics to deal adequately with uncertainty. Both disciplines have turned to probabilities as a way out of this dilemma, hoping that numbers and the Kolmogorov axioms can (somehow) do the job. Here, however, psychologists have observed just as many paradoxes, if not more.
So then, are humans hopelessly irrational? Is human reasoning incompatible with formal, axiomatic logics? In the end, should computer-based knowledge and information processing be considered as superior in terms of objectivity and rationality?
Cognitive logics aim at overcoming the limitations of classical logics and at resolving the observed paradoxes by proposing logic-based approaches that model human reasoning consistently and coherently on benchmark examples. The basic idea is to reverse the normative direction: instead of assessing human reasoning in terms of logics or probabilities, typical human reasoning patterns serve as norms for assessing the cognitive quality of logics. Cognitive logics explore the broad field of logic-based approaches between the extreme points marked by classical logics and probability theory, with the goal of finding more suitable logics for AI applications, on the one hand, and of gaining more insight into the structures of human rationality, on the other. In particular, the talk features conditionals and preferential nonmonotonic reasoning as a powerful framework for exploring characteristics of human rational reasoning.
Esra Erdem, Sabanci University, Turkey
esra.erdem@sabanciuniv.edu
TITLE: Explanation Generation in Applications of Answer Set Programming
Abstract
As the definition of AI shifts toward building rational agents that are provably beneficial for humans, answer set programming (ASP) plays an important role in addressing the user-oriented challenges that arise in applications during this shift, such as generality, flexibility, provability, hybridity, bi-directional interaction, and explainability. In this talk, I will focus on explainability and present different methods for explanation generation in three applications of ASP: complex biomedical queries for drug discovery, multi-modal multi-agent path finding with resource consumption, and robotic plan execution monitoring.
Leon van der Torre, University of Luxembourg
leon.vandertorre@uni.lu
TITLE: Advanced Intelligent Systems and Reasoning
Abstract
We offer a perspective on advanced intelligent systems and reasoning, using as an example morally decisive robots, as proposed in machine ethics. Given that norms often conflict, formal methods are needed to resolve these conflicts so that morally acceptable or optimal decisions can be made. Current algorithms draw on techniques ranging from logical representation and reasoning to machine learning. Our vision is demonstrated with the argumentation-based Jiminy moral advisor. We also hint at future work that situates ‘real-world’ dialogue exchanges as the forum for discussing moral decisions at the Zhejiang University – University of Luxembourg Joint Lab on Advanced Intelligent Systems and REasoning (ZLAIRE). This presentation is based on joint work and was prepared with the help of Beishui Liao, Pere Pardo, Maria Slavkovik and Liuwen Yu.
João Leite, Universidade NOVA de Lisboa, Portugal
TITLE: Towards Counterfactual Reasoning within Neural Networks
Abstract
Counterfactual reasoning has been shown to be an important tool to comprehend a complex system for which we do not have an interpretable specification. By reasoning about hypothetical scenarios and their consequences, we can gain important insights not only to understand such systems, but also to debug them, ultimately leading to an increased level of trust. Artificial neural networks have been quite successful at performing a myriad of tasks, but belong to such class of systems which do not have an interpretable specification of their models, given that they are based on real-valued tensors without any associated declarative meaning. They are the kind of systems which could greatly benefit from being the object of counterfactual reasoning. In this talk, we describe two recently proposed methods, one to modify a neural networks' perception regarding human-defined concepts and another to map a model's activations into human-defined concepts, and how to combine them to generate counterfactual samples and perform counterfactual reasoning within a neural network model. Through preliminary empirical evaluation, we show that the generated counterfactuals are well interpreted by artificial neural networks, and validate the soundness of the model's counterfactual reasoning.