LIANDA

Challenges and Adequacy Conditions for Logics in the New Age of Artificial Intelligence

Project Objectives and Achievements

The increasing penetration of AI in business, industry, science and everyday life has given rise to a series of social and ethical concerns that need to be addressed. To improve the accountability of AI systems, it is essential that their solutions, decisions and actions can be clearly explained to experts and non-experts alike. This has brought about a new area of research on explainable AI, which embraces not only machine learning based on neural networks but also symbolic AI, including logic-based reasoning systems. Although explainable AI is currently being addressed largely by experts from the AI domain, the problem is actually methodological in nature, since it concerns the adequacy of explanations and their underlying logical forms.
 
LIANDA is devoted to the methodology of applied logic in artificial intelligence, with a focus on explainable AI. The project differs from many other approaches in three main aspects. First, it is interdisciplinary and includes scholars from logic, philosophy, AI and computer science. Second, for the case of logic-based systems, it follows the well-accepted idea that, besides a primary logical reasoning system, there should be a simple, secondary logical formalism that is able to trace reasoning steps and provide end-users with adequate explanations. Unlike other approaches, however, LIANDA examines not only the quality of the final explanations but also holds accountable the primary logical reasoning that leads to them, so that the primary system itself becomes subject to appropriate criteria of adequacy.
Third, LIANDA includes an area of applied logic where philosophy and AI have been well-connected for many years: reasoning about knowledge, belief and other related attitudes.
 
LIANDA has raised awareness of the new challenges for logic by organising two international workshops:
 
- ACLAI 22, held in Madrid, November 3-5, 2022; https://www.dc.fi.udc.es/~cabalar/ACLAI22/
 
- ACLAI 23, held in Málaga, November 2-5, 2023; https://lianda23.uma.es/ACLAI23/
 
These have attracted established scholars as well as early-stage researchers presenting new ideas and solutions. Project results have been published in leading scientific journals and presented at national and international conferences.
 
The contributions of LIANDA cover three main areas of research. The first deals with foundational issues arising in logic-based systems, notably in the field of answer set programming (ASP). In this area, a preliminary framework has been proposed to describe general adequacy conditions for applied logics in AI. This framework considers adequacy conditions from three perspectives: (a) general requirements for sound logic design, (b) criteria that are specific to the concept being formalised, and (c) features that promote explainability and rational acceptability. Moreover, several new language extensions of ASP have been studied to deal with specific types of problem solving.
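To give a flavour of what ASP looks like, here is a minimal, hypothetical example using the freely available Python API of the Clingo solver (mentioned below). The program, predicate names and output shown are purely illustrative and are not taken from the project's own work.

```python
# Minimal ASP illustration via the Clingo Python API (pip install clingo).
# The rule uses negation as failure: birds fly unless proven abnormal.
import clingo

program = """
bird(tweety).
flies(X) :- bird(X), not abnormal(X).
"""

ctl = clingo.Control()
ctl.add("base", [], program)    # load the program into the "base" part
ctl.ground([("base", [])])      # instantiate the rule's variables
ctl.solve(on_model=lambda m: print("Answer set:", m))
# Prints something like: Answer set: bird(tweety) flies(tweety)
```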
 
The second area of study focuses on explanations in logic-based systems as well as in some areas of machine learning, such as decision trees. This research has led to new techniques for providing explanations based on the concept of a support graph. It has produced a preliminary taxonomy of explanation-seeking questions for AI applications, together with a freely available software system for explanations that is integrated with the well-known ASP solver Clingo. These developments have been applied to two real-world problem domains: liver transplantation and the 3D printing of medicines.
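The support-graph idea can be pictured, in a deliberately simplified form, as recording for each derived atom which rule body supports it. The following pure-Python sketch handles only negation-free programs and invents its own data structures; the project's actual technique and its Clingo integration are considerably more general.

```python
# Hedged sketch of a support graph for a definite (negation-free) program.
# Rules are (head, [body atoms]) pairs; facts have empty bodies.

rules = [
    ("fever", []),
    ("cough", []),
    ("flu",   ["fever", "cough"]),
    ("treat", ["flu"]),
]

def support_graph(rules):
    """Compute the least model and, for each derived atom, record
    edges from the body atoms of the first rule that derives it."""
    derived, edges = set(), []
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                edges.extend((b, head) for b in body)
                changed = True
    return derived, edges

model, graph = support_graph(rules)
print("Model:", sorted(model))   # ['cough', 'fever', 'flu', 'treat']
print("Support edges:", graph)   # [('fever', 'flu'), ('cough', 'flu'), ('flu', 'treat')]
```

Such edges make an answer traceable: an end-user asking why "treat" holds can follow the graph back through "flu" to the observed symptoms.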
 
The third area of LIANDA is devoted to logics for reasoning about knowledge, belief, awareness, obligations, trust and other related attitudes that may be held by individual as well as social agents. Again, this work is foundational in character and includes adequacy conditions and challenges for such logical systems, covering both static and dynamic logics for reasoning about action and change. Project members have addressed specific problem areas that include epistemic planning, counterfactual reasoning and argumentation dialogues.
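As an illustration of what adequacy conditions for such logics can look like, consider the classical textbook principles for a knowledge operator K_a ("agent a knows that"). Whether each principle should be accepted, for instance for belief rather than knowledge, or for agents with limited awareness, is precisely the kind of question such conditions address; the examples below are standard and are not the project's specific results.

```latex
% Classical candidate adequacy conditions for a knowledge operator K_a
% (standard textbook principles, shown here purely as illustration):
\begin{align*}
  K_a\varphi &\rightarrow \varphi                    && \text{factivity: what is known is true}\\
  K_a\varphi &\rightarrow K_a K_a\varphi             && \text{positive introspection}\\
  \lnot K_a\varphi &\rightarrow K_a\lnot K_a\varphi  && \text{negative introspection}
\end{align*}
```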