Presentation
The rapid advances of Artificial Intelligence and the widespread diffusion of AI systems create new challenges for logics and systems of formal reasoning. Besides its traditional role as a vehicle for knowledge representation and reasoning in symbolic AI, logic should also play a pivotal role in furthering responsible and trustworthy AI through its capacity to reason about intelligent systems, analyse their behaviour, and provide explanations for the results such systems obtain and the decisions and actions they take.
If logic is to help make AI more accountable, logic itself needs to be accountable. It should therefore be open to critical examination and evaluation of its adequacy for the objectives that its designers and users have set for it.
Following a previous workshop held in Madrid, November 3-5, 2022 (see https://www.dc.fi.udc.es/~cabalar/ACLAI22/), ACLAI 23 will be devoted to challenges and adequacy conditions for logics in light of their important role in contributing to accountable AI. It is supported by the Spanish project LIANDA (BBVA Foundation) and the University of Málaga, with further support from the Asociación Española para la Inteligencia Artificial (AEPIA), the Association for Logic Programming (ALP), and the Sociedad de Lógica, Metodología y Filosofía de la Ciencia en España (SLMFCE).
The workshop will host a range of talks and discussions covering technical work on current systems of logic for AI as well as philosophical reflections on logical methodology, and will also examine external desiderata for logics arising, for instance, from legal or ethical requirements on AI systems.
Submissions may include original research, system descriptions, and position papers. It is planned to publish selected contributions as post-proceedings in a special issue of the Journal of Applied Non-Classical Logics (Ed. Andreas Herzig).