Vladimir Lifschitz studied mathematics and logic in Soviet Russia and emigrated to the United States in 1976. His research interests later turned to commonsense reasoning, knowledge representation, and declarative logic programming, and more recently to the foundations of answer set programming. Lifschitz is an AAAI Fellow and Professor Emeritus of Computer Science at the University of Texas at Austin.
When do we think of two logic programs as equivalent? That may depend on how the programs are meant to be used: which of their symbols represent the input, which inputs are considered permissible, and which symbols represent the output. To formalize and automate reasoning about equivalence of programs, we employ “user guides”: formal expressions that encode assumptions about how the program is used. ANTHEM is a proof assistant that helps verify equivalence of logic programs with respect to a user guide, with emphasis on constructs used in answer set programming. It operates by transforming an equivalence claim into a series of first-order reasoning problems and submitting them to a resolution theorem prover. It has been designed and implemented by researchers at the University of Potsdam, the University of Nebraska Omaha, and the University of Texas at Austin.
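To make the notion concrete, here is a minimal sketch in Python using the clingo API. It is emphatically not how ANTHEM works: it merely brute-force-checks that two small programs produce the same answer sets on a few sampled inputs, whereas ANTHEM proves the claim for all permissible inputs by reduction to first-order theorem proving. The user guide assumed here (input predicate p/1, output predicate q/1) and both programs are illustrative inventions.

```python
# A minimal sketch, NOT ANTHEM: naively testing equivalence of two ASP
# programs on a few sampled inputs via the clingo Python API.  ANTHEM
# instead proves equivalence for ALL permissible inputs by reduction
# to first-order theorem proving.
import clingo

# Hypothetical user guide: input predicate p/1, output predicate q/1;
# any finite set of p/1 facts is a permissible input.
P1 = "{ q(X) } :- p(X).  :- p(X), not q(X)."
P2 = "q(X) :- p(X)."

def answer_sets(program: str, facts: str) -> set:
    """All answer sets of program + facts, as frozensets of atom strings."""
    ctl = clingo.Control(["0"])              # "0" = enumerate all models
    ctl.add("base", [], program + facts)
    ctl.ground([("base", [])])
    models = set()
    ctl.solve(on_model=lambda m:
              models.add(frozenset(str(a) for a in m.symbols(atoms=True))))
    return models

for facts in ["", "p(1).", "p(1). p(2)."]:   # sampled permissible inputs
    assert answer_sets(P1, facts) == answer_sets(P2, facts)
print("P1 and P2 agree on all sampled inputs")
```

Since the set of permissible inputs is in general infinite, such sampling can only refute an equivalence claim, never establish it; that is why ANTHEM hands the general claim to a theorem prover.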
Esra Erdem is a professor in computer science and engineering at Sabanci University, Istanbul. She received her Ph.D. in computer sciences at the University of Texas at Austin, and carried out postdoctoral research at the University of Toronto and Vienna University of Technology. Her research is in the area of artificial intelligence, in particular, the mathematical foundations of knowledge representation and automated reasoning, and their applications to various domains, including robotics, bioinformatics, logistics, and economics. Dr. Erdem was a general co-chair of ICLP 2013, a program co-chair of ICLP 2019, KR 2020, and PADL 2025, the general chair of KR 2021, and the president of KR Inc. She is an associate editor for Artificial Intelligence (AIJ) and Theory and Practice of Logic Programming (TPLP).
In robotic construction problems, multiple robots rearrange stacks of prefabricated blocks to build stable structures. Although automation can improve the efficiency and productivity of certain construction tasks, in existing approaches the design of the structure to be built, the planning of robot motions, and the proper ordering of robot actions are still decided manually. These problems are challenging for AI planning due to ramifications of actions, true concurrency, and the requirements that every block be supported by a surface or a robot and that the overall structure remain stable at all times. In this talk, we will present a general method for solving a wide range of robotic construction problems, based on Answer Set Programming integrated with state-of-the-art sampling-based motion planners and simulation-based physics engines. We will illustrate the usefulness and applicability of this hybrid method on a set of challenging construction benchmark instances, using a bimanual Baxter robot.
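To give a flavour of the kind of constraint involved, the following is a hedged sketch of how a supportedness requirement might be encoded in ASP (clingo syntax embedded in a Python string). The predicate names on/3, held/2, block/1, and time/1 are illustrative assumptions, not the encoding used in the talk.

```python
# A hedged sketch of a supportedness requirement in ASP; predicate
# names are illustrative assumptions, not the authors' encoding.
import clingo

SUPPORT = """
block(b1). block(b2). time(0).
on(b1,table,0). on(b2,b1,0).

% a block is supported if it rests on the table, rests on a supported
% block, or is held by a robot
supported(B,T) :- on(B,table,T).
supported(B,T) :- on(B,C,T), block(C), supported(C,T).
supported(B,T) :- held(B,T).

% every block must be supported at every time step
:- block(B), time(T), not supported(B,T).
"""

ctl = clingo.Control()
ctl.add("base", [], SUPPORT)
ctl.ground([("base", [])])
print(ctl.solve())          # SAT: the two-block tower is supported
```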
Stefania Costantini is a Full Professor of Computer Science at the University of L’Aquila, where she leads the AAAI@AQ research group in Artificial Intelligence. A graduate of the University of Pisa (cum laude, 1983), she has made significant contributions to logic programming, multi-agent systems, meta-reasoning, and ethics in AI. She developed the DALI agent language and has worked extensively on epistemic and temporal logics, Answer Set Programming, and neuro-symbolic systems. Included in 2024 lists of the most influential women in AI and the most influential Italians in AI, and in Stanford’s “World’s Top 2% Scientists” list (2023–2024), she is the author of over 200 international publications. She has led major national research projects such as ENABLE and TrustPACTX and served in key academic roles including doctoral program coordinator and department deputy director. Prof. Costantini is a Past-President of GULP, a board member of AIxIA and ALP, and a frequent evaluator for EU and national institutions. She regularly speaks on AI in academic and public settings.
This talk presents an exploration of the interplay between Answer Set Programming (ASP) and epistemic logics, aiming to clarify how their integration can extend the expressive power and adaptability of logic-based systems in dynamic settings. ASP has demonstrated significant strengths in modelling complex static constraints and preferences, particularly in domains such as healthcare resource allocation. However, in environments where the problem specification is subject to continual change, due to evolving preferences, partial observability, or real-time disruptions, ASP solutions may face limitations.
We focus on the integration of ASP with L-DINF, an epistemic logic framework that supports reasoning about beliefs, intentions, group dynamics, and agentive capabilities. L-DINF enables the modelling of agents that can revise their mental state, explain their actions, and coordinate based on shared or conflicting knowledge. These features allow us to introduce a dynamic layer on top of the static ASP kernel, thus retaining the optimisation and constraint satisfaction capabilities of ASP while adding the expressiveness necessary for adaptive reasoning.
As a running example, we examine the progression from an ASP-based scheduling system relying on structured "Blueprint Personas" to a hybrid architecture where personas are translated into epistemic agents. This shift allows for belief-aware rescheduling, intention-driven behaviour, and group coordination, essential elements in domains requiring responsiveness and explainability.
We conclude by outlining a general methodology for integrating epistemic reasoning with ASP, and reflect on the potential applications of such hybrid architectures, including in the design of logic-based agents, possibly augmented with large language models. The seminar aims to open a dialogue on how logic programming can remain central in the construction of cognitively rich, explainable, and dynamically capable systems.
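As a concrete illustration of the layering described above, here is a minimal sketch of a static ASP scheduling kernel that is re-invoked whenever the epistemic layer revises an agent's beliefs. The talk describes the architecture only at a high level; the kernel, the assign/2 predicate, and the representation of a belief update as an added constraint are all illustrative assumptions.

```python
# A minimal sketch (illustrative, not the authors' architecture):
# a static ASP scheduling kernel re-solved after belief updates.
import clingo

KERNEL = """
task(t1). task(t2). slot(1..2).
1 { assign(T,S) : slot(S) } 1 :- task(T).   % each task gets one slot
:- assign(T1,S), assign(T2,S), T1 < T2.     % at most one task per slot
"""

def schedule(belief_facts: str) -> list:
    """Solve the static kernel extended with belief-derived constraints."""
    ctl = clingo.Control(["1"])             # one schedule is enough
    ctl.add("base", [], KERNEL + belief_facts)
    ctl.ground([("base", [])])
    plan = []
    ctl.solve(on_model=lambda m: plan.extend(
        str(a) for a in m.symbols(shown=True) if a.name == "assign"))
    return plan

print(schedule(""))                  # initial schedule
# The epistemic layer revises beliefs (e.g., slot 1 becomes
# unacceptable for t1) and triggers rescheduling:
print(schedule(":- assign(t1,1)."))
```

In the hybrid architecture, the dynamic layer would decide when such a re-solve is warranted and how the revised beliefs are to be translated into facts and constraints; the sketch shows only the static kernel's side of that contract.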
Georg Gottlob is a Professor of Computer Science at the University of Calabria and a Professor Emeritus at Oxford University and TU Wien. Until recently, he was a Royal Society Research Professor at Oxford, a Fellow of Oxford's St John's College, and an Adjunct Professor at TU Wien. His interests include knowledge representation, database theory, query processing, web data extraction, and (hyper)graph decomposition techniques. Gottlob has received the Wittgenstein Award from the Austrian National Science Fund and the Ada Lovelace Medal in the UK. He is an ACM Fellow, an ECCAI Fellow, a Fellow of the Royal Society, and a member of the Austrian Academy of Sciences, the German National Academy of Sciences, and the Academia Europaea. He chaired the Program Committees of IJCAI 2003 and ACM PODS 2000, is on the Editorial Board of JCSS, and was on the Editorial Boards of JACM and CACM. He was a founder of Lixto, a web data extraction firm acquired in 2013 by McKinsey & Company. In 2015 he co-founded Wrapidity, a spin-out of Oxford University based on fully automated web data extraction technology developed in the context of an ERC Advanced Grant. Wrapidity was acquired by Meltwater, an internationally operating media intelligence company. Gottlob then co-founded the Oxford spin-out DeepReason.AI, which provided knowledge graph and rule-based reasoning software to customers in various industries. DeepReason.AI was also acquired by Meltwater.
ChatGPT and other LLMs are the most recent major outcome of the ongoing AI revolution. The talk begins with a brief discussion of such (text-based) generative AI tools and showcases instances where these models excel, namely in generating beautifully composed texts. We then discuss shortcomings of LLMs with regard to their use for constructing and curating data and knowledge bases or knowledge graphs, where they often produce erroneous information. This is often the case when LLMs are prompted for data that are not already present in Wikipedia or other authoritative Web sources, that is, where a judgmental decision is required. To understand why so many errors and "hallucinations" occur, we report our findings on the "psychopathology of everyday prompting" and identify and illustrate several key reasons for potential failures in language models, which include, but are not limited to: (i) information loss due to data compression, (ii) training bias, (iii) the incorporation of incorrect external data, (iv) the misordering of results, and (v) the failure to detect and resolve logical inconsistencies contained in a sequence of LLM-generated prompt-answers. In the second part of the talk, we give a survey of the Chat2Data project, which endeavors to leverage language models for the automated verification and enhancement of relational databases and knowledge graphs, all while mitigating the pitfalls (i)–(v) mentioned earlier. Finally, we present recent results on, and applications of, LLM-based rule generation.