In this project, we build on the idea of using logs to examine the behaviour of a software system and apply it to multi-agent systems from a novel angle: generating multiple levels of explanation from the same set of logs. Explainability in agent systems is commonly achieved by focusing on a single agent that produces a single explanation for a single purpose. Our research takes a different approach, presenting an explainability framework for agents and multi-agent systems that supports multiple levels of abstraction, which can serve different purposes for different classes of users.
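To make the idea concrete, the following is a minimal sketch (not the framework's actual API; all names and the log schema are illustrative assumptions) of how a single stream of BDI agent log events could be rendered at two different levels of abstraction for two classes of users:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical log record for a BDI agent event (illustrative schema only).
@dataclass
class LogEntry:
    agent: str
    event: str   # e.g. "goal_adopted", "plan_selected", "belief_added"
    detail: str

def developer_view(log: List[LogEntry]) -> List[str]:
    """Low-level explanation: the full event trace, for engineering and debugging."""
    return [f"[{e.agent}] {e.event}: {e.detail}" for e in log]

def end_user_view(log: List[LogEntry]) -> List[str]:
    """High-level explanation: only goal-level events, phrased in plain language."""
    return [
        f"{e.agent} started working on '{e.detail}'"
        for e in log if e.event == "goal_adopted"
    ]

if __name__ == "__main__":
    log = [
        LogEntry("robot1", "goal_adopted", "deliver_package"),
        LogEntry("robot1", "plan_selected", "plan_take_elevator"),
        LogEntry("robot1", "belief_added", "elevator_busy"),
    ]
    # The same log yields two explanations at different abstraction levels.
    print("\n".join(developer_view(log)))
    print("\n".join(end_user_view(log)))
```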
Publications
- E. Yan, S. Burattini, J. F. Hübner, and A. Ricci, "Towards a Multi-Level Explainability Framework for Engineering and Understanding BDI Agent Systems," in Proceedings of the 24th Workshop "From Objects to Agents" (WOA 2023), R. Falcone, C. Castelfranchi, A. Sapienza, and F. Cantucci, Eds., ser. CEUR Workshop Proceedings, vol. 3579, Rome: CEUR-WS.org, 2023. https://ceur-ws.org/Vol-3579/paper17.pdf
- E. Yan, "A Multi-Level Explainability Framework for BDI Multi-Agent Systems," Master's thesis (Laurea Magistrale), Università di Bologna, Corso di Studio in Ingegneria e Scienze Informatiche [LM-DM270], Cesena, 2023. https://amslaurea.unibo.it/29644/