Publications
See my Google Scholar page for a complete and up-to-date list. If you are interested in my work, please feel free to drop me an email: elena.yan@emse.fr
Theses
Telemedicine and Wearable Computing in support of healthcare personnel for stroke diagnosis: the TeleStroke project as a case study
Elena Yan
University of Bologna, 2021
The rapid development of technology has introduced new tools and opportunities in healthcare. Telemedicine in particular has seen wide adoption in recent years, enabling connection and communication among physicians, practitioners, and patients located in different places, and overcoming the problems caused by the uneven distribution of resources across the territory. The “TeleStroke” project fits into this context: a teleconsultation and telecooperation system that supports healthcare personnel in the management of cerebral stroke. In particular, by relying on standardised tools such as the NIHSS, the system helps physicians quantify the severity of the stroke and determine the care pathway to follow for the patient. Moreover, the use of wearable devices, smart glasses in particular, allows staff to interact with the device hands-free and to stay focused on the tasks of managing the patient. This thesis aims to continue the work of the “TeleStroke” project, with greater attention to the wearable side, adding new features to strengthen and improve the system.
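The NIHSS mentioned above is a standardised scale whose item scores are summed into a total between 0 and 42. As a rough illustration of how such a score could be quantified in software, here is a minimal Java sketch, not taken from the thesis; the item names and the severity bands are illustrative assumptions based on commonly cited cut-offs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch (not from the thesis): sum NIHSS item scores into a
// total (0-42) and map it to commonly cited severity bands. Item names
// and band thresholds are illustrative assumptions.
public class NihssSketch {

    static String severityBand(int total) {
        if (total == 0)  return "no stroke symptoms";
        if (total <= 4)  return "minor stroke";
        if (total <= 15) return "moderate stroke";
        if (total <= 20) return "moderate to severe stroke";
        return "severe stroke";
    }

    public static void main(String[] args) {
        // Hypothetical scores for a subset of NIHSS items.
        Map<String, Integer> items = new LinkedHashMap<>();
        items.put("level_of_consciousness", 1);
        items.put("facial_palsy", 2);
        items.put("motor_arm_left", 3);
        items.put("language", 1);

        int total = items.values().stream().mapToInt(Integer::intValue).sum();
        System.out.printf("NIHSS total = %d -> %s%n", total, severityBand(total));
    }
}
```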
A multi-level explainability framework for BDI multi-agent systems
Elena Yan
University of Bologna, 2023
As software systems become more complex and the level of abstraction increases, programming and understanding behaviour become more difficult. This is particularly evident in autonomous systems that need to be resilient to change and adapt to possibly unexpected problems, as mature tools for supporting such understanding are not yet available. A complete understanding of the system is indispensable at every stage of software development, starting with the initial requirements analysis by domain experts, through development, implementation, debugging, and testing, to product validation. A common and valid approach to increasing understandability in the field of Explainable AI is to provide explanations that convey the decision-making processes and the motivations behind the choices made by the system. Since explanations serve different use cases and target classes of users with different requirements and goals, they need to be generated at different levels of abstraction. This thesis introduces the idea of multi-level explainability as a way to generate different explanations for the same system at different levels of detail. A low-level explanation tied to detailed code could help developers in the debugging and testing phases, while a high-level explanation could support domain experts and designers, or contribute to the validation phase by aligning the system with its requirements. The model taken as a reference for the automatic generation of explanations is the BDI (Belief-Desire-Intention) model, since a mentalistic explanation of a system that behaves rationally given its desires and current beliefs is easier for humans to understand. In this work we prototype an explainability tool for BDI agents and multi-agent systems that deals with multiple levels of abstraction and can be used for different purposes by different classes of users.
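To make the idea of multi-level explainability concrete, the following toy Java sketch, which is not the thesis prototype, renders the same hypothetical BDI event both as a high-level mentalistic explanation and as a low-level, code-oriented one; the event fields and the wording of the explanations are assumptions.

```java
// Toy illustration (not the thesis prototype): one BDI event, two
// abstraction levels. All field names and phrasings are hypothetical.
public class MultiLevelExplanation {

    record BdiEvent(String agent, String goal, String plan, String belief) {}

    // High-level, mentalistic explanation for designers and domain experts.
    static String highLevel(BdiEvent e) {
        return String.format("%s pursued goal '%s' because it believed '%s'.",
                e.agent(), e.goal(), e.belief());
    }

    // Low-level, code-oriented explanation for developers debugging plans.
    static String lowLevel(BdiEvent e) {
        return String.format("agent=%s selected plan %s (trigger: +!%s, context: %s)",
                e.agent(), e.plan(), e.goal(), e.belief());
    }

    public static void main(String[] args) {
        BdiEvent e = new BdiEvent("robot1", "deliver(pkg3)", "p_deliver_2", "at(depot)");
        System.out.println(highLevel(e));  // for domain experts and designers
        System.out.println(lowLevel(e));   // for developers
    }
}
```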
Journal Articles
A multi-level explainability framework for engineering and understanding BDI agents
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
Autonomous Agents and Multi-Agent Systems, 2025
doi: 10.1007/s10458-025-09689-6
As the complexity of software systems rises, explainability, i.e. the ability of systems to provide explanations of their behaviour, becomes a crucial property. This is true for any AI-based system, including autonomous systems that exhibit decision-making capabilities such as multi-agent systems. Although explainability is generally considered useful to increase the level of trust for end users, we argue it is also an interesting property for software engineers, developers, and designers to debug and validate the system’s behaviour. In this paper, we propose a multi-level explainability framework for BDI agents to generate explanations of a running system from logs at different levels of abstraction, tailored to different users and their needs. We describe the mapping from logs to explanations, and present a prototype tool based on the JaCaMo platform which implements the framework.
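As a rough sketch of the general log-to-explanation idea, and not of the actual JaCaMo-based tool, the Java snippet below filters structured log records by a requested abstraction level and assembles an explanation from the matching entries; the levels, record fields, and wording are all assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the general idea (logs -> level-specific explanation), not
// the paper's actual tool. Levels, fields, and phrasing are assumptions.
public class LogToExplanation {

    enum Level { CODE, AGENT, DOMAIN }   // hypothetical abstraction levels

    record LogRecord(long time, Level level, String event) {}

    // Keep only the records at the requested level and join them
    // into a readable, time-stamped explanation.
    static String explain(List<LogRecord> log, Level requested) {
        return log.stream()
                  .filter(r -> r.level() == requested)
                  .map(r -> String.format("[t=%d] %s", r.time(), r.event()))
                  .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        List<LogRecord> log = List.of(
            new LogRecord(1, Level.CODE,   "plan p_deliver_2 selected for +!deliver(pkg3)"),
            new LogRecord(1, Level.AGENT,  "adopted goal deliver(pkg3) given belief at(depot)"),
            new LogRecord(2, Level.DOMAIN, "package pkg3 dispatched from the depot"));

        System.out.println("-- developer view --");
        System.out.println(explain(log, Level.CODE));
        System.out.println("-- domain-expert view --");
        System.out.println(explain(log, Level.DOMAIN));
    }
}
```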
Conference Papers
Towards a Multi-Level Explainability Framework for Engineering and Understanding BDI Agent Systems
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
In Proceedings of the 24th Workshop "From Objects to Agents", Rome, Italy, November 6-8, 2023
Explainability is increasingly considered a crucial property of AI-based systems, including those engineered in terms of agents and multi-agent systems. This property is primarily important at the user level, to increase, e.g., system trustworthiness, but it can also play an important role at the engineering level, supporting activities such as debugging and validation. In this paper, we focus on BDI agent systems and introduce a multi-level explainability framework for understanding the system’s behaviour that targets different classes of users: developers who implement the system, software designers who verify its soundness, and end users. A prototype implementation of a tool based on the JaCaMo platform for multi-agent systems is adopted to explore the idea in practice.
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
In Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII, 2025
doi: 10.1007/978-3-031-82039-7_6
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical and raises the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective, aiming to promote norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective have limitations regarding the representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study to improve the robustness of agents’ decision-making in a production automation system.
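To give a flavour of treating norms and sanctions as first-class abstractions, here is a minimal Java sketch; it does not reproduce NPL(s) syntax or the proposed agent architecture, and the names and enforcement logic are assumptions.

```java
import java.util.List;

// Illustrative sketch only: norms and sanctions as first-class objects,
// with a simple agent-side enforcement step. It does not reproduce
// NPL(s) or the paper's architecture; all names are assumptions.
public class NormEnforcementSketch {

    record Norm(String id, String obligation) {}            // what ought to be done
    record Sanction(String normId, String consequence) {}   // reaction to a violation

    // Select the sanctions attached to norms whose obligations
    // were not fulfilled.
    static List<Sanction> enforce(List<Norm> norms, List<Sanction> sanctions,
                                  List<String> fulfilled) {
        return sanctions.stream()
                .filter(s -> norms.stream().anyMatch(n ->
                        n.id().equals(s.normId()) && !fulfilled.contains(n.obligation())))
                .toList();
    }

    public static void main(String[] args) {
        List<Norm> norms = List.of(new Norm("n1", "report_defect"),
                                   new Norm("n2", "meet_deadline"));
        List<Sanction> sanctions = List.of(new Sanction("n1", "reputation_penalty"),
                                           new Sanction("n2", "fine(10)"));

        // The agent fulfilled n1 but not n2, so only n2's sanction applies.
        enforce(norms, sanctions, List.of("report_defect"))
                .forEach(s -> System.out.println("apply " + s.consequence()));
    }
}
```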
Preprints
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
2024
doi: 10.48550/arXiv.2403.15128
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical and raises the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective, aiming to promote norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective have limitations regarding the representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study to improve the robustness of agents’ decision-making in a production automation system.