Publications
See my Google Scholar page for a complete and up-to-date list. If you are interested in my work, please feel free to drop me an email: elena.yan@emse.fr
Theses
Telemedicine and Wearable Computing in Support of Healthcare Personnel for Stroke Diagnosis: The TeleStroke Project as a Case Study
Elena Yan
University of Bologna, 2021
The rapid development of technology has made it possible to introduce new tools and opportunities in the healthcare sector. In particular, telemedicine has seen wide adoption in recent years, enabling connection and communication among physicians, healthcare professionals, and patients located in different places, and overcoming the problems caused by the uneven distribution of resources across the territory. It is in this context that the “TeleStroke” project is set: a teleconsultation and telecooperation system supporting healthcare personnel in the management of cerebral stroke. In particular, by relying on standardized tools such as the NIHSS, the system helps physicians quantify the severity of the stroke and determine the care pathway to follow for the patient. Moreover, the use of wearable devices, in particular smart glasses, allows personnel to interact with the device hands-free and to concentrate on the tasks involved in managing the patient. This thesis aims to continue the work of the “TeleStroke” project, with greater attention to the wearable component, adding new features in order to strengthen and improve the system.
A multi-level explainability framework for BDI multi-agent systems
Elena Yan
University of Bologna, 2023
As software systems become more complex and the level of abstraction increases, programming and understanding behaviour become more difficult. This is particularly evident in autonomous systems that need to be resilient to change and adapt to possibly unexpected problems, as there are not yet mature tools for supporting such understanding. A complete understanding of the system is indispensable at every stage of software development, starting with the initial requirements analysis by domain experts, through development, implementation, debugging, testing, and product validation. A common and valid approach to increasing understandability in the field of Explainable AI is to provide explanations that convey the decision-making processes and the motivations behind the choices made by the system. Since explanations serve different use cases and target classes of users with different requirements and goals, the generated explanations need to deal with different levels of abstraction. This thesis introduces the idea of multi-level explainability as a way to generate different explanations for the same system at different levels of detail. A low-level explanation tied to detailed code could help developers in the debugging and testing phases, while a high-level explanation could support domain experts and designers, or contribute to the validation phase by aligning the system with the requirements. The model taken as a reference for the automatic generation of explanations is the BDI (Belief-Desire-Intention) model, as it is easier for humans to understand the mentalistic explanation of a system that behaves rationally given its desires and current beliefs. In this work, we prototype an explainability tool for BDI agents and multi-agent systems that handles multiple levels of abstraction and can be used for different purposes by different classes of users.
Journal Articles
A multi-level explainability framework for engineering and understanding BDI agents
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
Autonomous Agents and Multi-Agent Systems, 2025
doi: 10.1007/s10458-025-09689-6
As the complexity of software systems rises, explainability, i.e. the ability of systems to provide explanations of their behaviour, becomes a crucial property. This is true for any AI-based system, including autonomous systems that exhibit decision-making capabilities such as multi-agent systems. Although explainability is generally considered useful to increase the level of trust for end users, we argue it is also an interesting property for software engineers, developers, and designers to debug and validate the system’s behaviour. In this paper, we propose a multi-level explainability framework for BDI agents to generate explanations of a running system from logs at different levels of abstraction, tailored to different users and their needs. We describe the mapping from logs to explanations, and present a prototype tool based on the JaCaMo platform which implements the framework.
Conference Papers
Engineering inter-agent explainability in BDI agents
Katharine Beaumont, Elena Yan, Samuele Burattini, and Rem Collier
In International Workshop on Explainable, Trustworthy, and Responsible AI and Multi-Agent Systems, 2025
Despite inter-agent explainability being recognised as a potential enabler of useful dynamics for communication and cooperation in belief-desire-intention (BDI) multi-agent systems, research on explainability has been mostly focused on targeting humans. In this paper, we survey the existing literature on BDI explainability and identify how existing strategies align with the problem of engineering BDI agents that can exchange explanations with each other. We then discuss a roadmap towards technically implementing such inter-agent explainability mechanisms in BDI multi-agent systems. Finally, we discuss potential applications that inter-agent explainability could enhance or support.
Towards an Ontology for Uniform Representations of Agent State for Heterogeneous Inter-Agent Explanations
Katharine Beaumont, Elena Yan, and Rem Collier
In Second International Workshop on Hypermedia Multi-Agent Systems (HyperAgents 2025) at ECAI 2025, 2025
Inter-agent explanations are an emerging approach to agent communication that enables agents to share their cognitive processes in order to reach mutual understanding. A key challenge is that agents are often heterogeneous, built on different paradigms and architectures, which makes their internal representations difficult to exchange directly. The Semantic Web offers key technologies, in the form of ontologies, that can facilitate interoperability and shared understanding between hypermedia agents operating on the Web. We present work towards providing Agent Abstraction ontologies that would allow hypermedia agents to abstract and exchange information about their cognitive processes as part of inter-agent explanations.
Towards a Multi-Level Explainability Framework for Engineering and Understanding BDI Agent Systems
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
In Proceedings of the 24th Workshop "From Objects to Agents", Roma, Italy, November 6-8, 2023
Explainability is increasingly considered a crucial property of AI-based systems, including those engineered in terms of agents and multi-agent systems. This property is primarily important at the user level, e.g., to increase system trustworthiness, but it can also play an important role at the engineering level, supporting activities such as debugging and validation. In this paper, we focus on BDI agent systems and introduce a multi-level explainability framework for understanding the system’s behaviour that targets different classes of users: developers who implement the system, software designers who verify its soundness, and end users. A prototype implementation of a tool based on the JaCaMo platform for multi-agent systems is adopted to explore the idea in practice.
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
In Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII, 2025
doi: 10.1007/978-3-031-82039-7_6
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical and raises the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective and aims at promoting norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective present limitations regarding the representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose the NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study for improving the robustness of agents’ decision-making in a production automation system.
Perspectives on Regulation Adaptation in Multi-Agent Systems: from Agent to Organization Centric and Beyond
Elena Yan, Luis Nardin, Jomi Hübner, Olivier Boissier, and Jaime Sichman
In Anais do XIX Workshop-Escola de Sistemas de Agentes, seus Ambientes e Aplicações, Fortaleza/CE, 2025
In Multi-Agent Systems (MAS), the regulation of agents aims to strike a balance between the control of the system and the agents’ autonomy. The ability of a MAS to adapt its regulations at run-time is an important feature that enables it to remain flexible in changing situations. There is no single approach to designing such an ability. In this paper, we discuss the different options along the multi-agent oriented programming dimensions, i.e., agent, environment, interaction, and organization. We show that regulation adaptation can be managed within a single dimension or distributed across multiple dimensions. We use a case study in the manufacturing system domain to motivate regulation adaptation in each of these dimensions.
A Regulation Adaptation Model for Multi-Agent Systems
Elena Yan, Luis Nardin, Olivier Boissier, and Jaime Sichman
In 28th European Conference on Artificial Intelligence (ECAI 2025), Bologna, Italy, 2025
In multi-agent systems (MAS), agents can be governed by regulations. Due to an ever-evolving set of exogenous or endogenous changes, the ability of MAS to adapt regulations becomes crucial. In the MAS literature, there is a lack of comprehensive works defining models to adapt regulations. We propose a general regulation adaptation model for MAS that defines regulation adaptation representations (i.e., detect-fact, design-fact, and execute-fact) and regulation adaptation capabilities (i.e., detect, design, and execute). We also propose a method that uses constitutive and regulative norms together with the adaptation representations to govern the execution of the regulation adaptation capabilities. We illustrate the feasibility of our model by extending the SAI and NPL(s) normative engines to support regulation adaptation and integrating them into a normative agent architecture in the JaCaMo framework.
Self-Adaptive Regulation Mechanisms for a Trustworthy and Sustainable Industry of the Future
Elena Yan
In ECAI 2025 Doctoral Consortium, 2025
The increasing distribution of autonomous agents incorporating Artificial Intelligence technologies to operate (e.g., perceive, decide, interact, and act) in dynamic shared environments raises the challenge of ensuring their governance without limiting their autonomy. In multi-agent systems (MAS), regulation concepts and mechanisms, such as rules, norms, and sanctions, are usually integrated into what are called normative MAS to support balancing the agents’ autonomy with the system’s objectives. In our research, we explore the integration of regulation management mechanisms into MAS, emphasizing regulation adaptation mechanisms to enable the system to adapt to evolving contextual conditions at execution time. Regulation adaptation is an underexplored capability in MAS. We define a conceptual model and design the corresponding representations and management mechanisms that enable adaptation in regulated MAS. Regulation adaptation is useful in various domains, such as manufacturing systems, where adaptation can support their resilience, trustworthiness, and sustainability.
Preprints
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
2024
doi: 10.48550/arXiv.2403.15128
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical and raises the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective and aims at promoting norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective present limitations regarding the representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose the NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study for improving the robustness of agents’ decision-making in a production automation system.
A Unified View on Regulation Management in Multi-Agent Systems
Elena Yan, Luis G. Nardin, Olivier Boissier, and Jaime S. Sichman
2025
Regulating a multi-agent system (MAS) to achieve a balance between the autonomy of agents and the control of the system is still a challenge. Regulation management in MAS has been conceptualized from various perspectives in the literature, whose intersections open up a wide spectrum of design options. We propose a unified view on regulation management in MAS that identifies design options with respect to three perspectives: the regulation capabilities, the multi-agent oriented programming dimensions, and the architectural style. We use our unified view to review and classify existing MAS frameworks in the literature, highlighting the dominant and underexplored views on regulation management in MAS.