👋 Hello, I am Elena Yan, currently a Ph.D. student in Computer Science at the Department of Computer Science and Intelligent Systems of the Institut Henri FAYOL at École des Mines de Saint-Étienne in France.
My thesis focuses on developing self-adaptive regulation mechanisms in multi-agent systems, supervised by Olivier Boissier, Jaime S. Sichman, and Luis G. Nardin. I obtained my master's and bachelor's degrees in Computer Science and Engineering at the University of Bologna in Italy, supervised by Alessandro Ricci.
My research interests center around multi-agent systems and engineering methodologies, with the aim of deploying regulated, adaptive, and trustworthy systems. My current research focuses on developing a model and mechanisms for multi-agent systems that can self-regulate and adapt their regulations for a trustworthy and sustainable industry of the future. Some keywords are:
- Multi-Agent Systems
- Regulation Management
- Regulation Adaptation
- Explainable Agents
Find more details in my CV. If you are interested in my work, please feel free to drop me an email: elena.yan@emse.fr
Selected Publications
Towards a Multi-Level Explainability Framework for Engineering and Understanding BDI Agent Systems
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
In Proceedings of the 24th Workshop "From Objects to Agents", Roma, Italy, November 6-8, 2023
Explainability is increasingly considered a crucial property of AI-based systems, including those engineered in terms of agents and multi-agent systems. This property is primarily important at the user level, to increase, e.g., system trustworthiness, but can also play an important role at the engineering level, to support activities such as debugging and validation. In this paper, we focus on BDI agent systems and introduce a multi-level explainability framework for understanding the system’s behaviour that targets different classes of users: developers who implement the system, software designers who verify its soundness, and end users. A prototype implementation of a tool based on the JaCaMo platform for multi-agent systems is adopted to explore the idea in practice.
A multi-level explainability framework for engineering and understanding BDI agents
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
Autonomous Agents and Multi-Agent Systems, 2025
doi: 10.1007/s10458-025-09689-6
As the complexity of software systems rises, explainability - i.e. the ability of systems to provide explanations of their behaviour - becomes a crucial property. This is true for any AI-based systems, including autonomous systems that exhibit decision making capabilities such as multi-agent systems. Although explainability is generally considered useful to increase the level of trust for end-users, we argue it is also an interesting property for software engineers, developers, and designers to debug and validate the system’s behaviour. In this paper, we propose a multi-level explainability framework for BDI agents to generate explanations of a running system from logs at different levels of abstraction, tailored to different users and their needs. We describe the mapping from logs to explanations, and present a prototype tool based on the JaCaMo platform which implements the framework.
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
In Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII, 2025
doi: 10.1007/978-3-031-82039-7_6
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical and raises the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective, aiming at promoting norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective present limitations regarding the representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study for improving the robustness of agents’ decision-making in a production automation system.
A Unified View on Regulation Management in Multi-Agent Systems
Elena Yan, Luis G. Nardin, Olivier Boissier, and Jaime S. Sichman
2025
Regulating a multi-agent system (MAS) to achieve a balance between the autonomy of agents and the control of the system is still a challenge. Regulation management in MAS has been conceptualized from various perspectives in the literature, whose intersections open up a wide spectrum of design options. We propose a unified view on regulation management in MAS that identifies design options with respect to three perspectives: the regulation capabilities, the multi-agent oriented programming dimensions, and the architectural style. We use our unified view to review and classify existing MAS frameworks in the literature, highlighting the dominant and underexplored views on regulation management in MAS.
Refer to the Research page for details.
Latest News
- August 04, 2025: Our paper “Perspectives on Regulation Adaptation in Multi-Agent Systems: from Agent to Organization Centric and Beyond” is accepted at WESAAC 2025!
- August 01, 2025: Our paper “Towards an Ontology for Uniform Representations of Agent State for Heterogeneous Inter-Agent Explanations” is accepted at HyperAgents@ECAI 2025!
- July 31, 2025: My doctoral consortium application “Self-Adaptive Regulation Mechanisms for a Trustworthy and Sustainable Industry of the Future” is accepted at ECAI 2025!
- July 28 - October 17, 2025: Visiting research period at the University of São Paulo, Brazil, under the supervision of Jaime S. Sichman.
- July 21-25, 2025: Organization of the Summer School on AI Technologies for Trust, Interoperability, Autonomy and Resilience in Industry 4.0.
- July 16, 2025: Our paper “Agent Toolkits: Towards Explainable Agency in ASTRA” is accepted at EUMAS Agent Toolkits 2025!
- July 11, 2025: Our paper “A Regulation Adaptation Model for Multi-Agent Systems” is accepted at ECAI 2025!
Refer to the Activities page for details.
Elena Yan
Department of Computer Science and Intelligent Systems
Henri Fayol Institute
École des Mines de Saint-Étienne
158 cours Fauriel - CS 62362
42023 Saint-Étienne Cedex 2 - France
Email: elena.yan@emse.fr