👋 Hello, I am Elena Yan, currently a Ph.D. student in Computer Science at the Department of Computer Science and Intelligent Systems of the Institut Henri FAYOL at École des Mines de Saint-Étienne in France.
My thesis focuses on developing self-adaptive regulation mechanisms in multi-agent systems, supervised by Olivier Boissier, Jaime S. Sichman, and Luis G. Nardin.
I obtained my master's and bachelor's degrees in Computer Science and Engineering at the University of Bologna in Italy, supervised by Alessandro Ricci.
My research interests center on multi-agent systems and engineering methodologies, with the aim of deploying regulated, adaptive, and trustworthy systems. My current research focuses on developing a model and mechanisms that enable multi-agent systems to self-regulate and adapt their regulations for a trustworthy and sustainable industry of the future. Some keywords are:
- Normative Multi-Agent Systems
- Self-Adaptation
- Software Engineering
- BDI Agents
- Responsible AI
- Industry of the Future
You can find my CV here: Elena Yan’s CV. If you are interested in my work, please feel free to drop me an email: elena.yan@emse.fr
Selected Publications
Towards a Multi-Level Explainability Framework for Engineering and Understanding BDI Agent Systems
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
In Proceedings of the 24th Workshop "From Objects to Agents", Roma, Italy, November 6-8, 2023
Explainability is increasingly considered a crucial property of AI-based systems, including those engineered as agents and multi-agent systems. This property is primarily important at the user level, e.g., to increase system trustworthiness, but it can also play an important role at the engineering level, supporting activities such as debugging and validation. In this paper, we focus on BDI agent systems and introduce a multi-level explainability framework for understanding the system’s behaviour that targets different classes of users: developers who implement the system, software designers who verify its soundness, and final users. A prototype tool based on the JaCaMo platform for multi-agent systems is adopted to explore the idea in practice.
A multi-level explainability framework for engineering and understanding BDI agents
Elena Yan, Samuele Burattini, Jomi Fred Hübner, and Alessandro Ricci
Autonomous Agents and Multi-Agent Systems, 2025
doi: 10.1007/s10458-025-09689-6
As the complexity of software systems rises, explainability - i.e. the ability of systems to provide explanations of their behaviour - becomes a crucial property. This is true for any AI-based system, including autonomous systems that exhibit decision-making capabilities, such as multi-agent systems. Although explainability is generally considered useful for increasing end-users’ trust, we argue it is also a valuable property for software engineers, developers, and designers to debug and validate the system’s behaviour. In this paper, we propose a multi-level explainability framework for BDI agents that generates explanations of a running system from logs at different levels of abstraction, tailored to different users and their needs. We describe the mapping from logs to explanations, and present a prototype tool based on the JaCaMo platform which implements the framework.
An Agent-Centric Perspective on Norm Enforcement and Sanctions
Elena Yan, Luis G. Nardin, Jomi F. Hübner, and Olivier Boissier
In Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII, 2025
doi: 10.1007/978-3-031-82039-7_6
In increasingly autonomous and highly distributed multi-agent systems, centralized coordination becomes impractical, raising the need for governance and enforcement mechanisms from an agent-centric perspective. In our conceptual view, sanctioning norm enforcement is part of this agent-centric perspective, which aims to promote norm compliance while preserving agents’ autonomy. The few works dealing with sanctioning norm enforcement and sanctions from the agent-centric perspective present limitations in their representation of sanctions and the comprehensiveness of their norm enforcement process. To address these drawbacks, we propose NPL(s), an extension of the NPL normative programming language enriched with the representation of norms and sanctions as first-class abstractions. We also propose a BDI normative agent architecture embedding an engine for processing the NPL(s) language and a set of capabilities for approaching the sanctioning norm enforcement process more comprehensively. We apply our contributions in a case study on improving the robustness of agents’ decision-making in a production automation system.
Refer to the Research page for details.
Latest News
- March 21, 2025: Our paper “A Unified View on Regulation Management in Multi-Agent Systems” is accepted at COINE@AAMAS2025!
- March 4, 2025: The post-proceedings version of our COINE@AAMAS paper “An Agent-Centric Perspective on Norm Enforcement and Sanctions” is published! Check it out at: https://doi.org/10.1007/978-3-031-82039-7_6
- January 30, 2025: Our journal paper “A Multi-Level Explainability Framework for Engineering and Understanding BDI Agents” is published in JAAMAS! Check it out at: https://doi.org/10.1007/s10458-025-09689-6
- October 17, 2024: Talk: “A Normative Agent-Centric Approach to Regulate Manufacturing Process” at SeReCo Autumn Workshop 2024, Karlsruhe, Germany. [slides]
- August 19-29, 2024: Participation in the EASSS Summer School, the EUMAS Conference, and the DKG Workshop, Dublin, Ireland.
Refer to the Activities page for details.
Elena Yan
Department of Computer Science and Intelligent Systems
Henri Fayol Institute
École des Mines de Saint-Étienne
158 cours Fauriel - CS 62362
42023 Saint-Étienne Cedex 2 - France
Email: elena.yan@emse.fr