



Andreas Kalaitzakis, Jérôme Euzenat, À quoi sert la spécialisation en évolution culturelle de la connaissance?, in: Maxime Morge (éd), Actes 31e journées francophones sur Systèmes multi-agent (JFSMA), Strasbourg (FR), pp76-85, 2023
BibTeX
https://moex.inria.fr/files/papers/kalaitzakis2023a.pdf
https://pfia23.icube.unistra.fr/downloads/JFSMA/papers/JFSMA_2023_paper_23.pdf
Agents may evolve their ontologies by jointly accomplishing a task. We consider a set of tasks, of which each agent only considers a part. We hypothesize that the fewer tasks an agent considers, the higher the accuracy of its best task. To test this, we simulate different populations considering an increasing number of tasks. Counter-intuitively, the hypothesis is not verified. On the one hand, when agents have unlimited memory, the more tasks an agent considers, the more accurate it is. On the other hand, when agents have limited memory, the objectives of maximizing the accuracy of their best tasks and of agreeing with one another are mutually exclusive. When societies favor specialization, agents do not improve their accuracy. However, these agents decide more often on the basis of their best tasks, thereby improving the performance of their society.
Experiments: [20230110-MTOA] [20230120-MTOA]
Cultural knowledge evolution, Multi-agent simulation, Agent specialization
Multi-agent systems/Systèmes multi-agents, Semantic web/Web sémantique, Cultural knowledge evolution/Évolution culturelle de la connaissance
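
A toy model can illustrate the memory intuition behind the hypothesis tested above (the budget and the saturating accuracy curve below are illustrative assumptions, not the paper's agent model): an agent splitting a fixed memory over more tasks retains less per task.

    # Toy sketch: accuracy of the best task when a fixed memory budget is
    # split evenly over the tasks an agent considers (illustrative only).
    MEMORY_BUDGET = 20  # total number of facts an agent can retain

    def best_task_accuracy(n_tasks: int, budget: int = MEMORY_BUDGET) -> float:
        """Assumed saturating accuracy in the memory devoted to a task."""
        per_task = budget / n_tasks
        return per_task / (per_task + 1)

    for n in (1, 2, 4, 8):
        print(f"{n} tasks -> best-task accuracy ~ {best_task_accuracy(n):.2f}")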


Andreas Kalaitzakis, Jérôme Euzenat, Multi-tasking resource-constrained agents reach higher accuracy when tasks overlap, in: Proc. 20th European conference on multi-agent systems (EUMAS), Napoli (IT), (Vadim Malvone, Aniello Murano (eds), Proc. 20th European conference on multi-agent systems (EUMAS), Lecture notes in computer science 14282, 2023), pp425-434, 2023
[DOI: 10.1007/978-3-031-43264-4_28] BibTeX
https://moex.inria.fr/files/papers/kalaitzakis2023b.pdf
Agents have been previously shown to evolve their ontologies while interacting over a single task. However, little is known about how interacting over several tasks affects the accuracy of agent ontologies. Is knowledge learned by tackling one task beneficial for another task? We hypothesize that multi-tasking agents tackling tasks that rely on the same properties are more accurate than multi-tasking agents tackling tasks that rely on different properties. We test this hypothesis by varying two parameters. The first parameter is the number of tasks assigned to the agents. The second parameter is the number of common properties among these tasks. Results show that when decisions for different tasks rely on the same properties, multi-tasking agents reach higher accuracy. This suggests that when agents tackle several tasks, it is possible to transfer knowledge from one task to another.
Experiments: [20230505-MTOA]
Cultural knowledge evolution, Knowledge transfer, Multi-tasking
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance
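
The two parameters varied in this experiment (number of tasks, number of properties common to them) can be sketched as follows; the property names and generation scheme are illustrative assumptions, not the paper's task model.

    def make_tasks(n_tasks: int, n_shared: int, n_specific: int) -> list:
        """Build each task as a set of properties: a core shared by all
        tasks plus task-specific properties (illustrative scheme)."""
        shared = {f"shared_{i}" for i in range(n_shared)}
        return [shared | {f"task{t}_prop{i}" for i in range(n_specific)}
                for t in range(n_tasks)]

    # Full overlap between tasks is obtained with n_specific=0,
    # no overlap with n_shared=0.
    for task in make_tasks(n_tasks=3, n_shared=2, n_specific=2):
        print(sorted(task))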


Adriana Luntraru, Value-sensitive knowledge evolution, Master's thesis, Université Grenoble Alpes, Grenoble (FR), 2023
BibTeX
https://moex.inria.fr/files/reports/m2r-luntraru.pdf
Cultural values are cognitive representations of general objectives, such as independence or mastery, that people use to distinguish whether something is "good" or "bad". More specifically, people may use their values to evaluate alternatives and pick the most compatible one. Cultural values have been previously used in artificial societies of agents with the purpose of replicating and predicting human behavior. However, to the best of our knowledge, they have never been used in the context of cultural knowledge evolution. We consider cooperating agents which adapt their individually learned ontologies by interacting with each other to agree. When two agents disagree during an interaction, one of them needs to adapt its ontology. We use the cultural values of independence, novelty, authority and mastery to influence the choice of which agent adapts in a population of agents sharing the same values. We investigate the effects the choice of cultural values has on the knowledge obtained. Our results show that agents do not improve the accuracy of their knowledge without using the mastery value. Under certain conditions, independence causes the agents to converge to successful interactions faster, and novelty increases knowledge diversity, but both effects come with a large reduction in accuracy. We did not, however, find any significant effects of authority.
Experiments: [20230523-VBCE]
Multi-agent systems/Systèmes multi-agents, Semantic web/Web sémantique, Cultural knowledge evolution/Évolution culturelle de la connaissance
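
How shared values could bias which of two disagreeing agents adapts may be sketched as follows (the value names come from the abstract; the weighting scheme, agent fields and numbers are assumptions, not the thesis' model):

    import random

    def pick_adapter(a: dict, b: dict, values: set, rng: random.Random) -> dict:
        """Choose which of two disagreeing agents adapts its ontology,
        biased by the population's shared values (illustrative weights)."""
        weight_a = weight_b = 1.0
        if "mastery" in values:    # the less accurate agent should adapt
            weight_a += max(0.0, b["accuracy"] - a["accuracy"])
            weight_b += max(0.0, a["accuracy"] - b["accuracy"])
        if "authority" in values:  # the younger agent defers to the older
            if a["age"] < b["age"]:
                weight_a += 1.0
            else:
                weight_b += 1.0
        return rng.choices([a, b], weights=[weight_a, weight_b])[0]

    rng = random.Random(0)
    alice = {"name": "alice", "accuracy": 0.6, "age": 3}
    bob = {"name": "bob", "accuracy": 0.9, "age": 7}
    print(pick_adapter(alice, bob, {"mastery", "authority"}, rng)["name"], "adapts")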


Anaïs Siebers, Intrinsic exploration-motivation in cultural knowledge evolution, Master's thesis, Ruhr Universität, Bochum (DE), 2023
BibTeX
https://moex.inria.fr/files/reports/msc-siebers.pdf
In cultural knowledge evolution simulated by multi-agent simulations, agents can improve the accuracy of their knowledge by interacting with other agents and adapting their knowledge with the aim of agreeing. But their knowledge might be confined to specific areas because they do not have the capacity to explore the world around them. Since intrinsic motivation has already been shown to increase exploration in artificial agents, this thesis investigates whether and how agents in simulations of cultural knowledge evolution can be motivated to explore, and how far this improves and changes their knowledge. Three different kinds of motivation were investigated: curiosity, creativity and non-exploration. Moreover, intrinsic motivation was modelled with and without reinforcement learning. Agents either explored on their own or picked specific interaction partners. It has been shown that it is possible to model agents with intrinsic motivation to explore in cultural knowledge evolution, and that this has a significant effect on the agents’ knowledge. Contrary to expectations and other studies, this did not lead to an increase in knowledge completeness. Out of all intrinsic motivations, curiosity yielded the highest accuracy and completeness. Models with reinforcement learning performed similarly to direct models. As expected, intrinsic motivation led to faster convergence of the agents’ knowledge, especially with social agents. Heterogeneously motivated agents had a higher accuracy and completeness than homogeneously motivated agents only in specific cases. This thesis can be regarded as a foundation for further investigation into the role of intrinsic motivation in cultural knowledge evolution. Different forms of intrinsic motivation or different reinforcement learning techniques could be tested. Additionally, intrinsic motivation at different stages of the experiment, or in different ratios (for example, curious agents mixed with unmotivated agents), could be investigated in more detail. Lastly, agents could teach other agents about what they have explored most.
Experiments: [20230822-IKEM]
Cultural knowledge evolution, Intrinsic motivation, Exploration, Artificial curiosity, Computational creativity, Multi-agent simulation
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance
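
One simple form of intrinsic exploration motivation is count-based curiosity: prefer what has been experienced least. A minimal sketch (the thesis compares several motivations, with and without reinforcement learning; this visit-count rule is an illustrative assumption, not its exact model):

    import random
    from collections import Counter

    class CuriousAgent:
        """Prefers the objects (or partners) it has visited least often."""

        def __init__(self, rng: random.Random):
            self.rng = rng
            self.visits = Counter()

        def choose(self, options: list) -> str:
            least = min(self.visits[o] for o in options)
            pick = self.rng.choice([o for o in options if self.visits[o] == least])
            self.visits[pick] += 1
            return pick

    agent = CuriousAgent(random.Random(1))
    print([agent.choose(["square", "circle", "triangle"]) for _ in range(6)])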


Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge transmission and improvement across generations do not need strong selection, in: Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew Taylor (eds), Proc. 21st ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (Online), pp163-171, 2022
BibTeX [presentation]
https://www.ifaamas.org/Proceedings/aamas2022/pdfs/p163.pdf
https://moex.inria.fr/files/papers/bourahla2022a.pdf
Agents have been used for simulating cultural evolution and cultural evolution can be used as a model for artificial agents. Previous results have shown that horizontal, or intra-generation, knowledge transmission allows agents to improve the quality of their knowledge to a certain level. Moreover, variation generated through vertical, or inter-generation, transmission allows agents to exceed that level. Such results were obtained under specific conditions such as the drastic selection of agents allowed to transmit their knowledge, seeding the process with correct knowledge or introducing artificial noise during transmission. Here, we question the necessity of such measures and study their impact on the quality of transmitted knowledge. For that purpose, we combine the settings of two previous experiments and relax these conditions (no strong selection of teachers, no fully correct seed, no introduction of artificial noise). The rationale is that if interactions lead agents to improve their overall knowledge quality, this should be sufficient to ensure correct knowledge transmission, and that transmission mechanisms are sufficiently imperfect to produce variation. In this setting, we confirm that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also show that horizontal transmission is able to compensate for the lack of parent selection if it is maintained for long enough. This means that it is not necessary to take the most successful agents as teachers, either in vertical or in horizontal transmission, to cumulatively improve knowledge.
Experiments: [20210601-DOTG] [20210927-DOTG]
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Multi-agent systems/Systèmes multi-agents, Semantic web/Web sémantique, Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology matching/Alignement d'ontologies
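
The alternation of the two transmission regimes can be sketched as follows; knowledge is reduced to a single number (a placeholder for learned ontologies), so this shows the control flow only, with random parents instead of selected teachers:

    import random

    def interact_and_adapt(a: dict, b: dict) -> None:
        """Horizontal step: both agents drift towards agreement (placeholder)."""
        mid = (a["knowledge"] + b["knowledge"]) / 2
        a["knowledge"] += 0.1 * (mid - a["knowledge"])
        b["knowledge"] += 0.1 * (mid - b["knowledge"])

    def learn_from(parent: dict, rng: random.Random) -> dict:
        """Vertical step: imperfect copying of a randomly chosen parent;
        the copying error itself supplies variation, no artificial noise."""
        return {"knowledge": parent["knowledge"] + rng.gauss(0.0, 0.05)}

    rng = random.Random(7)
    population = [{"knowledge": rng.random()} for _ in range(10)]
    for generation in range(5):
        for _ in range(100):                     # horizontal, intra-generation
            a, b = rng.sample(population, 2)
            interact_and_adapt(a, b)
        population = [learn_from(rng.choice(population), rng)  # vertical
                      for _ in range(len(population))]
    print(sorted(round(p["knowledge"], 3) for p in population))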


Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Transmission de connaissances et sélection, in: Valérie Camps (éd), Actes 30e journées francophones sur Systèmes multi-agent (JFSMA), Saint-Étienne (FR), pp63-72, 2022
BibTeX
https://moex.inria.fr/files/papers/bourahla2022b.pdf
https://moex.inria.fr/files/papers/bourahla2022b-cnia2022.pdf
Agents can be used to simulate cultural evolution, and cultural evolution can be used as a model for artificial agents. Experiments have shown that intra-generational knowledge transmission allows agents to improve the quality of their knowledge. Moreover, inter-generational transmission allows them to exceed that level. These results were obtained under particular conditions: drastic selection of the agents transmitting their knowledge, initialization with correct knowledge, or introduction of noise during transmission. In order to study the impact of these measures on the quality of the transmitted knowledge, we combine the settings of two previous experiments and relax these conditions. This setting confirms that vertical transmission improves the quality of the knowledge obtained through horizontal transmission, even without drastic selection and oriented learning. It also shows that sufficient intra-generational transmission can compensate for the absence of parental selection.
Experiments: [20210601-DOTG] [20210927-DOTG]
Multi-agent social simulation, Cultural evolution, Knowledge transmission, Agent generations, Cultural knowledge evolution
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance


Yasser Bourahla, Jérôme David, Jérôme Euzenat, Meryem Naciri, Measuring and controlling knowledge diversity, in: Tiago Prince Sales, Maria Hedblom, He Tan, Lucía Gómez Álvarez, Rafael Peñaloza, Srdjan Vesic (eds), Proc. 1st JOWO workshop on formal models of knowledge diversity (FMKD), Jönköping (SE), 2022
BibTeX
http://ceur-ws.org/Vol-3249/paper1-FMKD.pdf
https://moex.inria.fr/files/papers/bourahla2022c.pdf
Assessing knowledge diversity may be useful for many purposes. In particular, it is necessary to measure diversity in order to understand how it arises or is preserved; it is also necessary to control it in order to measure its effects. Here we consider measuring knowledge diversity using two components: (a) a diversity measure taking advantage of (b) a knowledge difference measure. We present the general principles and various candidates for such components. We discuss how these measures may be used to generate populations of agents with controlled levels of knowledge diversity.
Supplementary material: [notebook]
Knowledge diversity, Diversity measure, Ontology dissimilarity, Diversity control, Entropy
Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology distances/Distances entre ontologies
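
A minimal sketch of the two-component scheme, assuming ontologies reduced to sets of class names and the Jaccard distance as the difference measure (one simple candidate; the paper surveys several options for both components):

    from itertools import combinations

    def jaccard_distance(o1: set, o2: set) -> float:
        """Difference measure (b): 1 - |o1 ∩ o2| / |o1 ∪ o2|."""
        if not o1 and not o2:
            return 0.0
        return 1.0 - len(o1 & o2) / len(o1 | o2)

    def mean_pairwise_diversity(ontologies: list) -> float:
        """Diversity measure (a): average pairwise difference."""
        pairs = list(combinations(ontologies, 2))
        return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

    population = [{"Cat", "Dog", "Animal"},
                  {"Cat", "Dog", "Pet"},
                  {"Bird", "Animal"}]
    print(f"diversity = {mean_pairwise_diversity(population):.2f}")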


Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge improvement and diversity under interaction-driven adaptation of learned ontologies, in: Ulle Endriss, Ann Nowé, Frank Dignum, Alessio Lomuscio (eds), Proc. 20th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), London (UK), pp242-250, 2021
BibTeX [presentation]
http://www.ifaamas.org/Proceedings/aamas2021/pdfs/p242.pdf
https://moex.inria.fr/files/papers/bourahla2021a.pdf
When agents independently learn knowledge, such as ontologies, about their environment, it may be diverse, incorrect or incomplete. This knowledge heterogeneity could lead agents to disagree, thus hindering their cooperation. Existing approaches usually deal with this interaction problem by relating ontologies, without modifying them, or, on the contrary, by focusing on building common knowledge. Here, we consider agents adapting ontologies learned from the environment in order to agree with each other when cooperating. In this scenario, fundamental questions arise: Do they achieve successful interaction? Can this process improve knowledge correctness? Do all agents end up with the same ontology? To answer these questions, we design a two-stage experiment. First, agents learn to take decisions about the environment by classifying objects and the learned classifiers are turned into ontologies. In the second stage, agents interact with each other to agree on the decisions to take and modify their ontologies accordingly. We show that agents indeed reduce interaction failure, most of the time they improve the accuracy of their knowledge about the environment, and they do not necessarily opt for the same ontology.
Experiments: [20201001-DOLA] [20200623-DOLA]
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Multi-agent systems/Systèmes multi-agents, Semantic web/Web sémantique, Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology matching/Alignement d'ontologies
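
The second-stage interaction game can be sketched as follows; the feature-tuple encoding and the copy-the-partner adaptation are placeholder assumptions standing in for the paper's learned ontologies:

    import random

    def classify(agent: dict, obj: tuple) -> str:
        """Apply the agent's learned mapping from features to a decision."""
        return agent["mapping"].get(obj, "unknown")

    def interact(a: dict, b: dict, obj: tuple, rng: random.Random) -> bool:
        """Success if both decisions agree; on failure, one randomly
        chosen agent adopts the other's decision (placeholder adaptation)."""
        da, db = classify(a, obj), classify(b, obj)
        if da == db:
            return True
        adapter, decision = rng.choice([(a, db), (b, da)])
        adapter["mapping"][obj] = decision
        return False

    rng = random.Random(5)
    alice = {"mapping": {("round", "small"): "Ball"}}
    bob = {"mapping": {("round", "small"): "Orange"}}
    print(interact(alice, bob, ("round", "small"), rng))  # False: failure
    print(interact(alice, bob, ("round", "small"), rng))  # True: agreement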


Jérôme Euzenat, Interaction-based ontology alignment repair with expansion and relaxation, in: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), pp185-191, 2017
[DOI: 10.24963/ijcai.2017/27] BibTeX
http://static.ijcai.org/proceedings-2017/0027.pdf
https://moex.inria.fr/files/papers/euzenat2017a.pdf
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassify objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments to avoid reproducing the same mistake. Such repair experiments had been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these experiments and, using new measures, show that previous results were underestimated. We introduce new adaptation operators that improve on those previously considered. We also allow agents to go beyond the initial operators in two ways: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) Agents still converge to a state in which no mistake occurs. (2) They achieve results far closer to the correct alignments than previously found. (3) They again reach 100% precision and coherent alignments.
Erratum: The results reported in this paper for operators addjoin and refadd are not accurate, due to a software error. The results reported were worse than they should have been. Updated results can be found in [20180308-NOOR], [20180311-NOOR] and [20180529-NOOR].
Experiments: [20170214a-NOOR] [20170214b-NOOR] [20170215a-NOOR] [20170215b-NOOR] [20170216-NOOR]
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology matching/Alignement d'ontologies, Revision/Révision
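
Repair upon communication failure can be sketched by contrasting deletion with relaxation of an equivalence into a subsumption; this paraphrases the spirit of the operators, not their exact definitions in the paper:

    from dataclasses import dataclass

    @dataclass
    class Correspondence:
        source: str    # class in the agent's own ontology
        target: str    # class in the interlocutor's ontology
        relation: str  # "=" (equivalent) or "<" (subsumed)

    def repair(alignment: list, failed: Correspondence, mode: str = "relax") -> list:
        """Discard the faulty correspondence ('delete'), or weaken an
        equivalence into a subsumption ('relax') to keep some of it."""
        if mode == "delete" or failed.relation != "=":
            return [c for c in alignment if c != failed]
        relaxed = Correspondence(failed.source, failed.target, "<")
        return [relaxed if c == failed else c for c in alignment]

    a = [Correspondence("Dog", "Hound", "="), Correspondence("Cat", "Feline", "=")]
    print(repair(a, a[0], mode="relax"))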


Jérôme Euzenat, Crafting ontology alignments from scratch through agent communication, in: Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Nice (FR), (Bo An, Ana Bazzan, João Leite, Serena Villata, Leendert van der Torre (eds), Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Lecture notes in computer science 10621, 2017), pp245-262, 2017
[DOI: 10.1007/978-3-319-69131-2_15] BibTeX
https://moex.inria.fr/files/papers/euzenat2017b.pdf
Agents may use different ontologies for representing knowledge and take advantage of alignments between ontologies in order to communicate. Such alignments may be provided by dedicated algorithms, but their accuracy is far from satisfying. We already explored operators allowing agents to repair such alignments while using them for communicating. The question remained of the capability of agents to craft alignments from scratch in the same way. Here we explore the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them as they have nothing to repair. Hence, we introduce the capability for agents to risk adding new correspondences when no existing one is useful. We compare and discuss the results provided by this modality and show that, due to this generative capability, agents reach better results than without it in terms of the accuracy of their alignments. When starting with empty alignments, alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication.
Erratum: The results of [20170531-NOOR] are inconclusive.
Experiments: [20170529-NOOR] [20170530-NOOR] [20170531-NOOR] [20170706-NOOR]
Ontology alignment, Alignment repair, Cultural knowledge evolution, Agent simulation, Coherence, Network of ontologies
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology matching/Alignement d'ontologies, Revision/Révision
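
The generative modality can be sketched as follows: when the alignment offers no correspondence for the class at hand, the agent risks a new one rather than staying silent (choosing the candidate at random is an illustrative assumption):

    import random

    def answer(alignment: dict, own_class: str, partner_classes: list,
               rng: random.Random) -> str:
        """Use the alignment if possible; otherwise gamble on a new
        correspondence, to be repaired later if it causes failures."""
        if own_class in alignment:
            return alignment[own_class]
        guess = rng.choice(partner_classes)
        alignment[own_class] = guess  # risk adding a new correspondence
        return guess

    rng = random.Random(3)
    alignment = {}  # starting from scratch: nothing to repair yet
    print(answer(alignment, "Dog", ["Hound", "Feline", "Reptile"], rng))
    print(alignment)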


Jérôme Euzenat, Knowledge diversity under socio-environmental pressure, in: Michael Rovatsos (ed), Investigating diversity in AI: the ESSENCE project, 2013-2017, Deliverable, ESSENCE, 62p., 2017, pp28-30
BibTeX
https://moex.inria.fr/files/papers/euzenat2017c.pdf
Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity.
Multi-agent systems/Systèmes multi-agents, Cultural knowledge evolution/Évolution culturelle de la connaissance, Ontology matching/Alignement d'ontologies, Semantic web/Web sémantique
