Forgetting/Oubli in areas (2020-10-31)
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Unawareness in multi-agent systems with partial valuations, in: Proc. 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems (LAMAS), Auckland (NZ), 2020
Public signature awareness is satisfied if agents are aware of the vocabulary (the propositions) used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether obtained from the environment or learned through agent communication. This assumption is therefore not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agent signature unawareness, and we give a first view on defining the dynamics of raising and forgetting awareness in this framework.
Awareness, Dynamic Epistemic Logic, Partial valuations, Multi-agent systems
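For intuition only, here is a minimal formal sketch of how partial valuations can encode signature unawareness; the notation (the partial-function arrow, the operators for raising and forgetting awareness) and the reading of awareness as the valuation's domain are assumptions of this note, not necessarily the paper's definitions.

    % Hypothetical sketch: partial valuations as a model of signature unawareness.
    Let $P$ be the set of all propositional variables.
    A partial valuation is a partial function $v \colon P \rightharpoonup \{0,1\}$,
    defined only on $\mathrm{dom}(v) \subseteq P$.
    Reading agent $a$'s awareness as $A_a = \mathrm{dom}(v_a)$,
    agent $a$ is unaware of $p$ whenever $p \notin A_a$.
    Raising awareness of $p$ extends the valuation:
      $\mathrm{dom}(v \oplus p) = \mathrm{dom}(v) \cup \{p\}$;
    forgetting $p$ restricts it:
      $v \ominus p = v \restriction_{\mathrm{dom}(v) \setminus \{p\}}$.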
Jérôme Euzenat, Libero Maesano, An architecture for selective forgetting, in: Proc. 8th SSAISB conference on Artificial Intelligence and Simulation of Behaviour (AISB), Leeds (UK), pp. 117-128, 1991
Some knowledge-based systems will have to deal with increasing amounts of knowledge. In order to avoid memory overflow, it is necessary to clean memory of useless data. This work is a first step toward an intelligent automatic forgetting scheme. The problem of the close relation between forgetting and inferring is addressed, and a general solution is proposed. It is implemented as invalidation operators for reasoning maintenance system dependency graphs. This results in a general architecture for selective forgetting, which is presented in the framework of the Sachem system.
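Purely as an illustration of the idea, and not the Sachem or RMS implementation, the sketch below models a dependency graph in which forgetting a datum invalidates every derived datum that has lost all of its justifications; all class and method names are hypothetical.

    # Illustrative sketch of selective forgetting on a dependency graph
    # (hypothetical names, not the Sachem/RMS implementation).

    class DependencyGraph:
        def __init__(self):
            self.justifications = {}   # datum -> list of premise sets that derive it
            self.base = set()          # data asserted directly (not derived)

        def assert_base(self, datum):
            self.base.add(datum)

        def record_inference(self, datum, premises):
            self.justifications.setdefault(datum, []).append(set(premises))

        def supported(self, datum, alive):
            # A datum is supported if it was asserted directly, or if at least
            # one of its justifications relies only on data that are still alive.
            if datum in self.base:
                return True
            return any(all(p in alive for p in premises)
                       for premises in self.justifications.get(datum, []))

        def forget(self, datum):
            # Selective forgetting: drop the datum, then invalidate every
            # derived datum whose justifications all depended on lost data.
            self.base.discard(datum)
            self.justifications.pop(datum, None)
            alive = (self.base | set(self.justifications)) - {datum}
            changed = True
            while changed:                      # propagate invalidation to a fixed point
                changed = False
                for d in list(alive):
                    if not self.supported(d, alive - {d}):
                        alive.discard(d)
                        changed = True
            self.justifications = {d: js for d, js in self.justifications.items()
                                   if d in alive}
            return alive

Under this reading, a derived datum survives the forgetting of one of its premises only if it keeps another justification entirely among the surviving data.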
Jérôme Euzenat, Martin Strecker, Forgetting abilities for space-bounded agents, Internal report, Laboratoire ARTEMIS, Grenoble (FR), 11p., August 1991
We propose a model of "agent" with characteristics at the crossroads of several ongoing research tracks: self-rationality, autoepistemic reasoning, cooperative agents and resource-bounded reasoning. This model is distinctive in that available technologies enable its implementation, and thus its experimentation. Although the emphasis in distributed artificial intelligence is on cooperation, we concentrate on belief management. We stress here the resource-bounded reasoning aspect of the work, but first describe the architecture of our agents. We then describe the kind of behavior we expect from forgetting and show that it is achievable in both the theoretical and practical frameworks.
Resource-bounded reasoning, Belief revision, Autonomous agents
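As a loose illustration of the expected behaviour, and not the agent architecture of the report, a space-bounded agent could keep its beliefs in a bounded store and forget the least recently used ones when the space bound is exceeded; the names below are hypothetical, and any other usefulness measure could replace recency.

    # Illustrative sketch of a space-bounded belief store that forgets
    # least recently used beliefs (hypothetical, not the report's architecture).
    from collections import OrderedDict

    class BoundedBeliefStore:
        def __init__(self, capacity):
            self.capacity = capacity
            self.beliefs = OrderedDict()        # belief -> justification, in access order

        def believe(self, belief, justification=None):
            self.beliefs[belief] = justification
            self.beliefs.move_to_end(belief)    # most recently used
            while len(self.beliefs) > self.capacity:
                self.beliefs.popitem(last=False)  # forget the least recently used belief

        def holds(self, belief):
            if belief in self.beliefs:
                self.beliefs.move_to_end(belief)  # using a belief refreshes it
                return True
            return False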