Jérôme Euzenat bibliography sorted by types (2024-07-07)
Edited journal special issues/Numéros spéciaux de revues
Pavel Shvaiko, Jérôme Euzenat, Guest editorial preface of the special issue on Ontology matching, International journal of semantic web and information systems 3(2):i-iii, 2007
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita, Special issue on ontology and linked data matching, Semantic web journal 8(2):183-184, 2017
Pieter Pauwels, María Poveda Villalón, Alvaro Sicilia, Jérôme Euzenat, Semantic technologies and interoperability in the built environment, Semantic web journal 9(6):731-734, 2018
The built environment consists of a multitude of physical assets with which we interact on a daily basis. In order to improve not only our built environment, but also our interaction with that environment, we would benefit greatly from semantic representations of this environment. This includes not only buildings, but also large infrastructure (bridges, tunnels, waterways, underground systems) and geospatial data. This special issue gives an insight into the current state of the art in semantic technologies and interoperability in this built environment. This editorial not only summarizes the content of the Special Issue on Semantic Technologies and Interoperability in the Built Environment, it also provides a brief overview of the current state of the art in standardisation and community efforts.
Isabelle Bloch, Jérôme Euzenat, Jérôme Lang, François Schwarzentruber, Introduction, Revue ouverte d'intelligence artificielle 3(3-4):193-199, 2022
Journal articles/Articles de revues
Jérôme Euzenat, Représentation granulaire du temps, Revue d'intelligence artificielle 7(3):329-361, 1993
In order to represent time at several levels of detail, a granular temporal representation is proposed. Such a representation organises temporal entities into different hierarchically organised spaces called granularities. It leads to preserving the symbolic representation of time and to simplifying the numeric representation. On the other hand, it requires the definition of operators for converting representations between two granularities, so that the same temporal entity can be used under different granularities. The properties that these operators must satisfy in order to preserve the classical interpretations of these representations are set out, and symbolic and numeric conversion operators are proposed. On the symbolic side, the operators are compatible with the representation of temporal relations as point and interval algebras. As far as numeric conversion is concerned, certain constraints must be added in order to obtain the expected properties. Moreover, the characterisation of the operators leaves some latitude in their definition, which can be used to introduce knowledge tied to the domain under consideration. Possible uses of this latitude are discussed.
The publisher overlooked one page of the paper in the editing process (the equivalent of one full page starting three lines before the end of p334). This page was published in issue 7(4).
Temporal representation, Granularity, Locality, History
Jérôme Euzenat, La représentation de connaissance est-elle soluble dans le web ?, Document numérique 3(3-4):151-167, 1999
A twofold question arises concerning the relationship between knowledge representation, as understood in artificial intelligence (i.e. a formal representation endowed with a semantics), and the notion of document as currently understood in the World wide web:
- Is knowledge representation soluble in the web? That is, can it fit harmoniously into the web landscape, and how, but also what can it bring to the web?
- Will knowledge representation dissolve into the web? At a time when every documentary source is called a "knowledge base", and web document formats are increasingly structured, does knowledge representation have a future outside the web, or will it be overtaken by these more pragmatic approaches?
To that end, the knowledge representation activities integrated into the documentary side of the web (excluding robots, for instance) are described: knowledge-augmented web pages (e.g., SHOE), knowledge servers (e.g., Troeps), knowledge mills (e.g., AltaVista refine), knowledge editors (e.g., Ontolingua server). The relationships between knowledge representation systems and the XML language are discussed. While XML is not a knowledge representation language, the efforts needed (and already made) to bring it closer to one are outlined.
Knowledge representation, WWW, XML, ACL, knowledge servers, knowledge mills, knowledge editors
Catherine Sanchez, Corinne Lachaize, Florence Janody, Bernard Bellon, Laurence Röder, Jérôme Euzenat, François Rechenmann, Bernard Jacq, Grasping at molecular interactions and genetic networks in Drosophila melanogaster using FlyNets, an Internet database, Nucleic acids research 27(1):89-94, 1999
FlyNets (http://gifts.univ-mrs.fr/FlyNets/FlyNets_home_page.html) is a WWW database describing molecular interactions (protein-DNA, protein-RNA and protein-protein) in the fly Drosophila melanogaster. It is composed of two parts, as follows. (i) FlyNets-base is a specialized database which focuses on molecular interactions involved in Drosophila development. The information content of FlyNets-base is distributed among several specific lines arranged according to a GenBank-like format and grouped into five thematic zones to improve human readability. The FlyNets database achieves a high level of integration with other databases such as FlyBase, EMBL, GenBank and SWISS-PROT through numerous hyperlinks. (ii) FlyNets-list is a very simple and more general databank, the long-term goal of which is to report on any published molecular interaction occurring in the fly, giving direct web access to corresponding abstracts in Medline and in FlyBase. In the context of genome projects, databases describing molecular interactions and genetic networks will provide a link at the functional level between the genome, the proteome and the transcriptome worlds of different organisms. Interaction databases therefore aim at describing the contents, structure, function and behaviour of what we herein define as the interactome world.
Amedeo Napoli, Jérôme Euzenat, Roland Ducournau, Les représentations de connaissances par objets, Techniques et science informatique 19(1-3):387-394, 2000
The purpose of object-based knowledge representation systems is to represent knowledge around the central notion of object. This article describes the origin and evolution of these systems, as well as the place and future reserved for them.
Object-based knowledge representation, reasoning, classification system, description logic, knowledge management, object, inference, classification
Farid Cerbah, Jérôme Euzenat, Traceability between models and texts through terminology, Data and knowledge engineering 38(1):31-43, 2001
Modeling often concerns the translation of informal texts into representations. This translation process requires support for itself and for its traceability. We argue that inserting a terminology between informal textual documents and their formalization can help serve both of these goals. Modern terminology extraction tools support the formalization process by using terms as a first sketch of formalized concepts. Moreover, the terms can be employed for linking the concepts and the textual sources. They act as a powerful navigation structure. This is exemplified through the presentation of a fully implemented system.
Terminology extraction, Traceability, Model generation, Hypertext, Object-oriented modeling, Natural language
Jérôme Euzenat, Granularity in relational formalisms with application to time and space representation, Computational intelligence 17(4):703-737, 2001
Temporal and spatial phenomena can be seen at a more or less precise granularity, depending on the kind of perceivable details. As a consequence, the relationship between two objects may differ depending on the granularity considered. When merging representations of different granularity, this may raise problems. This paper presents general rules of granularity conversion in relation algebras. Granularity is considered independently of the specific relation algebra, by investigating operators for converting a representation from one granularity to another and presenting six constraints that they must satisfy. The constraints are shown to be independent and consistent and general results about the existence of such operators are provided. The constraints are used to generate the unique pairs of operators for converting qualitative temporal relationships (upward and downward) from one granularity to another. Then two fundamental constructors (product and weakening) are presented: they permit the generation of new qualitative systems (e.g. space algebra) from existing ones. They are shown to preserve most of the properties of granularity conversion operators.
Granularity, Space representation, Time representation, Relation algebra, Interval algebra, Product, Weakening
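Illustration (not from the paper): the conversion operators above can be pictured as mapping each qualitative relation to the set of relations that may hold between the same entities at a coarser granularity. The sketch below is a deliberately toy Python example; its conversion table is invented, not the operators derived in the article.

    # Hypothetical sketch of upward granularity conversion for qualitative
    # relations; the conversion table is illustrative, not the paper's.
    TOY_UPWARD = {
        "before":   {"before", "meets"},  # a small gap may vanish when coarsening
        "meets":    {"meets"},
        "overlaps": {"overlaps", "starts", "equals"},
        "equals":   {"equals"},
    }

    def convert_upward(relations: set[str]) -> set[str]:
        """Convert a disjunction of base relations to a coarser granularity
        by taking the union of the images of each base relation."""
        out: set[str] = set()
        for r in relations:
            out |= TOY_UPWARD[r]
        return out

    print(convert_upward({"before", "overlaps"}))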
Jérôme Euzenat, Eight questions about semantic web annotations, IEEE Intelligent systems 17(2):55-62, 2002
Improving information retrieval is annotation's central goal. However, without sufficient planning, annotation - especially when running a robot and attaching automatically extracted content - risks producing incoherent information. The author recommends answering eight questions before you annotate. He provides a practical application of this approach, and discusses applying the questions to other systems.
Semantic web, Search by content, Content representation, Ontology, Background knowledge
Jérôme Euzenat, Laurent Tardif, XML transformation flow processing, Markup languages: theory and practice 3(3):285-311, 2002
The XSLT language is both complex to use in simple cases (like tag renaming or element hiding) and restricted in complex ones (requiring the processing of multiple stylesheets with complex information flows). We propose a framework improving on XSLT. It provides simple-to-use and easy-to-analyze macros for the basic common transformation tasks. It provides a superstructure for composing multiple stylesheets, with multiple input and output documents, in ways that are not accessible within XSLT. Having the whole transformation description in an integrated format makes it possible to control and analyze the complete transformation.
XML, XSLT, Transmorpher, Transformations
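Illustration (hypothetical; Transmorpher's actual flow format is not reproduced here): with lxml, chaining two stylesheets, one renaming tags and one hiding elements, looks as follows; the paper's point is that such flows deserve a declarative, analyzable description.

    # Sketch of composing two XSLT stylesheets (a rename, then a hide),
    # the kind of transformation flow discussed above; uses lxml.
    from lxml import etree

    doc = etree.XML("<doc><a>text</a></doc>")

    IDENTITY = (
        '<xsl:template match="@*|node()">'
        '<xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>'
        '</xsl:template>')

    rename = etree.XSLT(etree.XML(
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        '<xsl:template match="a"><b><xsl:apply-templates/></b></xsl:template>'
        + IDENTITY + '</xsl:stylesheet>'))

    hide = etree.XSLT(etree.XML(
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        '<xsl:template match="b"/>' + IDENTITY + '</xsl:stylesheet>'))

    result = hide(rename(doc))     # two-step pipeline
    print(etree.tostring(result))  # b'<doc/>'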
Jérôme Euzenat, Amedeo Napoli, Jean-François Baget, XML et les objets (Objectif XML), RSTI - L'objet 9(3):11-37, 2003
The XML language and objects share the perspective of sharing and reusing their content through its greater structuring. We present the XML galaxy: the foundation of XML (XML, namespaces, DTD and internal representations), structuring closer to object models (XMI, XML-Schema and XQuery) and modelling tools related to knowledge representation (RDF, RDF-Schema, topic maps and OWL). Each language presented is related to the analogous efforts within the object world.
XML, Objects
Jean-François Baget, Étienne Canaud, Jérôme Euzenat, Mohand-Saïd Hacid, Les langages du web sémantique, Information-Interaction-Intelligence HS2004, 2004
The manipulation of web resources by machines requires the expression or description of these resources. Several languages have thus been defined for this purpose; they must allow data and metadata to be expressed (RDF, Topic Maps), services and their behaviour to be described (UDDI, WSDL, DAML-S, etc.), and an abstract model of what is described to be available through the expression of ontologies (RDFS, OWL). We present below the state of the work aimed at providing the semantic web with such languages. We also mention important questions that are not settled at present and deserve further work.
RDF, Topic Maps, RDFS, OWL, DAML+OIL, UDDI, WSDL, DAML-S, OWL-S, XL, XDD, Rules, Ontologies, Annotation, Semantics, Inference, Transformation, Robustness
Amedeo Napoli, Bernard Carré, Roland Ducournau, Jérôme Euzenat, François Rechenmann, Objet et représentation, un couple en devenir, RSTI - L'objet 10(4):61-81, 2004
This article offers a study and discussion of the place of objects in knowledge representation. It does not provide a complete and definitive answer to the question, but is rather intended as a constructive synthesis of the work on object-based representations carried out so far. This article is also written especially for Jean-François Perrot, attempting to discuss with spirit and brio the current question of object-based representations, the research and results established, the possible research directions, and what could or should be expected.
Object-based knowledge representation, Description logics, Classification-based reasoning, Semantic web
Pavel Shvaiko, Jérôme Euzenat, A survey of schema-based matching approaches, Journal on data semantics 4:146-171, 2005
Schema and ontology matching is a critical problem in many application domains, such as the semantic web, schema/ontology integration, data warehouses, e-commerce, etc. Many different matching solutions have been proposed so far. In this paper we present a new classification of schema-based matching techniques that builds on the state of the art in both schema and ontology matching. Its main innovations are new criteria based on (i) general properties of matching techniques, (ii) interpretation of input information, and (iii) the kind of input information. In particular, we distinguish between approximate and exact techniques at schema-level, and syntactic, semantic, and external techniques at element- and structure-level. Based on the proposed classification, we overview some of the recent schema/ontology matching systems, pointing out which part of the solution space they cover. The proposed classification provides a common conceptual basis and, hence, can be used for comparing different existing schema/ontology matching techniques and systems, as well as for designing new ones taking advantage of state-of-the-art solutions.
Jérôme Euzenat, Jérôme Pierson, Fano Ramparany, Dynamic context management for pervasive applications, Knowledge engineering review 23(1):21-49, 2008
Pervasive computing aims at providing services for human beings that interact with their environment, encompassing the objects and humans who reside in it. Applications must be able to take into account the context in which users evolve, e.g., physical location, social or hierarchical position, current tasks as well as related information. These applications have to deal with the dynamic integration in the environment of new, and sometimes unexpected, elements (users or devices). In turn, the environment has to provide context information to newly designed applications. We describe an architecture in which context information is distributed in the environment and context managers use semantic web technologies in order to identify and characterize available resources. The components in the environment maintain their own context expressed in RDF and described through OWL ontologies. They may communicate this information to other components, obeying a simple protocol for identifying them and determining the information they are capable of providing. We show how this architecture allows the introduction of new components and new applications without interrupting what is working. In particular, the openness of ontology description languages makes the extension of context descriptions possible, and ontology matching helps deal with independently developed ontologies.
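Illustration (hypothetical vocabulary, not the ontologies used in the paper): a component's context expressed in RDF with rdflib might look like this.

    # Minimal sketch of a device publishing its context as RDF;
    # the vocabulary URIs are invented for illustration.
    from rdflib import Graph, Literal, Namespace, RDF

    CTX = Namespace("http://example.org/context#")
    g = Graph()
    g.add((CTX.lamp42, RDF.type, CTX.Device))
    g.add((CTX.lamp42, CTX.locatedIn, CTX.livingRoom))
    g.add((CTX.lamp42, CTX.status, Literal("on")))

    print(g.serialize(format="turtle"))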
Jérôme Euzenat, François Scharffe, Axel Polleres, SPARQL Extensions for processing alignments, IEEE Intelligent systems 23(6):82-84, 2008
Faisal Alkhateeb, Jean-François Baget, Jérôme Euzenat, Extending SPARQL with regular expression patterns (for querying RDF), Journal of web semantics 7(2):57-73, 2009
RDF is a knowledge representation language dedicated to the annotation of resources within the framework of the semantic web. Among the query languages for RDF, SPARQL allows querying RDF through graph patterns, i.e., RDF graphs involving variables. Other languages, inspired by the work in databases, use regular expressions for searching paths in RDF graphs. Each approach can express queries that are out of reach of the other one. Hence, we aim at combining these two approaches. For that purpose, we define a language, called PRDF (for "Path RDF") which extends RDF such that the arcs of a graph can be labeled by regular expression patterns. We provide PRDF with a semantics extending that of RDF, and propose a correct and complete algorithm which, by computing a particular graph homomorphism, decides the consequence between an RDF graph and a PRDF graph. We then define the PSPARQL query language, extending SPARQL with PRDF graph patterns and complying with RDF model theoretic semantics. PRDF thus offers both graph patterns and path expressions. We show that this extension does not increase the computational complexity of SPARQL and, based on the proposed algorithm, we have implemented a correct and complete PSPARQL query engine.
semantic web, query language, RDF, SPARQL, regular expressions
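Illustration: PSPARQL predates SPARQL 1.1, which later standardised a similar facility as property paths. The sketch below (made-up data, rdflib) shows a path query in that spirit, not PSPARQL's own syntax.

    # A transitive path query in the spirit of PRDF/PSPARQL, written with
    # SPARQL 1.1 property paths in rdflib; the data is made up.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:a ex:knows ex:b .
    ex:b ex:knows ex:c .
    """, format="turtle")

    q = """
    PREFIX ex: <http://example.org/>
    SELECT ?x ?y WHERE { ?x ex:knows+ ?y }
    """
    for row in g.query(q):
        print(row.x, row.y)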
Jérôme David, Jérôme Euzenat, François Scharffe, Cássia Trojahn dos Santos, The Alignment API 4.0, Semantic web journal 2(1):3-10, 2011
Alignments represent correspondences between entities of two ontologies. They are produced from the ontologies by ontology matchers. In order for matchers to exchange alignments and for applications to manipulate matchers and alignments, a minimal agreement is necessary. The Alignment API provides abstractions for the notions of network of ontologies, alignments and correspondences as well as building blocks for manipulating them such as matchers, evaluators, renderers and parsers. We recall the building blocks of this API and present here version 4 of the Alignment API through some of its new features: ontology proxies, the expressive alignment language EDOAL and evaluation primitives.
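Illustration: the Alignment API itself is a Java library; purely as a hypothetical mirror of the abstractions named above (alignments made of correspondences carrying a relation and a confidence), a Python rendering could be:

    # Hypothetical Python mirror of the Alignment API abstractions;
    # the real API is Java and richer (networks, renderers, parsers...).
    from dataclasses import dataclass, field

    @dataclass
    class Correspondence:
        entity1: str       # URI of an entity of the first ontology
        entity2: str       # URI of an entity of the second ontology
        relation: str      # e.g. "=", "<", ">"
        confidence: float  # in [0, 1]

    @dataclass
    class Alignment:
        onto1: str
        onto2: str
        correspondences: list[Correspondence] = field(default_factory=list)

    a = Alignment("http://example.org/onto1", "http://example.org/onto2")
    a.correspondences.append(Correspondence(
        "http://example.org/onto1#Person",
        "http://example.org/onto2#Human", "=", 0.9))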
Jérôme Euzenat, Christian Meilicke, Pavel Shvaiko, Heiner Stuckenschmidt, Cássia Trojahn dos Santos, Ontology Alignment Evaluation Initiative: six years of experience, Journal on data semantics XV(6720):158-192, 2011
In the area of semantic technologies, benchmarking and systematic evaluation is not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view on the results of the campaigns carried out from 2005 to 2010 and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that there is a need for a further automation of benchmarking to shorten the feedback cycle for tool developers.
Evaluation, Experimentation, Benchmarking, Ontology matching, Ontology alignment, Schema matching, Semantic technologies
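For reference, the core evaluation criteria used in such campaigns reduce, for an alignment evaluated against a reference, to set-based precision, recall and F-measure; a minimal sketch:

    # Precision/recall/F-measure over alignments, with correspondences
    # represented as hashable tuples (entity1, entity2, relation).
    def evaluate(found: set, reference: set) -> tuple[float, float, float]:
        correct = found & reference
        p = len(correct) / len(found) if found else 0.0
        r = len(correct) / len(reference) if reference else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    found = {("o1#A", "o2#A", "="), ("o1#B", "o2#C", "=")}
    reference = {("o1#A", "o2#A", "="), ("o1#B", "o2#B", "=")}
    print(evaluate(found, reference))  # (0.5, 0.5, 0.5)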
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Semantic adaptation of multimedia documents, Multimedia tools and applications 55(3):379-398, 2011
Multimedia documents have to be played on multiple device types. Hence, usage and platform diversity requires document adaptation according to execution contexts that are generally not predictable at design time. In an earlier work, a semantic framework for multimedia document adaptation was proposed. In this framework, a multimedia document is interpreted as a set of potential executions corresponding to the author specification. To each target device corresponds a set of possible executions complying with the device constraints. In this context, adaptation requires selecting an execution that satisfies the target device constraints and is as close as possible to the initial composition. This theoretical adaptation framework does not specifically consider the main multimedia document dimensions, i.e., temporal, spatial and hypermedia. In this paper, we propose a concrete application of this framework to standard multimedia documents. For that purpose, we first define an abstract structure that captures the spatio-temporal and hypermedia dimensions of multimedia documents, and we develop an adaptation algorithm which transforms such a structure in a minimal way according to device constraints. Then, we show how this can be used for adapting concrete multimedia documents in SMIL by converting the documents into the abstract structure, using the adaptation algorithm, and converting the result back into SMIL. This can be used for other document formats without modifying the adaptation algorithm.
Multimedia document transformation, qualitative representation and reasoning, SMIL
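Illustration (toy fragment, not the paper's relation sets): "as close as possible" can be read as a shortest path in a conceptual neighbourhood graph over qualitative relations.

    # Toy sketch: pick, among the relations admissible on a device, the one
    # closest to the authored relation; the neighbour graph is a made-up fragment.
    from collections import deque

    NEIGHBOURS = {
        "before":   {"meets"},
        "meets":    {"before", "overlaps"},
        "overlaps": {"meets", "starts"},
        "starts":   {"overlaps"},
    }

    def distance(r1: str, r2: str) -> int:
        """Shortest path between two relations in the neighbourhood graph."""
        seen, queue = {r1}, deque([(r1, 0)])
        while queue:
            r, d = queue.popleft()
            if r == r2:
                return d
            for n in NEIGHBOURS[r] - seen:
                seen.add(n)
                queue.append((n, d + 1))
        return len(NEIGHBOURS)  # relations not connected in this fragment

    def adapt(authored: str, admissible: set[str]) -> str:
        return min(admissible, key=lambda r: distance(authored, r))

    print(adapt("before", {"overlaps", "starts"}))  # -> overlaps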
Jérôme Euzenat, Maria Roşoiu, Cássia Trojahn dos Santos, Ontology matching benchmarks: generation, stability, and discriminability, Journal of web semantics 21:30-48, 2013
The OAEI Benchmark test set has been used for many years as a main reference to evaluate and compare ontology matching systems. However, this test set has barely varied since 2004 and has become a relatively easy task for matchers. In this paper, we present the design of a flexible test generator based on an extensible set of alterators which may be used programmatically for generating different test sets from different seed ontologies and different alteration modalities. It has been used for reproducing Benchmark both with the original seed ontology and with other ontologies. This highlights the remarkable stability of results over different generations and the preservation of difficulty across seed ontologies, as well as a systematic bias towards the initial Benchmark test set and the inability of such tests to identify an overall winning matcher. These were exactly the properties for which Benchmark had been designed. Furthermore, the generator has been used for providing new test sets aiming at increasing the difficulty and discriminability of Benchmark. Although difficulty may be easily increased with the generator, attempts to increase discriminability proved unfruitful. However, efforts towards this goal raise questions about the very nature of discriminability.
Ontology matching, Matching evaluation, Test generation, Semantic web
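Illustration (alterator names and the ontology encoding are invented): the generator described above composes alterators over a seed ontology, each producing a derived test.

    # Minimal sketch of composing alterators over a seed ontology;
    # names and the dict-based encoding are invented for illustration.
    import random

    def remove_labels(onto: dict, ratio: float) -> dict:
        out = dict(onto)
        out["labels"] = {e: l for e, l in onto["labels"].items()
                         if random.random() > ratio}
        return out

    def flatten_hierarchy(onto: dict) -> dict:
        out = dict(onto)
        out["subclass"] = []  # drop all subclass links
        return out

    def generate(seed: dict, alterators) -> dict:
        test = seed
        for alter in alterators:
            test = alter(test)
        return test

    seed = {"labels": {"C1": "Person", "C2": "Paper"},
            "subclass": [("C2", "C1")]}
    print(generate(seed, [lambda o: remove_labels(o, 0.5), flatten_hierarchy]))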
Pavel Shvaiko, Jérôme Euzenat, Ontology matching: state of the art and future challenges, IEEE Transactions on knowledge and data engineering 25(1):158-176, 2013
After years of research on ontology matching, it is reasonable to consider several questions: is the field of ontology matching still making progress? Is this progress significant enough to pursue further research? If so, what are the particularly promising directions? To answer these questions, we review the state of the art of ontology matching and analyze the results of recent ontology matching evaluations. These results show a measurable improvement in the field, albeit one whose pace is slowing down. We conjecture that significant improvements can be obtained only by addressing important challenges for ontology matching. We present such challenges with insights on how to approach them, thereby aiming to direct research into the most promising tracks and to facilitate the progress of the field.
Semantic heterogeneity, Semantic technologies, Ontology matching, Ontology alignment, Schema matching
Faisal Alkhateeb, Jérôme Euzenat, Constrained regular expressions for answering RDF-path queries modulo RDFS, International Journal of Web Information Systems 10(1):24-50, 2014
The standard SPARQL query language is currently defined for querying RDF graphs without RDFS semantics. Several extensions of SPARQL to RDFS semantics have been proposed. In this paper, we discuss extensions of SPARQL that use regular expressions to navigate RDF graphs and may be used to answer queries considering RDFS semantics. In particular, we present and compare nSPARQL and our proposal CPSPARQL. We show that CPSPARQL is expressive enough to answer full SPARQL queries modulo RDFS. Finally, we compare the expressiveness and complexity of both nSPARQL and the corresponding fragment of CPSPARQL, that we call cpSPARQL. We show that both languages have the same complexity, though cpSPARQL, being a proper extension of SPARQL graph patterns, is more expressive than nSPARQL.
semantic web, query language, RDF, RDFS, SPARQL, nSPARQL, CPSPARQL, cpSPARQL, regular expression, constrained regular expression
Angela Locoro, Jérôme David, Jérôme Euzenat, Context-based matching: design of a flexible framework and experiment, Journal on data semantics 3(1):25-46, 2014
Context-based matching finds correspondences between entities from two ontologies by relating them to other resources. A general view of context-based matching is designed by analysing existing matchers of this kind. This view is instantiated in a path-driven approach that (a) anchors the ontologies to external ontologies, (b) finds sequences of entities (paths) that relate entities to match within and across these resources, and (c) uses algebras of relations for combining the relations obtained along these paths. Parameters governing such a system are identified and made explicit. They are used to conduct experiments with different parameter configurations in order to assess their influence. In particular, experiments confirm that restricting the set of ontologies reduces the time taken at the expense of recall and F-measure. Increasing path length within ontologies increases recall and F-measure as well. In addition, algebras of relations allow for a finer analysis, which shows that increasing path length provides more correct or imprecise correspondences, but only marginally increases incorrect ones.
Context-based ontology matching, Knowledge representation and interoperability, Algebras of relations, Semantic web
Jérôme Euzenat, Revision in networks of ontologies, Artificial intelligence 228:195-216, 2015
Networks of ontologies are made of a collection of logic theories, called ontologies, related by alignments. They arise naturally in distributed contexts in which theories are developed and maintained independently, such as the semantic web. In networks of ontologies, inconsistency can come from two different sources: local inconsistency in a particular ontology or alignment, and global inconsistency between them. Belief revision is well-defined for dealing with ontologies; we investigate how it can apply to networks of ontologies. We formulate revision postulates for alignments and networks of ontologies based on an abstraction of existing semantics of networks of ontologies. We show that revision operators cannot be simply based on local revision operators on both ontologies and alignments. We adapt the partial meet revision framework to networks of ontologies and show that it indeed satisfies the revision postulates. Finally, we consider strategies based on network characteristics for designing concrete revision operators.
p201. Clause 4. of the definition of a closure is incorrect. The relation was supposed to be set in the reverse direction (the standard definition is an equivalence). This mistake does not affect results.
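For reference, the partial meet construction adapted in the paper is the standard AGM one; schematically, for a theory K, a formula φ, remainder sets K⊥φ and a selection function γ:

    % Standard AGM partial meet contraction and Levi-identity revision
    % (schematic; the paper adapts this construction to networks of ontologies).
    K \dot{-} \varphi = \bigcap \gamma(K \perp \varphi)
    \qquad
    K * \varphi = (K \dot{-} \neg\varphi) + \varphi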
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, SPARQL Query containment under schema, Journal on data semantics 7(3):133-154, 2018
Query containment is defined as the problem of determining if the result of a query is included in the result of another query for any dataset. It has major applications in query optimization and knowledge base verification. The main objective of this work is to provide sound and complete procedures to determine containment of SPARQL queries under expressive description logic schema axioms. Beyond that, these procedures are experimentally evaluated. To date, testing query containment has been performed using different techniques: containment mapping, canonical databases, automata theory techniques and through a reduction to the validity problem in logic. In this work, we use the latter technique to test containment of SPARQL queries using an expressive modal logic called mu-calculus. For that purpose, we define an RDF graph encoding as a transition system which preserves its characteristics. In addition, queries and schema axioms are encoded as mu-calculus formulae. Thereby, query containment can be reduced to testing validity in the logic. We identify various fragments of SPARQL and description logic schema languages for which containment is decidable. Additionally, we provide theoretically and experimentally proven procedures to check containment of these decidable fragments. Finally, we propose a benchmark for containment solvers which is used to test and compare the current state-of-the-art containment solvers.
SPARQL, Query containment, mu-Calculus
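Schematically, the reduction described above turns containment into validity: each query (and the schema) is encoded as a mu-calculus formula over the transition-system encoding of RDF graphs, so that (notation simplified, not the paper's exact encoding):

    % Containment under schema S as mu-calculus validity (schematic)
    q_1 \sqsubseteq_{\mathcal{S}} q_2
    \iff
    \models (\Phi_{\mathcal{S}} \wedge \Phi_{q_1}) \Rightarrow \Phi_{q_2}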
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Link key candidate extraction with relational concept analysis, Discrete applied mathematics 273:2-20, 2020
Linked data aims at publishing data expressed in RDF (Resource Description Framework) at the scale of the worldwide web. These datasets interoperate by publishing links which identify individuals across heterogeneous datasets. Such links may be found by using a generalisation of keys in databases, called link keys, which apply across datasets. They specify the pairs of properties to compare for linking individuals belonging to different classes of the datasets. Here, we show how to recast the proposed link key extraction techniques for RDF datasets in the framework of formal concept analysis. We define a formal context, where objects are pairs of resources and attributes are pairs of properties, and show that formal concepts correspond to link key candidates. We extend this characterisation to the full RDF model including non functional properties and interdependent link keys. We show how to use relational concept analysis for dealing with cyclic dependencies across classes and hence link keys. Finally, we discuss an implementation of this framework.
Formal Concept Analysis, Relational Concept Analysis, Linked data, Link key, Data interlinking, Resource Description Framework
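Illustration (toy data; the actual method covers the full RDF model and relational concept analysis): the formal context has pairs of resources as objects and pairs of properties as attributes, an attribute holding when the two resources share a value on those properties.

    # Toy link key candidate context: incidence holds for ((o1, o2), (p, q))
    # iff o1.p and o2.q share a value; data and property lists are made up.
    d1 = {"x1": {"name": {"Anna"}, "born": {"1990"}},
          "x2": {"name": {"Ben"},  "born": {"1985"}}}
    d2 = {"y1": {"label": {"Anna"}, "year": {"1990"}},
          "y2": {"label": {"Ben"},  "year": {"1991"}}}
    props1, props2 = ["name", "born"], ["label", "year"]

    context = {}
    for o1, v1 in d1.items():
        for o2, v2 in d2.items():
            context[(o1, o2)] = {(p, q) for p in props1 for q in props2
                                 if v1[p] & v2[q]}

    # A formal concept's intent is a link key candidate and its extent the
    # links it generates; here we only print the per-pair intents.
    for pair, intent in sorted(context.items()):
        if intent:
            print(pair, "->", sorted(intent))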
Jomar da Silva, Kate Revoredo, Fernanda Araujo Baião, Jérôme Euzenat, Alin: improving interactive ontology matching by interactively revising mapping suggestions, Knowledge engineering review 35:e1, 2020
Ontology matching aims at discovering mappings between the entities of two ontologies. It plays an important role in the integration of heterogeneous data sources that are described by ontologies. Interactive ontology matching involves domain experts in the matching process. In some approaches, the expert provides feedback about mappings between ontology entities, i.e., these approaches select mappings to present to the expert, who replies which of them should be accepted or rejected, thus taking advantage of the knowledge of domain experts towards finding an alignment. In this paper, we present Alin, an interactive ontology matching approach which uses expert feedback not only to approve or reject selected mappings, but also to dynamically improve the set of selected mappings, i.e., to interactively include and exclude mappings from it. This additional use of expert answers aims at increasing the benefit brought by each answer. For this purpose, Alin uses four techniques. Two of them were used in previous versions of Alin to dynamically select concept and attribute mappings. Two new techniques are introduced in this paper: one to dynamically select relationship mappings and another to dynamically reject inconsistent selected mappings using anti-patterns. We compared Alin with state-of-the-art tools, showing that it generates alignments of comparable quality.
Ontology matching, WordNet, Interactive ontology matching, Ontology alignment, Interactive ontology alignment
Jérôme Euzenat, A map without a legend: the semantic web and knowledge evolution, Semantic web journal 11(1):63-68, 2020
The current state of the semantic web is focused on data. This is worthwhile progress in web content processing and interoperability. However, it contributes only marginally to knowledge improvement and evolution. Understanding the world, and interpreting data, requires knowledge. Not knowledge cast in stone for ever, but knowledge that can seamlessly evolve; not knowledge from one single authority, but diverse knowledge sources which stimulate confrontation and robustness; not consistent knowledge at web scale, but local theories that can be combined. We discuss two different ways in which semantic web technologies can greatly contribute to the advancement of knowledge: semantic eScience and cultural knowledge evolution.
Semantic web, Linked data, Big data, Open data, Knowledge representation, Knowledge, Ontology, Machine learning, Reproducible research, eScience, Cultural evolution
Armen Inants, Jérôme Euzenat, So, what exactly is a qualitative calculus?, Artificial intelligence 289:103385, 2020
The paradigm of algebraic constraint-based reasoning, embodied in the notion of a qualitative calculus, is studied within two alternative frameworks. One framework defines a qualitative calculus as "a non-associative relation algebra (NA) with a qualitative representation", the other as "an algebra generated by jointly exhaustive and pairwise disjoint (JEPD) relations". These frameworks provide complementary perspectives: the first is intensional (axiom-based), whereas the second one is extensional (based on semantic structures). However, each definition admits calculi that lie beyond the scope of the other. Thus, a qualitatively representable NA may be incomplete or non-atomic, whereas an algebra generated by JEPD relations may have non-involutive converse and no identity element. The divergence of definitions creates a confusion around the notion of a qualitative calculus and makes the "what" question posed by Ligozat and Renz topical once again. Here we define the relation-type qualitative calculus unifying the intensional and extensional approaches. By introducing the notions of weak identity, inference completeness and Q-homomorphism, we give equivalent definitions of qualitative calculi both intensionally and extensionally. We show that "algebras generated by JEPD relations" and "qualitatively representable NAs" are embedded into the class of relation-type qualitative algebras.
Algebraic constraint-based reasoning, Qualitative reasoning, Qualitative calculus, Relation algebra
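For reference, the JEPD condition mentioned above states that the base relations partition the square of the universe (a standard fact, stated schematically):

    \bigcup_{i=1}^{n} r_i = U \times U
    \qquad\text{and}\qquad
    r_i \cap r_j = \emptyset \ \text{ for } i \neq j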
Manuel Atencia, Jérôme David, Jérôme Euzenat, On the relation between keys and link keys for data interlinking, Semantic web journal 12(4):547-567, 2021
Both keys and their generalisation, link keys, may be used to perform data interlinking, i.e. finding identical resources in different RDF datasets. However, the precise relationship between keys and link keys has not been fully determined yet. A common formal framework encompassing both keys and link keys is necessary to ensure the correctness of data interlinking tools based on them, and to determine their scope and possible overlapping. In this paper, we provide a semantics for keys and link keys within description logics. We determine under which conditions they can legitimately be used to generate links. We provide conditions under which link keys are logically equivalent to keys. In particular, we show that data interlinking with keys and ontology alignments can be reduced to data interlinking with link keys, but not the other way around.
Ontology alignment, Key, Link key, Data interlinking
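Schematically (simplified from the description-logic semantics given in the paper, under the reading where sharing a value on each property pair forces identity):

    % A link key over classes C and D (schematic reading)
    \{\langle p_1, q_1\rangle, \dots, \langle p_k, q_k\rangle\}
    \ \mathrm{linkkey}\ \langle C, D\rangle :
    \quad
    x \in C,\ y \in D,\
    \bigwedge_{i=1}^{k} p_i(x) \cap q_i(y) \neq \emptyset
    \ \Rightarrow\ x = y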
Line van den Berg, Manuel Atencia, Jérôme Euzenat, A logical model for the ontology alignment repair game, Autonomous agents and multi-agent systems 35(2):32, 2021
Ontology alignments enable agents to communicate while preserving heterogeneity in their knowledge. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. The Alignment Repair Game (ARG) has been proposed for agents to simultaneously communicate and repair their alignments through adaptation operators when communication failures occur. ARG has been evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant could not be established by experiments. We introduce a logical model, Dynamic Epistemic Ontology Logic (DEOL), that enables us to answer these questions. This framework allows us (1) to express the ontologies and alignments used via a faithful translation from ARG to DEOL, (2) to model the ARG adaptation operators as dynamic modalities and (3) to formally define and establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
Ontology alignment, Alignment repair, Multi-agent systems, Agent communication, Dynamic Epistemic Logic
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Raising awareness without disclosing truth, Annals of mathematics and artificial intelligence 91(4):431-464, 2023
Agents use their own vocabularies to reason and talk about the world. Public signature awareness is satisfied if agents are aware of the vocabularies, or signatures, used by all agents they may, eventually, interact with. Multi-agent modal logics and in particular Dynamic Epistemic Logic rely on public signature awareness for modeling information flow in multi-agent systems. However, this assumption is not desirable for dynamic and open multi-agent systems because (1) it prevents agents from using unique signatures other agents are unaware of, (2) it prevents agents from openly extending their signatures when encountering new information, and (3) it requires that all future knowledge and beliefs of agents be bounded by the current state. We propose a new semantics for awareness that enables us to drop public signature awareness. This semantics is based on partial valuation functions and weakly reflexive relations. Dynamics for raising public and private awareness are then defined in such a way as to differentiate between becoming aware of a proposition and learning its truth value. With this, we show that knowledge and beliefs are not affected through the raising operations.
Awareness, Raising awareness, Dynamic epistemic logic, Partial valuations, Multi-agent systems
Line van den Berg, Jérôme Euzenat, Class? en classe: jouer avec des classifications pour combiner mathématiques et informatique, Recherches et recherches-actions en didactique de l'informatique 1(1), 2024
Class? is a game in which players must organise their cards according to a hidden classification. The cards laid down by the other players allow them to guess where to put their own. It was designed so that schoolchildren grasp that the same objects can be classified in different ways and that it is possible to transmit a classification without making it explicit. The game draws on notions that are easily presented with playing cards, such as sets defined by necessary and sufficient conditions (classes). This makes it possible to introduce hierarchical classifications and algorithmic notions (condition testing, recursion) for manipulating them. Finally, it requires reasoning logically about these notions. Class? has been played successfully by pupils from CM2 to seconde (roughly 5th to 10th grade). We therefore asked ourselves how to position it as a pedagogical resource. It appears, first of all, that it does not seem to illustrate concepts particularly put forward by the official curricula of the French national education system. Rather, it offers an alternative way of reinforcing transversal notions that are very important in computer science. We characterise Class? with respect to computer-science-unplugged efforts and other games usable for this purpose. Finally, we discuss a decomposition of Class? into a succession of simpler games allowing the notions involved to be introduced one after the other.
Hierarchical classification, Game-based learning, Computer science unplugged, Partial orders, Necessary and sufficient conditions
Books/Monographies
Jérôme Euzenat, Pavel Shvaiko, Ontology matching, Springer-Verlag, Heidelberg (DE), 333p., 2007
Jérôme Euzenat, Pavel Shvaiko, Ontology matching, Springer-Verlag, Heidelberg (DE), 520p., 2013
Book chapters and collected papers/Chapitres de livres
Jérôme Euzenat, Building consensual knowledge bases: context and architecture, in: Nicolaas Mars (ed), Towards very large knowledge bases, IOS press, Amsterdam (NL), 1995, pp143-155
A protocol and architecture are presented in order to achieve consensual knowledge bases (i.e. bases in which knowledge is expressed in a formal language and which are considered as containing the state of the art in some research area). It assumes that the construction of the base must and can be achieved collectively. The architecture is based on individual workstations which provide support for developing a knowledge base: formal expression of knowledge through objects, tasks and qualitative equations annotated with hypertext nodes and links. It also provides tools for detecting similarities and inconsistencies between pieces of knowledge. These bases can be grouped together in order to constitute a new reference knowledge base. The process for constructing this last base mimics the submission of articles to peer-reviewed journals. This is achieved through a protocol for submitting knowledge to the group base, confronting it with the content of that base, amending it accordingly, having it reviewed by the other knowledge bases and finally incorporating it. The system is to be used by researchers in the field of genome sequencing.
CSCW, knowledge sharing, knowledge revision, negotiation, protocol, knowledge communication
Petko Valtchev, Jérôme Euzenat, Classification of concepts through products of concepts and abstract data types, in: Edwin Diday, Yves Lechevalier, Otto Opitz (eds), Ordinal and symbolic data analysis, Studies in classification, data analysis, and knowledge organisation series, Springer Verlag, Heidelberg (DE), 1996, pp3-12
The classification scheme formalism, which represents both usual data types and structured objects in a uniform manner, is introduced. It is here provided with a dissimilarity measure which only takes into account the structure of a given domain: a partial order over a set of classes. The measure we define compares a pair of individuals according to their mutual position within the taxonomy structuring the underlying domain. It is then used to design a classification algorithm working on structured objects.
Jérôme Euzenat, Représentation de connaissance par objets, in: Roland Ducournau, Jérôme Euzenat, Gérald Masini, Amedeo Napoli (éds), Langages et modèles à objets: état des recherches et perspectives, INRIA, Rocquencourt (FR), 1998, pp293-319
Knowledge representation systems are used to symbolically model a particular domain. Some of them use the notion of object as their main structure. We outline here the main features of such systems, mentioning the landmark ones. The presentation then focuses on a particular system, TROEPS, first addressing the problems that the design of this system seeks to solve. TROEPS is presented by considering the constructs and inference mechanisms it implements.
Knowledge representation, classification, filtering, type, WWW, hypertext, description logics, semantic networks, frames, identity, naming, inference, evolution, specialisation, viewpoints, bridges
Jérôme Euzenat, Construction collaborative de bases de connaissance et de documents pour la capitalisation, in: Manuel Zacklad, Michel Grundstein (éds), Ingénierie et capitalisation des connaissances, Hermès Science publisher, Paris (FR), 2001, pp25-48
The purpose of a "technical memory" is to hold the technical knowledge used by a company's engineers. Such technical memories are part of the knowledge management problematic in that they increase the capacity to capitalise on and manage knowledge and experience within companies. Such a memory must be alive if it is to be used or enriched. It must therefore be coherent and intelligible. The approach to technical memory presented here is informed by our experience of building knowledge bases. To this end, three principles are put forward: the technical memory must be formalised as far as possible, it must be linked to informal knowledge sources, and it must express the consensus of a community. We briefly present how the CO4 prototype meets these requirements by allowing the editing of formalised knowledge on the world-wide web, the referencing of modelled entities to informal sources, and the implementation of a collaboration protocol intended to encourage consensus among the actors.
Technical memory, corporate memory, knowledge server, consensus, knowledge editor, world-wide web, TROEPS, CO4, collaboration protocol
Dominique Deneux, Christophe Lerch, Jérôme Euzenat, Jean-Paul Barthès, Pluralité des connaissances dans les systèmes industriels, in: René Soënen, Jacques Perrin (éds), Coopération et connaissance dans les systèmes industriels : une approche interdisciplinaire, Hermès Science publisher, Paris (FR), 2002, pp115-129
Jérôme Euzenat, An infrastructure for formally ensuring interoperability in a heterogeneous semantic web, in: Isabel Cruz, Stefan Decker, Jérôme Euzenat, Deborah McGuinness (eds), The emerging semantic web, IOS press, Amsterdam (NL), 302p., 2002, pp245-260
Because different applications and different communities require different features, the semantic web might have to face the heterogeneity of languages for expressing knowledge. Yet, it will be necessary for many applications to use knowledge coming from different sources. In such a context, ensuring the correct understanding of imported knowledge on a semantic ground is very important. We present here an infrastructure based on the notions of transformations from one language to another and of properties satisfied by transformations. We show, in the particular context of semantic properties and description logics markup language, how it is possible (1) to define transformation properties, (2) to express, in a form easily processed by machine, the proof of a property and (3) to construct by composition a proof of properties satisfied by compound transformations. All these functions are based on extensions of current web standard languages.
Jérôme Euzenat, Heiner Stuckenschmidt, The `family of languages' approach to semantic interoperability, in: Borys Omelayenko, Michel Klein (eds), Knowledge transformation for the semantic web, IOS press, Amsterdam (NL), 2003, pp49-63
Different knowledge representation languages can be used for different semantic web applications. Exchanging knowledge thus requires specific techniques established on a semantic ground. We present the `family of languages' approach based on a set of knowledge representation languages whose partial ordering depends on the transformability from one language to another by preserving a particular property. For the same set of languages, there can be several such structures based on the property selected for structuring the family. Properties of different strength allow one to perform practicable but well-founded transformations. The approach offers the choice of the language in which a representation will be imported and the composition of available transformations between the members of the family.
Semantic interoperability, ontology sharing, knowledge transformation, ontology patterns
Jérôme Euzenat, Raphaël Troncy, Web sémantique et pratiques documentaires, in: Jean-Claude Le Moal, Bernard Hidoine, Lisette Calderan (éds), Publier sur internet, ABDS, Paris (FR), 2004, pp157-188
The semantic web has the ambition of building, for machines, the infrastructure corresponding to the current web, and of offering humans the power of machines to manage the information available on this web. Semantic web technologies thus have much to offer to support future documentary practices. We present the technologies intended to describe web resources and their ontologies, with a view to their use for document management purposes. We present some already existing resources that can be used for this purpose, as well as an application to the indexing of multimedia and audiovisual data.
Semantic web, OWL, RDF, Ontology, Publishing, Indexing, MPEG-7
Jérôme Euzenat, Angelo Montanari, Time granularity, in: Michael Fisher, Dov Gabbay, Lluis Vila (eds), Handbook of temporal reasoning in artificial intelligence, Elsevier, Amsterdam (NL), 2005, pp59-118
A temporal situation can be described at different levels of abstraction depending on the accuracy required or the available knowledge. Time granularity can be defined as the resolution power of the temporal qualification of a statement. Providing a formalism with the concept of time granularity makes it possible to model time information with respect to differently grained temporal domains. This does not merely mean that one can use different time units - e.g., months and days - to represent time quantities in a unique flat temporal model, but it involves more difficult semantic issues related to the problem of assigning a proper meaning to the association of statements with the different temporal domains of a layered temporal model and of switching from one domain to a coarser/finer one. Such an ability of providing and relating temporal representations at different "grain levels" of the same reality is both an interesting research theme and a major requirement for many applications (e.g. agent communication or integration of layered specifications). After a presentation of the general properties required by a multi-granular temporal formalism, we discuss the various issues and approaches to time granularity proposed in the literature. We focus on the main existing formalisms for representing and reasoning about quantitative and qualitative time granularity: the general set-theoretic framework for time granularity developed by Bettini et al and Montanari's metric and layered temporal logic for quantitative time granularity, and Euzenat's relational algebra granularity conversion operators for qualitative time granularity. The relationships between these systems and others are then explored. At the end, we briefly describe some applications exploiting time granularity, and we discuss related work on time granularity in the areas of formal specifications of real-time systems, temporal databases, and data mining.
Jérôme Euzenat, L'annotation formelle de documents en (8) questions, in: Régine Teulier, Jean Charlet, Pierre Tchounikine (éds), Ingénierie des connaissances, L'Harmattan, Paris (FR), 2005, pp251-271
Annotating a set of informal documents with formal representations raises several questions that must be answered if one wants to develop a coherent system. These questions relate to the form and object of the chosen representations, to the need to use knowledge independent of the documents' content (ontologies, context knowledge), and to the status of the resulting system (a large knowledge base or distributed knowledge elements). These questions are described and illustrated through the annotation of abstracts of molecular genetics articles.
Semantic web, content-based document retrieval, formal annotation, content representation, ontology, context knowledge
Jérôme Euzenat, Adrian Mocan, François Scharffe, Ontology alignments: an ontology management perspective, in: Martin Hepp, Pieter De Leenheer, Aldo De Moor, York Sure (eds), Ontology management: semantic web, semantic web services, and business applications, Springer, New-York (NY US), 2008, pp177-206
Relating ontologies is very important for many ontology-based applications and more important in open environments like the semantic web. The relations between ontology entities can be obtained by ontology matching and represented as alignments. Hence, alignments must be taken into account in ontology management. This chapter establishes the requirements for alignment management. After a brief introduction to matching and alignments, we justify the consideration of alignments as independent entities and provide the life cycle of alignments. We describe the important functions of editing, managing and exploiting alignments and illustrate them with existing components.
ontology matching, ontology alignment, alignment management, alignment server, ontology mediation, mapping
Sébastien Laborie, Jérôme Euzenat, An incremental framework for adapting the hypermedia structure of multimedia documents, in: Manolis Wallace, Marios Angelides, Phivos Mylonas (eds), Advances in Semantic Media Adaptation and Personalization, Springer, Heidelberg (DE), 2008, pp157-176
The multiplication of presentation contexts (such as mobile phones, PDAs) for multimedia documents requires the adaptation of document specifications. In an earlier work, a semantic approach for multimedia document adaptation was proposed. This framework deals with the semantics of the document composition by transforming the relations between multimedia objects. In this chapter, we apply the defined framework to the hypermedia dimension of documents, i.e., hypermedia links between multimedia objects. By considering hypermedia links as particular objects of the document, we adapt the hypermedia dimension with the temporal dimension. However, due to the non-deterministic character of the hypermedia structure, the document is organized in several loosely dependent sub-specifications. To preserve the adaptation framework, we propose a first straightforward strategy that consists of adapting all sub-specifications generated by the hypermedia structure. Nevertheless, this strategy has several drawbacks, e.g., the profile is not able to change between user interactions. Hence, we propose an incremental approach which adapts document sub-specifications step by step according to these interactions. To validate this framework, we adapt real standard multimedia documents such as SMIL documents.
Jérôme Euzenat, Onyeari Mbanefo, Arun Sharma, Sharing resources through ontology alignment in a semantic peer-to-peer system, in: Yannis Kalfoglou (ed), Cases on semantic interoperability for information systems integration: practice and applications, IGI Global, Hershey (PA US), 2009, pp107-126
Semantic peer-to-peer system, semantic annotation, ontology, heterogeneous annotation, resource sharing, ontology alignment, ontology matching, query, peer data management system, alignment composition, alignment inverse, PicSter, semantic web
Cássia Trojahn dos Santos, Jérôme Euzenat, Valentina Tamma, Terry Payne, Argumentation for reconciling agent ontologies, in: Atilla Elçi, Mamadou Koné, Mehmet Orgun (eds), Semantic Agent Systems, Springer, New-York (NY US), 2011, pp89-111
Within open, distributed and dynamic environments, agents frequently encounter and communicate with new agents and services that were previously unknown. However, to overcome the ontological heterogeneity which may exist within such environments, agents first need to reach agreement over the vocabulary and underlying conceptualisation of the shared domain, which will be used to support their subsequent communication. Whilst there are many existing mechanisms for matching the agents' individual ontologies, some are better suited to certain ontologies or tasks than others, and many are unsuited for use in a real-time, autonomous environment. Agents have to agree on which correspondences between their ontologies are mutually acceptable to both agents. As the rationale behind the preferences of each agent may well be private, one cannot always expect agents to disclose their strategy or rationale for communicating. This prevents the use of a centralised mediator or facilitator which could reconcile the ontological differences. The use of argumentation allows two agents to iteratively explore candidate correspondences within a matching process, through a series of proposals and counter-proposals, i.e., arguments. Thus, two agents can reason over the acceptability of these correspondences without explicitly disclosing the rationale for preferring one type of correspondence over another. In this chapter we present an overview of the approaches for alignment agreement based on argumentation.
Faisal Alkhateeb, Jérôme Euzenat, Querying RDF data, in: Sherif Sakr, Eric Pardede (eds), Graph data management: techniques and applications, IGI Global, Hershey (PA US), 2012, pp337-356
This chapter provides an introduction to the RDF language, surveys the languages that can be used for querying RDF graphs, and provides a comparison between these query languages.
RDF, RDF Model, Querying RDF, SPARQL, SPARQL Extensions
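As a concrete illustration of the kind of query language surveyed in this chapter, the following minimal sketch runs a SPARQL SELECT query over a small RDF graph using the rdflib Python library; the example data and the choice of rdflib are ours, not the chapter's.

```python
# Minimal sketch: querying an RDF graph with SPARQL via rdflib.
from rdflib import Graph

g = Graph()
# Two RDF resources described in Turtle syntax (illustrative data).
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" ;
                           foaf:knows <http://example.org/bob> .
<http://example.org/bob>   foaf:name "Bob" .
""", format="turtle")

# SELECT query: names of the people Alice knows.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE {
  <http://example.org/alice> foaf:knows ?person .
  ?person foaf:name ?name .
}
"""
for row in g.query(query):
    print(row.name)  # -> "Bob"
```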
Jérôme Euzenat, Chan Le Duc, Methodological guidelines for matching ontologies, in: Maria Del Carmen Suárez Figueroa, Asunción Gómez Pérez, Enrico Motta, Aldo Gangemi (eds), Ontology engineering in a networked world, Springer, Heidelberg (DE), 2012, pp257-278
Finding alignments between ontologies is a very important operation for ontology engineering. It allows for establishing links between ontologies, either to integrate them in an application or to relate developed ontologies to their context. It is even more critical for networked ontologies. Incorrect alignments may lead to unwanted consequences throughout the whole network and incomplete alignments may fail to provide the expected consequences. Yet, there is no well-established methodology available for matching ontologies. We propose methodological guidelines that build on previously disconnected results and experiences.
Jérôme Euzenat, Marie-Christine Rousset, Web sémantique, in: Pierre Marquis, Odile Papini, Henri Prade (éds), L'IA: frontières et applications, Cepadues, Toulouse (FR), 2014
Le web sémantique ambitionne de rendre le contenu du web accessible au calcul. Il ne s'agit rien moins que de représenter de la connaissance à l'échelle du web. Les principales technologies utilisées dans ce cadre sont: la représentation de connaissance assertionnelle à l'aide de graphes, la définition du vocabulaire de ces graphes à l'aide d'ontologies, la connexion des représentations à travers le web, et leur appréhension pour interpréter la connaissance ainsi exprimée et répondre à des requêtes. Les techniques d'intelligence artificielle, et principalement de représentation de connaissances, y sont donc mises à contribution et à l'épreuve. En effet, elles sont confrontées à des problèmes typiques du web tels que l'échelle, l'hétérogénéité, l'incomplétude, l'incohérence et la dynamique. Ce chapitre propose une courte présentation de l'état du domaine et renvoie aux autres chapitres concernant les technologies mises en oeuvre dans le web sémantique.
RDF, OWL, RDF Model, Querying RDF, SPARQL, SPARQL Extensions
Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Valentina Presutti, Eva Blomqvist, Raphaël Troncy, Harald Sack, Ioannis Papadakis, Anna Tordai (eds), ESWC 2014 satellite events revised selected papers, Springer Verlag, Heidelberg (DE), 2014, pp115-130
Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. This approach, which we call cultural repair, has the advantage of not assuming that everything should be set correctly before trying to communicate and of being able to overcome failures. We test here the adaptation of this approach to alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we perform a series of experiments in which agents react to mistakes in alignments. The agents only know about their ontologies and alignments with others and they act in a fully decentralised way. We show that cultural repair is able to converge towards successful communication through improving the objective correctness of alignments. The obtained results are on par with a baseline of a priori alignment repair algorithms.
The results of [20140305-NOOR] are not correct due to various software bugs and the generated reference alignments. New results are [20180308-NOOR] and [20170208b-NOOR]. Conclusions hold for the former; they are more favorable to agents for the latter.
Ontology alignment, Alignment repair, Cultural knowledge evolution, Agent simulation, Coherence, Network of ontologies
Maria Roşoiu, Jérôme David, Jérôme Euzenat, A linked data framework for Android, in: Elena Simperl, Barry Norton, Dunja Mladenic, Emanuele Della Valle, Irini Fundulaki, Alexandre Passant, Raphaël Troncy (eds), The Semantic Web: ESWC 2012 Satellite Events, Springer Verlag, Heidelberg (DE), 2015, pp204-218
Mobile devices are becoming major repositories of personal information. Still, they do not provide a uniform manner to deal with data from both inside and outside the device. Linked data provides a uniform interface to access structured interconnected data over the web. Hence, exposing mobile phone information as linked data would improve the usability of such information. We present an API that provides data access in RDF, both within mobile devices and from the outside world. This API is based on the Android content provider API which is designed to share data across Android applications. Moreover, it introduces a transparent URI dereferencing scheme, exposing content outside of the device. As a consequence, any application may access data as linked data without any a priori knowledge of the data source.
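The chapter describes an Android API; as a language-neutral illustration of the underlying linked-data principle it relies on, the sketch below shows URI dereferencing with HTTP content negotiation. The URI and the use of Python are illustrative assumptions, not the chapter's code.

```python
# Illustrative sketch (not the chapter's Android API): dereferencing a
# linked-data URI with content negotiation to retrieve an RDF description.
import urllib.request

def dereference(uri: str) -> str:
    """Fetch an RDF description of `uri`, asking the server for Turtle."""
    req = urllib.request.Request(uri, headers={"Accept": "text/turtle"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Any application may access the data without a priori knowledge of the
# data source, exactly as the abstract describes, e.g.:
# print(dereference("http://dbpedia.org/resource/Grenoble"))
```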
Olga Kovalenko, Jérôme Euzenat, Semantic matching of engineering data structures, in: Stefan Biffl, Marta Sabou (eds), Semantic web technologies for intelligent engineering applications, Springer, Heidelberg (DE), 2016, pp137-157
An important element of implementing a data integration solution in multi-disciplinary engineering settings consists in identifying and defining relations between the different engineering data models and data sets that need to be integrated. The ontology matching field investigates methods and tools for discovering relations between semantic data sources and representing them. In this chapter, we look at ontology matching issues in the context of integrating engineering knowledge. We first discuss what types of relations typically occur between engineering objects in multi-disciplinary engineering environments, taking a use case in the power plant engineering domain as a running example. We then overview available technologies for defining mappings between ontologies, focusing on those currently most widely used in practice, and briefly discuss their capabilities for mapping representation and potential processing. Finally, we illustrate how mappings in the sample project in the power plant engineering domain can be generated from the definitions in the Expressive and Declarative Ontology Alignment Language (EDOAL).
Ontology matching, Correspondence, Alignment, Mapping, Ontology integration, Data transformation, Complex correspondences, Ontology mapping languages, Procedural and declarative languages, EDOAL
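As a hypothetical sketch of what "processing" a mapping can mean, the following compiles a simple class-equivalence correspondence into a SPARQL CONSTRUCT query for data transformation. The data structure, URIs and function names are invented for the example and far simpler than actual EDOAL, which supports complex correspondences.

```python
# Hypothetical sketch: compiling a class-equivalence correspondence into
# a SPARQL CONSTRUCT query that re-types source instances in the target
# vocabulary (EDOAL itself is much richer than this).
from dataclasses import dataclass

@dataclass
class Correspondence:
    entity1: str    # class URI in the source ontology
    entity2: str    # class URI in the target ontology
    relation: str   # e.g. "=" for equivalence
    measure: float  # confidence in [0, 1]

def to_construct(c: Correspondence) -> str:
    """Generate a CONSTRUCT query from an equivalence correspondence."""
    assert c.relation == "="
    return (f"CONSTRUCT {{ ?x a <{c.entity2}> }}\n"
            f"WHERE {{ ?x a <{c.entity1}> }}")

corr = Correspondence("http://src.example/Pump",
                      "http://tgt.example/CentrifugalPump", "=", 0.9)
print(to_construct(corr))
```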
Jérôme Euzenat, Knowledge diversity under socio-environmental pressure, in: Michael Rovatsos (ed), Investigating diversity in AI: the ESSENCE project, 2013-2017, Deliverable, ESSENCE, 62p., 2017, pp28-30
Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity.
Jérôme Euzenat, Marie-Christine Rousset, Semantic web, in: Pierre Marquis, Odile Papini, Henri Prade (eds), A guided tour of artificial intelligence research, Springer, Berlin (DE), 575p., 2020, pp181-207
The semantic web aims at making web content interpretable. It is no less than offering knowledge representation at web scale. The main ingredients used in this context are the representation of assertional knowledge through graphs, the definition of the vocabularies used in graphs through ontologies, and the connection of these representations through the web. Artificial intelligence techniques and, more specifically, knowledge representation techniques, are put to use and to the test by the semantic web. Indeed, they have to face typical problems of the web: scale, heterogeneity, incompleteness, and dynamics. This chapter provides a short presentation of the state of the semantic web and refers to other chapters concerning those techniques at work in the semantic web.
RDF, OWL, RDF Model, Querying RDF, SPARQL, SPARQL Extensions
Book coordination/Coordination d'ouvrages
Roland Ducournau, Jérôme Euzenat, Gérald Masini, Amedeo Napoli (éds), Langages et modèles à objets: états des recherches et perspectives, Collection Didactique 19, INRIA, Rocquencourt (FR), 527p., 1998
Isabel Cruz, Stefan Decker, Jérôme Euzenat, Deborah McGuinness (eds), The emerging semantic web, IOS press, Amsterdam (NL), 302p., 2002
The World Wide Web has been the main source of an important shift in the way people get information and order services. However, the current Web is aimed at people only. The Semantic Web is a Web defined and linked in a way that it can be used by machines not just for display purposes, but also for automation, integration and reuse of data across various applications. Facilities and technologies to put machine understandable data on the Web are rapidly becoming a high priority for many communities. In order for computers to provide more help to people, the Semantic Web augments the current Web with formalized knowledge and data that can be processed by computers. It thus needs a language for expressing knowledge. This knowledge is used to describe the content of information sources, through ontologies, and the condition of operation of Web services. One of the challenges of the current Semantic Web development is the design of a framework that allows these resources to interoperate. This book presents the state of the art in the development of the principles and technologies that will allow for the Semantic Web to become a reality. It contains revised versions of a selection of papers presented at the International Semantic Web Working Symposium that address the issues of languages, ontologies, services, and interoperability.
Journal special issue/Éditeur invité
Jérôme Euzenat, Amedeo Napoli (éds), XML et les objets. La voie vers le web sémantique?, RSTI - L'objet (numéro spécial) 9(3):1-122, 2003
XML, Objets
Jérôme Euzenat, Bernard Carré (éds), Langages et modèles à objets 2004 (actes 10e conférence), RSTI - L'objet (numéro spécial) 10(2-3):1-275, 2004
Objets
Pavel Shvaiko, Jérôme Euzenat (eds), Special issue on Ontology matching, International journal of semantic web and information systems (special issue) 3(2):1-122, 2007
Michelle Cheatham, Isabel Cruz, Jérôme Euzenat, Catia Pesquita (eds), Special issue on ontology and linked data matching, Semantic web journal (special issue) 8(2):183-251, 2017
Alvaro Sicilia, Pieter Pauwels, Leandro Madrazo, María Poveda Villalón, Jérôme Euzenat (eds), Special Issue on Semantic Technologies and Interoperability in the Built Environment, Semantic web journal (special issue) 9(6):729-855, 2018
Isabelle Bloch, Jérôme Euzenat, Jérôme Lang, François Schwarzentruber (éds), Post-actes de la Conférence Nationale en Intelligence Artificielle (CNIA 2018-2020), Revue ouverte d'intelligence artificielle (numéro spécial) 3(3-4):193-413, 2022
Conference editing/Publication d'actes
Isabel Cruz, Stefan Decker, Jérôme Euzenat, Deborah McGuinness (eds), Semantic web working symposium (Proc. Semantic Web Working Symposium (SWWS)), Stanford (CA US), 597p., 2001
Jérôme Euzenat, Asunción Gómez Pérez, Nicola Guarino, Heiner Stuckenschmidt (eds), Ontologies and semantic interoperability (Proc. ECAI workshop on Ontologies and semantic interoperability), Lyon (FR), 597p., 2002
Jérôme Euzenat, Carole Goble, Asunción Gómez Pérez, Manolis Koubarakis, David De Roure, Mike Wooldridge (eds), Semantic intelligent middleware for the web and the grid (Proc. ECAI workshop on Semantic intelligent middleware for the web and the grid (SIM)), Valencia (ES), 2004
York Sure, Óscar Corcho, Jérôme Euzenat, Todd Hughes (eds), Evaluation of Ontology-based tools (Proc. 3rd ISWC2004 workshop on Evaluation of Ontology-based tools (EON)), Hiroshima (JP), 97p., 2004
Benjamin Ashpole, Marc Ehrig, Jérôme Euzenat, Heiner Stuckenschmidt (eds), Proc. K-Cap workshop on integrating ontologies, Banff (CA), 105p., 2005
Asunción Gómez Pérez, Jérôme Euzenat (eds), The semantic web: research and applications (Proc. 2nd european semantic web conference (ESWC)), Lecture notes in computer science 3532, 2005
Pavel Shvaiko, Jérôme Euzenat, Alain Léger, Deborah McGuinness, Holger Wache (eds), Context and ontologies: theory and practice (Proc. AAAI workshop on Context and ontologies: theory and practice), Pittsburg (PA US), 143p., 2005
Jérôme Euzenat, John Domingue (eds), Artificial intelligence: methodology, systems and applications (Proc. 12th conference on Artificial intelligence: methodology, systems and applications (AIMSA)), Lecture notes in computer science 4183, 2006
Pavel Shvaiko, Jérôme Euzenat, Alain Léger, Deborah McGuinness, Holger Wache (eds), Context and ontologies: theory and practice (Proc. ECAI workshop on Context and ontologies: theory and practice), Riva del Garda (IT), 88p., 2006
Pavel Shvaiko, Jérôme Euzenat, Natalya Noy, Heiner Stuckenschmidt, Richard Benjamins, Michael Uschold (eds), Proc. 1st ISWC 2006 international workshop on ontology matching (OM), Athens (GA US), 245p., 2006
Paolo Bouquet, Jérôme Euzenat, Chiara Ghidini, Deborah McGuinness, Valeria de Paiva, Luciano Serafini, Pavel Shvaiko, Holger Wache (eds), Proc. 3rd Context workshop on Context and ontologies: representation and reasoning (C&O:RR), Roskilde (DK), 77p., 2007
Also Roskilde University report RU/CS/RR 115
Jérôme Euzenat, Jean-Marc Petit, Marie-Christine Rousset (éds), Actes atelier EGC 2007 sur Passage à l'échelle des techniques de découverte de correspondances (DECOR), Namur (BE), 83p., 2007
Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Bin He (eds), Proc. 2nd ISWC 2007 international workshop on ontology matching (OM), Busan (KR), 308p., 2007
Paolo Bouquet, Jérôme Euzenat, Chiara Ghidini, Deborah McGuinness, Valeria de Paiva, Gulin Qi, Luciano Serafini, Pavel Shvaiko, Holger Wache, Alain Léger (eds), Proc. 4th ECAI workshop on Context and ontologies (C&O), Patras (GR), 38p., 2008
Aldo Gangemi, Jérôme Euzenat (eds), Knowledge engineering: practice and patterns (Proc. 16th International conference on knowledge engineering and knowledge management (EKAW)), Lecture notes in artificial intelligence 5268, 2008
Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt (eds), Proc. 3rd ISWC international workshop on ontology matching (OM), Karlsruhe (DE), 258p., 2008
Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt, Natalya Noy, Arnon Rosenthal (eds), Proc. 4th ISWC workshop on ontology matching (OM), Chantilly (VA US), 271p., 2009
Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt, Ming Mao, Isabel Cruz (eds), Proc. 5th ISWC workshop on ontology matching (OM), Shanghai (CN), 255p., 2010
Pavel Shvaiko, Isabel Cruz, Jérôme Euzenat, Tom Heath, Ming Mao, Christoph Quix (eds), Proc. 6th ISWC workshop on ontology matching (OM), Bonn (DE), 264p., 2011
Philippe Cudré-Mauroux, Jeff Heflin, Evren Sirin, Tania Tudorache, Jérôme Euzenat, Manfred Hauswirth, Josiane Xavier Parreira, James Hendler, Guus Schreiber, Abraham Bernstein, Eva Blomqvist (eds), The semantic web (Proc. 11th International semantic web conference (ISWC)), Lecture notes in computer science 7649, 2012
Philippe Cudré-Mauroux, Jeff Heflin, Evren Sirin, Tania Tudorache, Jérôme Euzenat, Manfred Hauswirth, Josiane Xavier Parreira, James Hendler, Guus Schreiber, Abraham Bernstein, Eva Blomqvist (eds), The semantic web (Proc. 11th International semantic web conference (ISWC)), Lecture notes in computer science 7650, 2012
Pavel Shvaiko, Jérôme Euzenat, Anastasios Kementsietsidis, Ming Mao, Natalya Noy, Heiner Stuckenschmidt (eds), Proc. 7th ISWC workshop on ontology matching (OM), Boston (MA US), 253p., 2012
Pavel Shvaiko, Jérôme Euzenat, Kavitha Srinivas, Ming Mao, Ernesto Jiménez-Ruiz (eds), Proc. 8th ISWC workshop on ontology matching (OM), Sydney (NSW AU), 249p., 2013
Pavel Shvaiko, Jérôme Euzenat, Ming Mao, Ernesto Jiménez-Ruiz, Juanzi Li, Axel-Cyrille Ngonga Ngomo (eds), Proc. 9th ISWC workshop on ontology matching (OM), Riva del Garda (IT), 187p., 2014
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 10th ISWC workshop on ontology matching (OM), Bethlehem (PA US), 239p., 2016
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Ryutaro Ichise (eds), Proc. 11th ISWC workshop on ontology matching (OM), Kobe (JP), 252p., 2016
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), 225p., 2017
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann (eds), Proc. 1st JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Bozen-Bolzano (IT), 42p., 2018
Jérôme Euzenat, François Schwarzentruber (éds), Actes Conférence Nationale AFIA d'Intelligence Artificielle et Rencontres Jeunes Chercheurs en Intelligence Artificielle (CNIA+RJCIA), Nancy (FR), 133p., 2018
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), 227p., 2018
Kemo Adrian, Jérôme Euzenat, Dagmar Gromann, Ernesto Jiménez-Ruiz, Marco Schorlemmer, Valentina Tamma (eds), Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 48p., 2019
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 14th ISWC workshop on ontology matching (OM), Auckland (NZ), 210p., 2020
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 15th ISWC workshop on ontology matching (OM), Athens (GR), 253p., 2020
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 16th ISWC workshop on ontology matching (OM), (online), 218p., 2021
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 17th ISWC workshop on ontology matching (OM), (online), 230p., 2022
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn dos Santos (eds), Proc. 18th ISWC workshop on ontology matching (OM), Athens (GR), 202p., 2023
Conference papers/Communication à des conférences
Jérôme Euzenat, François Rechenmann, Maintenance de la vérité dans les systèmes à base de connaissance centrée-objet, in: Actes 6e congrès AFCET-INRIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Antibes (FR), pp1095-1109, 1987
Le raisonnement non monotone est souvent une conséquence de la connexion des systèmes à base de connaissance à des systèmes informatiques extérieurs. Ces derniers sont en effet susceptibles d'agir sur les données et les connaissances de la base. Les systèmes de maintenance de la vérité (truth maintenance systems) possèdent certaines fonctionnalités requises pour gérer la non monotonie. Ils sont évalués dans le contexte d'une utilisation des représentations centrées-objet. Les caractéristiques de ces dernières (héritage, attachement procédural, valeurs par défaut, attributs multi-valués), et en particulier du modèle retenu dans le système Shirka, amènent à des solutions spécifiques.
maintenance de la vérité, TMS, raisonnement non monotone, représentations centrées-objet
Jérôme Euzenat, Étendre le TMS (vers les contextes), in: Actes 7e congrès AFCET-INRIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Paris (FR), pp581-586, 1989
Les systèmes de maintenance de la vérité ont été conçus pour raisonner à l'aide de connaissance incomplète. Un système de maintenance de la vérité qui combine les avantages des TMS - autorisant l'utilisation d'inférences non monotones - et des ATMS - considérant le raisonnement sous plusieurs contextes simultanément - est présenté. Il maintient un graphe de dépendances entre les objets utilisés par un système de raisonnement et propage à travers ce graphe les contextes dans lesquels les noeuds sont valides. Une théorie de l'interprétation des contextes est présentée. Elle garantit certaines bonnes propriétés aux contextes manipulés par l'implémentation. Les réponses aux requêtes peuvent alors être interprétées sur la base théorique ainsi posée.
Raisonnement hypothétique, Raisonnement multi-monde, Raisonnement non monotone, Systèmes de maintenance de la vérité
Laurent Buisson, Jérôme Euzenat, A quantitative analysis of reasoning for RMSes, in: Proc. 6th International Symposium poster session on Methodologies for Intelligent Systems (ISMIS), Charlotte (NC US), (Technical memorandum ORNL TM-11938, Martin Marietta Oak Ridge National Laboratory, Oak Ridge (TN US), 1991), pp9-20, 1991
For reasoning systems, it is sometimes useful to cache away the inferred values. Meanwhile, when the system works in a dynamic environment, cache coherence has to be performed, and this can be achieved with the help of a reasoning maintenance system (RMS). The questions to be answered, before implementing such a system for a particular application, are: how useful is caching? Does the system need a dynamicity management system? Is an RMS suited (what will be its overhead)?
We provide an application-driven evaluation framework in order to answer these questions. The evaluation is based on the real work to be processed on the reasoning of the application. First, we express the action of caching and maintaining with two concepts: backward and forward cone effects. Then we quantify the inference time for these systems and show how the cone effects appear in the resulting formulas.
Jérôme Euzenat, Libero Maesano, An architecture for selective forgetting, in: Proc. 8th SSAISB conference on Artificial Intelligence and Simulation of Behavior (AISB), Leeds (UK), pp117-128, 1991
Some knowledge-based systems will have to deal with increasing amounts of knowledge. In order to avoid memory overflow, it is necessary to clean memory of useless data. Here is a first step toward an intelligent automatic forgetting scheme. The problem of the close relation between forgetting and inferring is addressed, and a general solution is proposed. It is implemented as invalidation operators for reasoning maintenance system dependency graphs. This results in a general architecture for selective forgetting, which is presented in the framework of the Sachem system.
Jérôme Euzenat, Contexts for nonmonotonic RMSes, in: Proc. 12th International Joint Conference on Artificial Intelligence (IJCAI), Sydney (AU), pp300-305, 1991
A new kind of RMS, based on a close merge of TMS and ATMS, is proposed. It uses the TMS graph and interpretation and the ATMS multiple-context labelling procedure. In order to overcome the problems of ATMS environments in the presence of nonmonotonic inferences, a new kind of environment, able to take into account hypotheses that do not hold, is defined. These environments can inherit formulas that hold, as in the ATMS context lattice. The dependency graph can be interpreted with regard to these environments, so every node can be labelled. Furthermore, this leads to considering several possible interpretations of a query.
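For intuition, here is a much-simplified, purely monotone ATMS-style label propagation over a dependency graph. The paper's actual contribution, environments recording hypotheses that do not hold, is deliberately not modelled in this sketch; all names are ours.

```python
# Simplified monotone ATMS-style labelling: a node's label is the set of
# minimal environments (sets of assumptions) under which it holds.
from itertools import product

def minimal(envs):
    """Keep only minimal environments (none is a superset of another)."""
    return {e for e in envs if not any(o < e for o in envs)}

def label(justifications, node, assumptions):
    """Environments under which `node` holds.
    justifications: dict mapping a node to lists of antecedent lists."""
    if node in assumptions:
        return {frozenset([node])}
    envs = set()
    for antecedents in justifications.get(node, []):
        antecedent_labels = [label(justifications, a, assumptions)
                             for a in antecedents]
        # A justification fires under the union of one environment
        # per antecedent.
        for combo in product(*antecedent_labels):
            envs.add(frozenset().union(*combo))
    return minimal(envs)

# Node c is justified by a and b together; a and b are assumptions.
j = {"c": [["a", "b"]]}
print(label(j, "c", {"a", "b"}))  # {frozenset({'a', 'b'})}
```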
Jérôme Euzenat, SaMaRis: visualiser et manipuler interactivement le raisonnement, in: Actes 3e convention sur intelligence artificielle (CIA), Paris (FR), pp219-238, 1991
Jérôme Euzenat, Laurent Buisson, SaMaRis: un environnement pour l'expérimentation et l'étude du maintien des raisonnements, in: Actes 8e congrès AFCET-INRIA-ARC-AFIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Villeurbanne (FR), pp1233-1247, 1991
SaMaRis est un logiciel destiné à l'étude et à l'expérimentation des systèmes de maintien du raisonnement, ou de tout autre type de systèmes tirant parti d'une représentation explicite d'un raisonnement afin de lui faire subir des opérations constructives (rétablissement de la cohérence), destructives (oubli) ou consultatives (explication). Son architecture est composée de quatre modules indépendants: le protocole de communication avec le système d'inférence, le graphe de dépendances représentant le raisonnement lui-même, les services associés au graphe et les applications générales sur ce graphe. SaMaRis n'a aucune connaissance de la sémantique associée au graphe par le système d'inférence, ainsi son action peut-elle être adaptée à divers types de raisonnements.
Laurent Buisson, Jérôme Euzenat, The ELSA avalanche path analysis system: an experiment with reason maintenance and object-based representations (extended abstract), in: Proc. ECAI workshop on Applications of Reason Maintenance Systems, Wien (AT), 1992
ELSA is an application concerning avalanche path analysis which takes advantage of a reason maintenance system. In order to fully describe it, the tool on which the ELSA application is developed, Shirka/TMS, is first described. It is noteworthy that the RMS of Shirka is a special one: it is only used for cache consistency maintenance. As a consequence, the importance of Shirka/TMS in ELSA lies in preserving cache consistency rather than default assumptions and backtracking. Then, the processing of the ELSA system is presented, emphasizing the use of the RMS: the RMS of Shirka is critical for the performance of the whole system. This is illustrated in the third part, which gives some comparisons of the use of ELSA with and without its RMS, in order to highlight the advantages of such a device.
Jérôme Euzenat, Michel Le, Éric Mazeran, Michel Weinberg, Generic embedding of an uncertain calculus in objects and rules, in: Proc. 1st Singapore International Conference on Intelligent Systems (SPICE), Singapore (SG), pp177-182, 1992
While symbolic knowledge representation and reasoning methods are necessary for almost any kind of knowledge-based application, they often lack numerically represented uncertainty and vagueness. Meanwhile, different applications would require different numeric calculi. The SMECI Uncertain Module (SUM) makes it possible to embed an uncertain (or graded) calculus into a multi-paradigm environment (including tasks, rules, objects and multiple worlds), therefore allowing the object model to take into account uncertain values so that the inference engine can draw uncertain inferences from uncertain and vague premises. The originality of SUM is that it does not make strong assumptions about the calculus used, which only has to respect some fundamental "format" expressed through the design of basic objects and the instantiation of a set of generic primitives. Therefore, SUM is not restricted to numeric truth values but can deal with any kind of values provided with an implementation of the generic interface.
uncertainty, vagueness, fuzzy logic, object-based knowledge representation, inference engine
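A minimal sketch of such a generic interface, with invented method names: the engine depends only on the abstract operations, so any calculus implementing them can be plugged in. This is our illustration of the design idea, not SUM's actual primitives.

```python
# Hedged sketch of the "generic calculus" idea: the inference engine
# relies only on a small abstract interface (method names invented here).
from abc import ABC, abstractmethod

class UncertaintyCalculus(ABC):
    @abstractmethod
    def conjunction(self, a, b): ...
    @abstractmethod
    def disjunction(self, a, b): ...

class PossibilisticCalculus(UncertaintyCalculus):
    """Min/max calculus: one possible instance of the generic interface."""
    def conjunction(self, a, b):
        return min(a, b)
    def disjunction(self, a, b):
        return max(a, b)

calc = PossibilisticCalculus()
print(calc.conjunction(0.7, 0.4))  # 0.4
```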
Jérôme Euzenat, A purely taxonomic and descriptive meaning for classes, in: Proc. IJCAI workshop on object-based representation systems, Chambéry (FR), (Amedeo Napoli (ed), object-based representation systems, Rapport de recherche 93-R-156, CRIN, Nancy (FR), 1993), pp81-92, 1993
Three different aspects of classes in object-based systems are studied: the distinction between classes and instances, the separation of the ontological from the taxonomic function of classes, and their descriptive or definitional meaning. The advantages of using a descriptive and taxonomic meaning for classes are advocated. One of the important reasons for separating ontology from taxonomy is the multiplicity of taxonomies over a same set of objects and the independence of objects from these taxonomies. These distinctions ground the semantics of the object-based representation system TROPES. The specialisation relation in TROPES is examined under this light and the classification mechanism is interpreted under the descriptive setting. It is shown that the use of a descriptive semantics of classes can support a semantics for the classification mechanism. In fact, there is no intrinsic superiority of definition over description: the precision of the former is balanced by the generality of the latter.
Specialisation, Classification, Categorisation, Instantiation, Descriptive classes, Definitional classes
Jérôme Euzenat, Définition abstraite de la classification et son application aux taxonomies d'objets, in: Actes 2e journées EC2 sur représentations par objets (RPO), La Grande-Motte (FR), pp235-246, 1993
La notion de système classificatoire est introduite comme généralisation de la classification dans les systèmes de représentation de connaissance. Sa définition ne dépend d'aucun modèle de connaissance. Les contraintes qui peuvent lui être ajoutées dans un modèle particulier sont examinées sous la forme de propriétés sémantiques, de structures graphiques et de problèmes d'incomplétude venant entacher les propriétés sémantiques. Ces seules contraintes permettront d'établir certaines propriétés (univocité, déterminance) de l'opération de classification et de concevoir les algorithmes en conséquence. Enfin, le système classificatoire est instancié de deux façons extrêmement différentes dans le cadre du modèle TROPES. La diversité de ces deux dernières interprétations est déjà un exemple de la généralité de cette définition.
Objet, Classification, Taxonomie, Catégorisation
Jérôme Euzenat, Brief overview of T-tree: the Tropes Taxonomy building Tool, in: Proc. 4th ASIS SIG/CR workshop on classification research, Columbus (OH US), (rev. Philip Smith, Clare Beghtol, Raya Fidel, Barbara Kwasnik (eds), Advances in classification research 4, Information today, Medford (NJ US), 1994), pp69-87, 1994
TROPES is an object-based knowledge representation system. It allows the representation of multiple taxonomies over the same set of objects through viewpoints and provides tools for classification (identification) of objects and categorisation (classification) of classes from their descriptions. T-TREE is an extension of TROPES for the construction of taxonomies from objects. Data analysis algorithms consider TROPES objects for producing TROPES taxonomies. Thus, data analysis is integrated into the knowledge representation system. Moreover, the original bridge notion permits the comparison and connection of adjacent taxonomies.
Automated techniques to assist in creating classification scheme, Knowledge representation schemes, Classification algorithms, Software for management of classification schemes, Comparison and compatibility between classification scheme
Jérôme Euzenat, KR and OOL co-operation based on semantics non reducibility, in: Proc. ECAI workshop on integrating object-orientation and knowledge representation, Amsterdam (NL), 1994
We argue that, due to semantics non-reducibility, object-based knowledge representation systems (OBKR) and object-oriented programming languages (OOL) cannot be reduced to one another. However, being aware of this incompatibility makes it possible to organise their cohabitation and co-operation accordingly. This is illustrated through the design of a new implementation of the TROPES system.
Jérôme Euzenat, Classification dans les représentations par objets: produits de systèmes classificatoires, in: Actes 9e congrès AFCET-AFIA-ARC-INRIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Paris (FR), pp185-196, 1994
Les systèmes classificatoires représentent la structure supportant une activité de classification. Ils sont définis non pas à partir de la structure des entités à classer mais à partir de l'activité de classification elle-même. Ils prennent en compte la taxonomie dans laquelle est menée la classification et la construction de cette taxonomie. La notion de système classificatoire est étendue à l'aide d'opérations de produit et de projection qui engendrent de nouveaux systèmes classificatoires de telle sorte que les propriétés de ceux-ci leur sont applicables. Les classifications multiples et composées sont ainsi caractérisées par un système classificatoire produit et des algorithmes peuvent être directement inférés de la composition des systèmes. L'exemple de TROPES permet de montrer comment la classification multi-points de vue d'objets composés est élaborée comme un produit de systèmes classificatoires à partir de systèmes classificatoires primitifs correspondant aux types de données.
Classification, Taxonomie, Catégorisation, Systèmes classificatoires, TROPES, Produits de systèmes classificatoires
Cécile Capponi, Jérôme Euzenat, Jérôme Gensel, Objects, types and constraints as classification schemes (abstract), in: Proc. 1st international symposium on Knowledge Retrieval, Use, and Storage for Efficiency (KRUSE), Santa-Cruz (CA US), pp69-73, 1995
The notion of classification scheme is a generic model that encompasses the kind of classification performed in many knowledge representation formalisms. Classification schemes abstract from the structure of individuals and consider only a sub-categorization relationship. The product of classification schemes preserves the status of classification scheme and provides various classification algorithms which rely on the classification defined for each member of the product. Object-based representation formalisms often use heterogeneous ways of representing knowledge. In the particular case of the TROPES system, knowledge is expressed by classes, types and constraints. Here is presented the way to express types and constraints in a type description module which provides them with the simple structure of classification schemes. This mapping allows the integration into TROPES of new types and constraints together with their sub-typing relation. Afterwards, taxonomies of classes are themselves considered to be classification schemes which are products of more primitive ones. Then, this information is sufficient for classifying TROPES objects.
Class, object, type, constraint, classification scheme, sub-type inference
Bernard Carré, Roland Ducournau, Jérôme Euzenat, Amedeo Napoli, François Rechenmann, Classification et objets: programmation ou représentation?, in: Actes 5e journées nationales PRC-GDR intelligence artificielle, Nancy (FR), pp213-237, 1995
Jérôme Euzenat, An algebraic approach to granularity in time representation, in: Proc. 2nd IEEE international workshop on temporal representation and reasoning (TIME), Melbourne (FL US), pp147-154, 1995
Any phenomenon can be seen under a more or less precise granularity, depending on the kind of details which are perceivable. This can be applied to time. A characteristic of abstract spaces such as the one used for representing time is their granularity independence, i.e. the fact that they have the same structure at different granularities. So, time "places" and their relationships can be seen under different granularities and they still behave like time places and relationships under each granularity. However, they do not remain exactly the same time places and relationships. Here is presented a pair of operators for converting (upward and downward) qualitative time relationships from one granularity to another. These operators are the only ones to satisfy a set of six constraints which characterize granularity changes.
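As an illustration of what such conversion operators do, consider the simpler point algebra: moving to a coarser granularity may collapse distinct instants, while moving to a finer one may reveal an order between instants that coincided. The tables below are our intuitive sketch; the paper derives the interval-algebra operators from six constraints, not from these tables.

```python
# Illustrative sketch of granularity conversion for the point algebra
# (relations <, =, >), not the paper's interval-algebra operators.
UPWARD   = {"<": {"<", "="}, "=": {"="}, ">": {">", "="}}   # to coarser
DOWNWARD = {"<": {"<"}, "=": {"<", "=", ">"}, ">": {">"}}   # to finer

def convert(relations: set, table: dict) -> set:
    """Convert a disjunction of point relations across granularities."""
    out = set()
    for r in relations:
        out |= table[r]
    return out

# An instant strictly before another may coincide with it when seen
# more coarsely:
print(convert({"<"}, UPWARD))    # {'<', '='}
# Coinciding instants may be ordered either way at a finer grain:
print(convert({"="}, DOWNWARD))  # {'<', '=', '>'}
```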
Jérôme Euzenat, A categorical approach to time representation: first study on qualitative aspects, in: Proc. IJCAI workshop on spatial and temporal reasoning, Montréal (CA), pp145-152, 1995
The qualitative time representation formalisms are considered from the viewpoint of category theory. The representation of a temporal situation can be expressed as a graph and the relationships holding between that graph and other (imprecise or coarser) views of the same situation are expressed as morphisms. These categorical structures are expected to be combinable with other aspects of knowledge representation, providing a framework for the integration of temporal representation tools and formalisms with other areas of knowledge representation.
Category theory, time representation, temporal granularity, interval algebra
Jérôme Euzenat, An algebraic approach for granularity in qualitative time and space representation, in: Proc. 14th International Joint Conference on Artificial Intelligence (IJCAI), Montréal (CA), pp894-900, 1995
Any phenomenon can be seen under a more or less precise granularity, depending on the kind of details which are perceivable. This can be applied to time and space. A characteristic of abstract spaces such as the one used for representing time is their granularity independence, i.e. the fact that they have the same structure under different granularities. So, time "places" and their relationships can be seen under different granularities and they still behave like time places and relationships under each granularity. However, they do not remain exactly the same time places and relationships. Here is presented a pair of operators for converting (upward and downward) qualitative time relationships from one granularity to another. These operators are the only ones to satisfy a set of six constraints which characterize granularity changes. They are also shown to be useful for spatial relationships.
Jérôme Euzenat, François Rechenmann, Shirka, 10 ans, c'est Tropes ?, in: Actes 2e journées sur langages et modèles à objets (LMO), Nancy (FR), pp13-34, 1995
Il y a dix ans, apparaissait le système de représentation de connaissance SHIRKA. À travers la présentation de sa conception, de son évolution et de son utilisation, on tente d'établir ce que peut être, dix ans plus tard, un système de représentation de connaissance. La mise en oeuvre de deux points clés de SHIRKA - la séparation programmation-représentation et l'utilisation de l'objet partout où cela est possible - est particulièrement étudiée. Ceci permet de considérer leur pertinence et leur évolution pour la représentation de connaissance.
Jérôme Euzenat, Acquérir pour représenter (et raisonner) ou représenter pour acquérir?, in: Actes 6e journées sur acquisition de connaissances (JAC), Grenoble (FR), pp283-285, 1995
Petko Valtchev, Jérôme Euzenat, Classification of concepts through products of concepts and abstract data types (abstract), in: Proc. 1st international conference on data analysis and ordered structures, Paris (FR), pp131-134, 1995
The classification scheme formalism, which represents in a uniform manner both usual data types and structured objects, is introduced. It is here provided with a dissimilarity measure which only takes into account the structure of a given domain: a partial order over a set of classes. The measure we define compares a pair of individuals according to their mutual position within the taxonomy structuring the underlying domain. It is then used to design a classification algorithm able to work on structured objects.
Isabelle Crampé, Jérôme Euzenat, Révision interactive dans une base de connaissance à objets, in: Actes 10e congrès AFCET-AFIA-ARC-INRIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Rennes (FR), pp615-623, 1996
Lors de la construction d'une base de connaissance, la présence d'une inconsistance peut laisser l'utilisateur démuni car il ne peut embrasser l'étendue de la base. Afin de résoudre ce problème, nous proposons un outil lui indiquant les solutions possibles. Les principes de la révision en logique s'appliquent à cette problématique, mais des résultats plus satisfaisants sont envisageables. En effet, afin d'obtenir des solutions minimisant la perte de connaissance, nous allons nous appuyer sur les structures impliquées dans les représentations par objet (ordre de spécialisation, inclusion des domaines). Par ailleurs, la prise en compte des préférences de l'utilisateur et de son statut permet d'organiser la recherche de solutions.
révision, représentation de connaissance par objet, interaction système-utilisateur
Isabelle Crampé, Jérôme Euzenat, Fondements de la révision dans un langage d'objets simple, in: Actes 3e journées sur langages et modèles à objets (LMO), Leysin (CH), pp134-149, 1996
La révision d'une base de connaissance, rendue inconsistante suite à l'ajout d'une assertion, consiste à la rendre consistante en la modifiant. Résoudre ce problème est très utile dans l'assistance aux utilisateurs de bases de connaissance et s'appliquerait avec profit dans le contexte des objets. Afin de poser les bases d'un tel mécanisme, une représentation par objets minimale est formalisée. Elle est dotée de mécanismes d'inférence et d'une caractérisation syntaxique de l'inconsistance et de l'incohérence. La notion de base de connaissance révisée est définie sur ce langage. Un critère de minimalité, à la fois sémantique et syntaxique, permet de définir les bases révisées les plus proches de la base initiale.
Jérôme Euzenat, Knowledge bases as Web page backbones, in: Proc. WWW workshop on artificial intelligence-based tools to help W3 users, Paris (FR), 1996
Jérôme Euzenat, Corporate memory through cooperative creation of knowledge bases and hyper-documents, in: Proc. 10th workshop on knowledge acquisition (KAW), Banff (CA), pp(36)1-18, 1996
Best paper of the corporate memory and enterprise modelling track
The Co4 system is dedicated to the representation of formal knowledge in an object and task based manner. It is fully interleaved with hyper-documents and thus provides integration of formal and informal knowledge. Moreover, consensus about the content of the knowledge bases is enforced with the help of a protocol for integrating knowledge through several levels of consensual knowledge bases. Co4 is presented here as addressing three claims about corporate memory: (1) it must be formalised to the greatest possible extent so that its semantics is clear and its manipulation can be automated; (2) it cannot be totally formalised and thus formal and informal knowledge must be organised such that they refer to each other; (3) in order to be useful, it must be accepted by the people involved (providers and users) and thus must be non-contradictory and consensual.
Jérôme Euzenat, HyTropes: a WWW front-end to an object knowledge management system, in: Proc. 10th demonstration track on knowledge acquisition workshop (KAW), Banff (CA), pp(62)1-12, 1996
HyTropes is an HTTP server allowing the manipulation of a knowledge base written in the Tropes object-based representation language through the world-wide web. It allows the navigation through the knowledge base as well as the invocation of search queries (filters). The display can be customised in order to best suit the needs of the applications. HyTropes will be demonstrated through three prototypic knowledge bases: ColiGene and FirstFly devoted to the genetic regulation of various organisms and STG, a bibliographic knowledge base.
Christian Bessière, Jérôme Euzenat, Robert Jeansoulin, Gérard Ligozat, Sylviane Schwer, Raisonnement spatial et temporel, in: Actes 6e journées nationales PRC-GDR intelligence artificielle, Grenoble (FR), pp77-88, 1997
Bien que, ou parce que, toutes les activités et toutes les perceptions humaines sont relatives au temps et à l'espace, ni les philosophes, ni les scientifiques n'en fournissent de définition unanime. Kant conçoit l'espace et le temps comme des conditions nécessaires de l'expérience humaine, qui ne porte jamais sur la réalité en soi, mais sur les phénomènes qu'on perçoit. Pour Pascal ce sont des choses premières qu'il est impossible, voire inutile de définir.
Le temps et l'espace sont des modalités fondamentales de l'existence et de la connaissance que l'on en a. À défaut de les définir, les hommes se sont attachés au cours des siècles à les mesurer. Ces approches métriques, numériques, ont été l'enjeu de travaux considérables pour gagner en précision. Pour autant, l'absence de précision dans la localisation n'a jamais empêché de constater - qualitativement - que le temps et l'espace sont source de relations entre les objets et les événements.
Les représentations de ces approches qualitatives n'ont reçu de formalisation mathématique que vers la fin du siècle dernier, où Henri Poincaré fonde les bases des travaux ultérieurs sur la relativité comme sur la topologie.
Ces approches qualitatives focalisent le travail du groupe Kanéou, en termes de représentation (logique, modèles, langue naturelle), de traitement (CSP, multi-agents) et d'application (diagnostic, aménagement, systèmes d'information géographique).
Jérôme Euzenat, Christophe Chemla, Bernard Jacq, A knowledge base for D. melanogaster gene interactions involved in pattern formation, in: Proc. 5th international conference on intelligent systems for molecular biology (ISMB), Halkidiki (GR), pp108-119, 1997
The understanding of pattern formation in Drosophila requires the handling of the many genetic and molecular interactions which occur between developmental genes. For that purpose, a knowledge base (KNIFE) has been developed in order to structure and manipulate the interaction data. KNIFE contains data about interactions published in the literature and gathered from various databases. These data are structured in an object knowledge representation system into various interrelated entities. KNIFE can be browsed through a WWW interface in order to select, classify and examine the objects and their references in other bases. It also provides specialised biological tools such as interaction network manipulation and diagnosis of missing interactions.
Jérôme Euzenat, Influence des classes intermédiaires dans les tests de classification, in: Actes 4e poster session sur langages et modèles à objets (LMO), Roscoff (FR), 1997
Dans le cadre d'une tâche de conception de hiérarchie, on met en évidence l'influence des classes intermédiaires (ayant des sous-classes) sur le type de taxonomie obtenue (avec ou sans multi-spécialisation).
construction de taxonomie, génie logiciel, protocole expérimental
Amedeo Napoli, Isabelle Crampé, Roland Ducournau, Jérôme Euzenat, Michel Leclère, Philippe Vismara, Aspects actuels des représentations de connaissances par objets et de la classification, in: Actes 6e journées nationales PRC-GDR intelligence artificielle, Grenoble (FR), pp289-314, 1997
Cet article présente certains thèmes de recherches étudiés par les membres du groupe "Objets et classification" du PRC-IA. Ces thèmes concernent essentiellement la théorie des systèmes de représentation de connaissances par objets (RCPO), la révision d'une base de connaissances dans les systèmes de RCPO, la classification de classes et d'instances, et la mise en oeuvre d'applications, illustrée ici par le système RESYN. Les travaux présentés montrent une certaine continuité avec les préoccupations des membres du groupe depuis qu'il existe. L'article se termine par la présentation d'éléments de définition d'un système de RCPO, et de perspectives de recherches découlant des thèmes explicités dans l'article.
Petko Valtchev, Jérôme Euzenat, Dissimilarity measure for collections of objects and values, in: Proc. 2nd international symposium on intelligent data analysis (IDA), London (UK), (Xiaohui Liu, Paul Cohen, Michael Berthold (eds), Advances in intelligent data analysis, reasoning about data, Lecture notes in computer science 1280, 1997), pp259-272, 1997
Automatic classification may be used in object knowledge bases in order to suggest hypotheses about the structure of the available object sets. Yet its direct application meets some difficulties due to the way data is represented: attributes relating objects, multi-valued attributes, non-standard and external data types used in object descriptions. We present here an approach to the automatic classification of objects based on a specific dissimilarity model. The topological measure, presented in a previous paper, accounts for both object relations and the variety of available data types. In this paper, the extension of the topological measure to multi-valued object attributes, e.g. lists or sets, is presented. The resulting dissimilarity is completely integrated into the knowledge model TROPES, which enables the definition of a classification strategy for an arbitrary knowledge base built on top of TROPES.
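As a rough illustration of what lifting a value-level dissimilarity to multi-valued attributes involves, here is one standard averaged minimum-link construction. It is a generic stand-in, not the topological measure defined in the paper.

```python
# Hedged sketch: lifting a dissimilarity d on single values to sets,
# via the average of each element's distance to the closest element
# of the other set (a standard construction, not the paper's measure).
def set_dissimilarity(xs, ys, d):
    if not xs and not ys:
        return 0.0
    if not xs or not ys:
        return 1.0  # maximal dissimilarity against an empty set
    total = sum(min(d(x, y) for y in ys) for x in xs) \
          + sum(min(d(y, x) for x in xs) for y in ys)
    return total / (len(xs) + len(ys))

# Toy value-level dissimilarity on numbers:
d = lambda a, b: abs(a - b)
print(set_dissimilarity({1, 2}, {2, 5}, d))  # 1.0
```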
Isabelle Crampé, Jérôme Euzenat, Object knowledge base revision, in: Proc. 13th european conference on artificial intelligence (ECAI), Brighton (UK), pp3-7, 1998
A revision framework for object-based knowledge representation languages is presented. It is defined by adapting logical revision to objects and characterised both semantically and syntactically. The syntactic analysis of revision shows that it can be easily interpreted in terms of object structures (e.g. moving classes or enlarging domains). This is the source of the implementation and it enables users to be involved in the revision process.
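The following toy sketch illustrates the spirit of the structure-level repairs mentioned in the abstract (enlarging a domain, moving an entity); all names, checks and the repair list are invented for the illustration and do not reproduce the paper's revision operators.

```python
# Hedged illustration: when an asserted value falls outside an
# attribute's domain interval, propose candidate structural revisions
# instead of merely failing (invented example, not the paper's system).
def propose_repairs(value, domain, klass, attribute):
    lo, hi = domain
    if lo <= value <= hi:
        return []  # consistent: nothing to revise
    return [
        f"enlarge domain of {klass}.{attribute} to "
        f"({min(lo, value)}, {max(hi, value)})",
        f"reattach the instance to a superclass of {klass} "
        f"with a larger domain",
        f"reject the assertion {attribute} = {value}",
    ]

print(propose_repairs(130, (0, 120), "Adult", "age"))
```

Presenting such alternatives is what lets users be involved in the revision process, as the abstract notes.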
Jérôme Euzenat, Algèbres d'intervalles sur des domaines temporels arborescents, in: Actes 11e congrès AFCET-AFIA sur Reconnaissance des Formes et Intelligence Artificielle (RFIA), Clermont-Ferrand (FR), pp385-394, 1998
Afin de concilier les algèbres d'intervalles temporels avec un modèle temporel arborescent, on présente une algèbre d'intervalles dont le modèle du temps est ordonné par un ordre partiel. Elle est ensuite déclinée suivant l'orientation de l'arborescence. L'approche utilisée est classique puisqu'elle consiste à produire une algèbre d'instants dans chacun de ces cas et de " passer à l'intervalle ". Elle est cependant complexifiée par l'introduction de la notion de voisinage conceptuel dont le passage à l'intervalle nécessite de nouveaux développements. De plus, la symétrie passé/futur dans le cas arborescent est nettement mise en évidence et, en particulier, dissociée de la symétrie des relations réciproques.
Algèbres de relations, algèbres d'intervalles, temps arborescent, domaine temporel partiellement ordonné, restriction
Jérôme Euzenat, Des arbres qui cachent des forêts : remarques sur l'organisation hiérarchique de la connaissance, in: Mohamed Hassoun, Omar Larouk, Jean-Paul Metzger (éds), Actes 2e poster session du chapitre français de l'ISKO, Lyon (FR), pp213-215, 1999
Petko Valtchev, Jérôme Euzenat, Une stratégie de construction de taxonomies dans les objets, in: Actes 7e rencontres de la société française de classification (SFC), Nancy (FR), pp307-314, 1999
Construire automatiquement une taxonomie de classes à partir d'objets co-définis et indifférenciables n'est pas une tâche aisée. La partition de l'ensemble d'objets en domaines et la hiérarchisation de ces domaines par la relation de composition permettent de différencier les objets et d'éviter certains cycles impliquant une relation de composition. Par ailleurs, l'utilisation d'une dissimilarité bâtie sur les taxonomies de classes existantes dans certains domaines permet d'éviter de traiter d'autres cycles. Il subsiste cependant des références circulaires qui sont alors circonscrites à une partie bien identifiée des domaines.
Patrick Bougé, Dominique Deneux, Christophe Lerch, Jérôme Euzenat, Jean-Paul Barthès, Michel Tollenaere, Localisation des connaissances dans les systèmes de production: approches multiples pour différents types de connaissance, in: Jacques Perrin, René Soënen (éds), Actes journées Prosper sur Gestion de connaissances, coopération, méthodologie de recherches interdisciplinaires, Toulouse (FR), pp31-50, 2000
La gestion des connaissances s'instancie de manière extrêmement variée au sein des entreprises et elle mobilise des disciplines tout aussi variées. Les connaissances considérées par les différentes approches peuvent être très différentes. On peut se demander si cet état de fait est dû aux approches mises en oeuvre ou exigé par la variété des applications englobées par la gestion de connaissance. On considère un ensemble de projets pouvant être considérés comme relevant de la gestion de connaissance restreinte au cadre des systèmes de production. On observe tout d'abord qu'ils s'attachent à résoudre des problèmes différents par des méthodes différentes. De plus, la corrélation semble faible entre les disciplines et les connaissances d'une part et entre les problèmes et les disciplines d'autre part.
Farid Cerbah, Jérôme Euzenat, Using terminology extraction techniques for improving traceability from formal models to textual requirements, in: Proc. 5th international conference on applications of natural language to information systems (NLDB), Versailles (FR), (Mokrane Bouzeghoub, Zoubida Kedad, Élisabeth Métais (eds), Natural Language Processing and Information Systems, Lecture notes in computer science 1959, 2001), pp115-126, 2000
This article deals with traceability in software engineering. More precisely, we concentrate on the role of terminological knowledge in the mapping between (informal) textual requirements and (formal) object models. We show that terminological knowledge facilitates the production of traceability links, provided that language processing technologies make it possible to elaborate semi-automatically the required terminological resources. The presented system is one step towards incremental formalization from textual knowledge.
XML, terminology, knowledge extraction
Farid Cerbah, Jérôme Euzenat, Integrating textual knowledge and formal knowledge for improving traceability, in: Proc. ECAI workshop on Knowledge Management and Organizational Memory, Berlin (DE), pp10-16, 2000
This article deals with traceability in knowledge repositories. More precisely, we concentrate on the role of terminological knowledge in the mapping between (informal) textual requirements and (formal) object models. We show that terminological knowledge facilitates the production of traceability links, provided that language processing technologies make it possible to elaborate semi-automatically the required terminological resources. The presented system is one step towards incremental formalization from textual knowledge. As such, it is a valuable tool for building knowledge repositories.
XML, terminology, knowledge extraction
Farid Cerbah, Jérôme Euzenat, Integrating textual knowledge and formal knowledge for improving traceability, in: Proc. 12th international conference on knowledge engineering and knowledge management (EKAW), Juan-les-Pins (FR), (Rose Dieng, Olivier Corby (eds), Knowledge engineering and knowledge management: methods, models and tools, Lecture notes in computer science 1937, 2000), pp296-303, 2000
Knowledge engineering often concerns the translation of informal knowledge into a formal representation. This translation process requires support both for itself and for its traceability. We claim that inserting a terminological structure between informal textual documents and their formalization serves both of these goals. Modern terminology extraction tools support the process, in which the terms provide a first sketch of formalized concepts. Moreover, the terms can be used for linking the concepts and the pieces of texts. This is exemplified through the presentation of an implemented system.
XML, terminology, knowledge extraction
Jérôme Euzenat, XML est-il le langage de représentation de connaissance de l'an 2000?, in: Actes 6e journées sur langages et modèles à objets (LMO), Mont Saint-Hilaire (CA), pp59-74, 2000
De nombreuses applications (représentation du contenu, définition de vocabulaire) utilisent XML pour transcrire la connaissance et la communiquer telle quelle ou dans des contextes plus larges. Le langage XML est considéré comme un langage universel et sa similarité avec les systèmes à objets a été remarquée. XML va-t-il donc remplacer les langages de représentation de connaissance? Un exemple concret permet de présenter quelques questions et problèmes posés par la transcription d'un formalisme de représentation de connaissance par objets en XML. Les solutions possibles de ces problèmes sont comparées. L'avantage et la lacune principale d'XML étant son absence de sémantique, une solution à ce problème est ébauchée.
Jérôme Euzenat, Problèmes d'intelligibilité et solutions autour de XML, in: Paul Kopp (éd), Actes séminaire CNES sur Valorisation des données, Labège (FR), 2000
Les problèmes d'intelligibilité et d'interopérabilité que pose et que résout le langage XML sont examinés en explorant progressivement les travaux destinés à les résoudre: XML en tant que langage universel, permet théoriquement l'interopérabilité. Mais XML, métalangage sans sémantique, n'offre aucune possibilité d'intelligibilité pour qui (humain ou programme) ne connaît pas le contenu. XML-Schéma n'améliore que l'interopérabilité en définissant très précisément les types de données (et parfois leurs unités). RDF, langage de description de ressources, est destiné à "ajouter de la sémantique" mais n'en dispose pas lui-même. Il sera donc très difficile (lire impossible) pour un programme de l'interpréter. Plusieurs initiatives indépendantes du W3C s'attachent à produire des langages de descriptions de contenu cette fois-ci dotés d'une sémantique rigoureuse. Ce faisant, ces langages réduisent drastiquement leurs champs d'utilisation et par conséquent les possibilités d'interopérabilité des documents les utilisant. Si le temps est suffisant, on pourra présenter brièvement (a) une proposition de langage de description de la sémantique destiné à préserver l'interopérabilité en améliorant l'intelligibilité ainsi que (b) un projet, actuellement en cours, de comparaison de plusieurs formalismes de représentation de connaissance pour la représentation du contenu.
XML, RDF, Schéma, DSD, Représentation de connaissance par objets, Sémantique
Jérôme Euzenat, Towards formal knowledge intelligibility at the semiotic level, in: Proc. ECAI workshop on applied semiotics: control problems, Berlin (DE), pp59-61, 2000
Exchange of formalised knowledge through computers is developing fast. It is assumed that using knowledge will increase the efficiency of the systems by providing a better understanding of exchanged information. However, intelligibility is in no way ensured by the use of a semantically defined language. This statement of interest explains why, and calls for the involvement of semioticians in tackling this problem.
Semantics, Interoperability, Intelligibility, Computational semiotics
Jérôme Euzenat, Towards a principled approach to semantic interoperability, in: Asunción Gómez Pérez, Michael Gruninger, Heiner Stuckenschmidt, Michael Uschold (eds), Proc. IJCAI workshop on ontology and information sharing, Seattle (WA US), pp19-25, 2001
Semantic interoperability is the faculty of interpreting knowledge imported from other languages at the semantic level, i.e., of ascribing to each imported piece of knowledge the correct interpretation or set of models. It is a very important requirement for delivering a worldwide semantic web. This paper presents preliminary investigations towards developing a unified view of the problem. It proposes a definition of semantic interoperability based on model theory and shows how it applies to existing works in the domain. Then, new applications of this definition to the family of languages approach, ontology patterns and explicit description of semantics are presented.
Semantic interoperability, ontology sharing, knowledge transformation, ontology patterns
Jérôme Euzenat, L'annotation formelle de documents en huit (8) questions, in: Jean Charlet (éd), Actes 6e journées sur ingénierie des connaissances (IC), Grenoble (FR), pp95-110, 2001
Annoter un ensemble de documents informels à l'aide de représentations formelles appelle plusieurs questions qui doivent trouver une réponse si l'on veut développer un système cohérent. Ces questions sont liées à la forme et à l'objet des représentations retenues, à la nécessité d'utiliser de la connaissance indépendante du contenu des documents (ontologies, connaissance de contexte) et au statut du système résultant (grande base de connaissance ou éléments de connaissance distribués). Ces questions sont décrites et illustrées par la tentative d'annotation de résumés d'articles en génétique moléculaire.
Web sémantique, recherche par le contenu, annotation formelle, représentation du contenu, ontologie, connaissance de contexte
Jérôme Euzenat, Laurent Tardif, XML transformation flow processing, in: Proc. 2nd conference on extreme markup languages, Montréal (CA), pp61-72, 2001
The XSLT language is both complex to use in simple cases (like tag renaming or element hiding) and restricted in complex ones (requiring the processing of multiple stylesheets with complex information flows). We propose a framework improving on XSLT. It provides simple-to-use and easy-to-analyze macros for the basic common transformation tasks. It provides a superstructure for composing multiple stylesheets, with multiple input and output documents, in ways that are not accessible within XSLT. Having the whole transformation description in an integrated format makes it possible to control and analyze the complete transformation.
XML, XSLT, Transmorpher, Transformations
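To make the flow-composition idea above concrete, here is a minimal Python sketch (a sketch only: the function names and macro set are hypothetical illustrations, not the actual Transmorpher API); elementary transformations are modelled as document-to-document functions and a flow is their sequential composition.

    # Minimal sketch of a transformation flow (hypothetical names, not
    # the Transmorpher API): macros are document-to-document functions,
    # and a flow composes them sequentially.
    from functools import reduce
    from typing import Callable

    Doc = str  # stand-in for an XML document
    Transform = Callable[[Doc], Doc]

    def rename_tag(old: str, new: str) -> Transform:
        # Simple-to-use, easy-to-analyze macro: tag renaming.
        return lambda d: d.replace(f"<{old}>", f"<{new}>").replace(f"</{old}>", f"</{new}>")

    def hide_element(tag: str) -> Transform:
        # Macro: crude element hiding (string-based, illustration only).
        return lambda d: d.replace(f"<{tag}/>", "")

    def flow(*steps: Transform) -> Transform:
        # Sequential composition of elementary transformations.
        return lambda d: reduce(lambda acc, t: t(acc), steps, d)

    adapt = flow(rename_tag("chapter", "section"), hide_element("draft"))
    print(adapt("<chapter>text<draft/></chapter>"))  # -> <section>text</section>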
Jérôme Euzenat, Preserving modularity in XML encoding of description logics, in: Deborah McGuinness, Peter Patel-Schneider, Carole Goble, Ralph Möller (eds), Proc. 14th workshop on description logics (DL), Stanford (CA US), pp20-29, 2001
Description logics have been designed and studied in a modular way. This has allowed a methodic approach to complexity evaluation. We present a way to preserve this modularity in encoding description logics in XML and show how it can be used for building modular transformations and assembling them easily.
Jérôme Euzenat, An infrastructure for formally ensuring interoperability in a heterogeneous semantic web, in: Proc. 1st conference on semantic web working symposium (SWWS), Stanford (CA US), pp345-360, 2001
Because different applications and different communities require different features, the semantic web might have to face the heterogeneity of the languages for expressing knowledge. Yet, it will be necessary for many applications to use knowledge coming from different sources. In such a context, ensuring the correct understanding of imported knowledge on a semantic ground is very important. We present here an infrastructure based on the notions of transformations from one language to another and of properties satisfied by transformations. We show, in the particular context of semantic properties and description logics markup language, how it is possible (1) to define properties of transformations, (2) to express, in a form easily processed by machine, the proof of a property and (3) to construct by composition a proof of properties satisfied by compound transformations. All these functions are based on extensions of current web standard languages.
XML, XSLT, OMDoc, MathML, DLML, Transmorpher, Transformations, proof
Heiner Stuckenschmidt, Jérôme Euzenat, Ontology Language Integration: A Constructive Approach, in: Proc. KI workshop on Applications of Description Logics, Wien (AT), 2001
The problem of integrating different ontology languages has become of special interest recently, especially in the context of semantic web applications. In this paper, we present an approach that is based on the configuration of a joint language into which all other languages can be translated. We use description logics as a basis for constructing this common language, taking advantage of the modular character and the availability of profound theoretical results in this area. We give the central definitions and exemplify the approach using example ontologies available on the Web.
Rim Al-Hulou, Olivier Corby, Rose Dieng-Kuntz, Jérôme Euzenat, Carolina Medina Ramirez, Amedeo Napoli, Raphaël Troncy, Three knowledge representation formalisms for content-based representation of documents, in: Proc. KR workshop on Formal ontology, knowledge representation and intelligent systems for the world wide web (SemWeb), Toulouse (FR), 2002
Documents accessible from the web or from any document base constitute a significant source of knowledge as soon as the document contents can be represented in an appropriate form. This paper presents the ESCRIRE project, whose objective is to compare three knowledge representation (KR) formalisms, namely conceptual graphs, description logics and objects, for representing and manipulating document contents. The comparison relies on the definition of a pivot language based on XML, allowing the design of a domain ontology, document annotations and queries. Each element has a corresponding translation in each KR formalism, which is used for inference and query answering. In this paper, the principles on which the ESCRIRE project relies and the first results of this original experiment are described. The problems encountered and the advantages and drawbacks of each formalism are analyzed, with emphasis on the ontology-based annotation of document contents and on query answering capabilities.
Jérôme Euzenat, Heiner Stuckenschmidt, The `family of languages' approach to semantic interoperability, in: Borys Omelayenko, Michel Klein (eds), Proc. ECAI workshop on Knowledge Transformation for the Semantic Web, Lyon (FR), pp92-99, 2002
Exchanging knowledge via the web might lead to the use of different representation languages because different applications could take advantage of this knowledge. In order to function properly, the interoperability of these languages must be established on a semantic ground (i.e., based on the models of the representations). Several solutions can be used for ensuring this interoperability. We present a new approach based on a set of knowledge representation languages partially ordered with regard to the transformability from one language to another by preserving a particular property. The advantages of the family of languages approach are the opportunity to choose the language in which a representation will be imported and the possibility to compose the transformations available between the members of the family. For the same set of languages, there can be several structures depending on the property used for structuring the family. We focus here on semantic properties of different strength that allow us to perform practicable but well founded transformations.
Semantic interoperability, ontology sharing, knowledge transformation, ontology patterns
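The ordering underlying the family of languages approach can be sketched as follows (notation ours, reconstructed from the abstract):

    % A language L precedes L' w.r.t. a property p when a p-preserving
    % transformation from L to L' exists (sketch, notation ours).
    L \preceq_p L' \iff \exists\, \tau : L \to L'\ \text{such that}\ \forall \delta \in L,\ p(\delta, \tau(\delta))
    % e.g., model preservation: p(\delta,\tau(\delta)) \equiv \mathcal{M}(\tau(\delta)) = \mathcal{M}(\delta)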
Jérôme Euzenat, Nabil Layaïda, Victor Dias, A semantic framework for multimedia document adaptation, in: Proc. 18th International Joint Conference on Artificial Intelligence (IJCAI), Acapulco (MX), pp31-36, 2003
With the proliferation of heterogeneous devices (desktop computers, personal digital assistants, phones), multimedia documents must be played under various constraints (small screens, low bandwidth). Taking these constraints into account with current document models is impossible. Hence, generic source documents must be transformed into documents compatible with the target contexts. Currently, the design of transformations is left to programmers. We propose here a semantic framework, which accounts for multimedia document adaptation in very general terms. A model of a multimedia document is a potential execution of this document and a context defines a particular class of models. The adaptation should then retain the source document models that belong to the class defined by the context if such models exist. Otherwise, the adaptation should produce a document whose models belong to this class and are ``close'' to those of the source documents. We focus on the temporal dimension of multimedia documents and show how adaptation can take advantage of temporal reasoning techniques. Several metrics are given for assessing the proximity of models.
Errata: the [Freksa 1996] reference should read [Freksa 1992]; p35: the top item is "di"; p35: "si" and "fi" should be exchanged.
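As an illustration of measuring the proximity of models, here is a toy Python version restricted to the point algebra (<, =, >) instead of the full Allen interval algebra used in the paper; the encoding and names are ours:

    # Toy proximity between qualitative temporal models: the distance
    # between two relations is their shortest path in the conceptual
    # neighbourhood graph (here the point algebra: < - = - >), and the
    # distance between two models is the sum of pairwise distances.
    NEIGHBOURS = {"<": {"="}, "=": {"<", ">"}, ">": {"="}}

    def relation_distance(r1, r2):
        frontier, seen, d = {r1}, {r1}, 0
        while r2 not in frontier:  # breadth-first search
            frontier = {n for r in frontier for n in NEIGHBOURS[r]} - seen
            seen |= frontier
            d += 1
        return d

    def model_distance(m1, m2):
        # m1, m2: dict mapping an object pair to a qualitative relation
        return sum(relation_distance(m1[p], m2[p]) for p in m1)

    source = {("a", "b"): "<", ("b", "c"): "<"}
    adapted = {("a", "b"): "=", ("b", "c"): "<"}
    print(model_distance(source, adapted))  # -> 1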
Jérôme Euzenat, De la sémantique formelle à une approche computationelle de l'interprétation, in: Actes journées AS 'Web sémantique' CNRS sur Web sémantique et sciences de l'homme et de la société, Ivry-sur-Seine (FR), 2003
Jérôme Euzenat, Petko Valtchev, An integrative proximity measure for ontology alignment, in: Proc. ISWC workshop on semantic information integration, Sanibel Island (FL US), pp33-38, 2003
Integrating heterogeneous resources of the web will require finding agreement between the underlying ontologies. A variety of methods from the literature may be used for this task; basically, they perform pair-wise comparison of entities from each of the ontologies and select the most similar pairs. We introduce a similarity measure that takes advantage of most of the features of OWL-Lite ontologies and integrates many ontology comparison techniques in a common framework. Moreover, we put forth a computation technique to deal with one-to-many relations and circularities in the similarity definitions.
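A hedged sketch of the computation technique for circular similarity definitions (the averaging scheme, weights and helper names are illustrative assumptions, not the published algorithm):

    # Iterative fixed-point computation of recursively dependent
    # similarities: start from local, non-circular evidence (e.g.,
    # labels) and iterate until the values stabilise.
    def iterate_similarity(pairs, local_sim, neighbours, alpha=0.5, eps=1e-4):
        # pairs: candidate entity pairs; neighbours(e1, e2): the pairs
        # whose similarity the pair (e1, e2) depends on.
        sim = {p: local_sim(*p) for p in pairs}
        while True:
            new = {}
            for p in pairs:
                deps = neighbours(*p)
                structural = sum(sim[q] for q in deps) / len(deps) if deps else 0.0
                new[p] = (1 - alpha) * local_sim(*p) + alpha * structural
            if max(abs(new[p] - sim[p]) for p in pairs) < eps:
                return new
            sim = new

    pairs = [("Person", "Human"), ("name", "label")]
    local = lambda a, b: 1.0 if a.lower() == b.lower() else 0.2
    nbrs = lambda a, b: [("name", "label")] if (a, b) == ("Person", "Human") else []
    print(iterate_similarity(pairs, local, nbrs))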
Jérôme Euzenat, Towards composing and benchmarking ontology alignments, in: Proc. ISWC workshop on semantic information integration, Sanibel Island (FL US), pp165-166, 2003
Jérôme Euzenat, Chouette un langage d'ontologies pour le web!, in: Actes 6e journées sur ingénierie des connaissances (IC), Lyon (FR), 2004
Jérôme Euzenat, Petko Valtchev, Similarity-based ontology alignment in OWL-Lite, in: Ramon López de Mantaras, Lorenza Saitta (eds), Proc. 16th european conference on artificial intelligence (ECAI), Valencia (ES), pp333-337, 2004
Interoperability of heterogeneous systems on the Web will be admittedly achieved through an agreement between the underlying ontologies. However, the richer the ontology description language, the more complex the agreement process, and hence the more sophisticated the required tools. Among current ontology alignment paradigms, similarity-based approaches are both powerful and flexible enough for aligning ontologies expressed in languages like OWL. We define a universal measure for comparing the entities of two ontologies that is based on a simple and homogeneous comparison principle: Similarity depends on the type of entity and involves all the features that make up its definition (such as superclasses, properties, instances, etc.). One-to-many relationships and circularity in entity descriptions constitute the key difficulties in this context: These are dealt with through local matching of entity sets and iterative computation of recursively dependent similarities, respectively.
Jérôme Euzenat, David Loup, Mohamed Touzani, Petko Valtchev, Ontology alignment with OLA, in: York Sure, Óscar Corcho, Jérôme Euzenat, Todd Hughes (eds), Proc. 3rd ISWC2004 workshop on Evaluation of Ontology-based tools (EON), Hiroshima (JP), pp59-68, 2004
Using ontologies is the standard way to achieve interoperability of heterogeneous systems within the Semantic web. However, as the ontologies underlying two systems are not necessarily compatible, they may in turn need to be aligned. Similarity-based approaches to alignment seem to be both powerful and flexible enough to match the expressive power of languages like OWL. We present an alignment tool that follows the similarity-based paradigm, called OLA. OLA relies on a universal measure for comparing the entities of two ontologies that combines in a homogeneous way the entire amount of knowledge used in entity descriptions. The measure is computed by an iterative fixed-point-bound process producing successive approximations of the target solution. The alignments produced by OLA on the contest ontology pairs, and the way they relate to the expected alignments, are discussed, and some preliminary conclusions about the relevance of the similarity-based approach as well as about the experimental settings of the contest are drawn.
Jérôme Euzenat, An API for ontology alignment, in: Proc. 3rd conference on international semantic web conference (ISWC), Hiroshima (JP), (Frank van Harmelen, Sheila McIlraith, Dimitris Plexousakis (eds), The semantic web, Lecture notes in computer science 3298, 2004), pp698-712, 2004
Ontologies are seen as the solution to data heterogeneity on the web. However, the available ontologies are themselves source of heterogeneity. This can be overcome by aligning ontologies, or finding the correspondence between their components. These alignments deserve to be treated as objects: they can be referenced on the web as such, be completed by an algorithm that improves a particular alignment, be compared with other alignments and be transformed into a set of axioms or a translation program. We present here a format for expressing alignments in RDF, so that they can be published on the web. Then we propose an implementation of this format as an Alignment API, which can be seen as an extension of the OWL API and shares some design goals with it. We show how this API can be used for effectively aligning ontologies and completing partial alignments, thresholding alignments or generating axioms and transformations.
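The alignment structure exposed by the format can be pictured as follows (a Python sketch of the data model described above; the actual Alignment API is a Java library and these field names are only indicative):

    # Sketch of the alignment data model: an alignment is a set of
    # cells, each relating two entities with a relation and a confidence.
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        entity1: str          # URI of an entity in the first ontology
        entity2: str          # URI of an entity in the second ontology
        relation: str = "="   # e.g., equivalence or subsumption
        measure: float = 1.0  # confidence in [0, 1]

    @dataclass
    class Alignment:
        onto1: str
        onto2: str
        cells: list = field(default_factory=list)

        def threshold(self, t: float) -> "Alignment":
            # Keep only correspondences whose confidence reaches t.
            return Alignment(self.onto1, self.onto2,
                             [c for c in self.cells if c.measure >= t])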
Jérôme Euzenat, Introduction to the EON Ontology alignment contest, in: York Sure, Óscar Corcho, Jérôme Euzenat, Todd Hughes (eds), Proc. 3rd ISWC2004 workshop on Evaluation of Ontology-based tools (EON), Hiroshima (JP), pp47-50, 2004
Jérôme Euzenat, Dieter Fensel, Asunción Gómez Pérez, Rubén Lara, Knowledge web: realising the semantic web... all the way to knowledge-enhanced multimedia documents, in: Paola Hobson, Ebroul Izquierdo, Yiannis Kompatsiaris, Noel O'Connor (eds), Proc. European workshop on Integration of knowledge, semantic and digital media technologies, London (UK), pp343-350, 2004
The semantic web and semantic web services are major efforts to spread and integrate knowledge technology across the whole web. The Knowledge Web network of excellence aims at supporting their development at the widest European level and at supporting industry in adopting them. It especially investigates solutions to the scalability, heterogeneity and dynamics obstacles to the full development of the semantic web. We explain how Knowledge Web results should benefit knowledge-enhanced multimedia applications.
Faisal Alkhateeb, Jean-François Baget, Jérôme Euzenat, Complex path queries for RDF graphs, in: Proc. ISWC poster session, Galway (IE), ppPID-52, 2005
Marc Ehrig, Jérôme Euzenat, Relaxed precision and recall for ontology matching, in: Benjamin Ashpole, Jérôme Euzenat, Marc Ehrig, Heiner Stuckenschmidt (eds), Proc. K-Cap workshop on integrating ontology, Banff (CA), pp25-32, 2005
In order to evaluate the performance of ontology matching algorithms it is necessary to confront them with test ontologies and to compare the results. The most prominent criteria are precision and recall, originating from information retrieval. However, it can happen that one alignment is very close to the expected result and another quite remote from it, while both share the same precision and recall. This is due to the inability of precision and recall to measure the closeness of the results. To overcome this problem, we present a framework for generalizing precision and recall. This framework is instantiated by three different measures and we show on a motivating example that the proposed measures overcome the rigidity of classical precision and recall.
In the definition of recall-oriented proximity (Table 7, 'relaxed recall based on relation', §4.4.2), the minimum (0) and maximum values (1) are inverted. This problem was independently identified by Jérôme David and Daniel Faria.
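The shape of the generalization can be summarized as follows (notation ours, reconstructed from the abstract):

    % Relaxed precision and recall: an overlap function omega replaces
    % the set intersection (sketch, notation ours).
    P_\omega(A, R) = \frac{\omega(A, R)}{|A|}, \qquad
    R_\omega(A, R) = \frac{\omega(A, R)}{|R|}
    % with |A \cap R| \le \omega(A, R) \le \min(|A|, |R|), so that
    % \omega(A, R) = |A \cap R| yields classical precision and recall.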
Marc Ehrig, Jérôme Euzenat, Generalizing precision and recall for evaluating ontology matching, in: Proc. 4th ISWC poster session, Galway (IE), ppPID-54, 2005
We observe that the precision and recall measures are not able to discriminate between very bad and slightly off-target alignments. We propose to generalize these measures by determining the distance between the obtained alignment and the expected one. This generalization is done so that precision and recall results are at least preserved. In addition, the measures keep some tolerance to errors, i.e., they account for correspondences that are close to the target rather than completely off target.
Jérôme Euzenat, Evaluating ontology alignment methods, in: Proc. Dagstuhl seminar on Semantic interoperability and integration, Wadern (DE), (Yannis Kalfoglou, Marco Schorlemmer, Amit Sheth, Steffen Staab, Michael Uschold (eds), Semantic interoperability and integration, Dagstuhl seminar proceedings (04391), 2005), 2005
Many different methods have been designed for aligning ontologies. These methods use such different techniques that they can hardly be compared theoretically. Hence, it is necessary to compare them on common tests. We present two initiatives that led to the definition and execution of ontology alignment evaluations during 2004. We draw lessons from these two experiments and discuss future improvements.
Jérôme Euzenat, Heiner Stuckenschmidt, Mikalai Yatskevich, Introduction to the Ontology Alignment Evaluation 2005, in: Benjamin Ashpole, Jérôme Euzenat, Marc Ehrig, Heiner Stuckenschmidt (eds), Proc. K-Cap workshop on integrating ontology, Banff (CA), pp61-71, 2005
Jérôme Euzenat, Philippe Guégan, Petko Valtchev, OLA in the OAEI 2005 alignment contest, in: Benjamin Ashpole, Jérôme Euzenat, Marc Ehrig, Heiner Stuckenschmidt (eds), Proc. K-Cap workshop on integrating ontology, Banff (CA), pp97-102, 2005
Among the variety of alignment approaches (e.g., using machine learning, subsumption computation, formal concept analysis, etc.) similarity-based ones rely on a quantitative assessment of pair-wise likeness between entities. Our own alignment tool, OLA, features a similarity model rooted in principles such as: completeness on the ontology language features, weighting of different feature contributions and mutual influence between related ontology entities. The resulting similarities are recursively defined hence their values are calculated by a step-wise, fixed-point-bound approximation process. For the OAEI 2005 contest, OLA was provided with an additional mechanism for weight determination that increases the autonomy of the system.
Jérôme Euzenat, Alignment infrastructure for ontology mediation and other applications, in: Martin Hepp, Axel Polleres, Frank van Harmelen, Michael Genesereth (eds), Proc. 1st ICSOC international workshop on Mediation in semantic web services, Amsterdam (NL), pp81-95, 2005
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Adapter temporellement un document SMIL, in: Actes atelier plate-forme AFIA 2005 sur Connaissance et document temporel, Nice (FR), pp47-58, 2005
Les récentes avancées technologiques permettent aux documents multimédia d'être présentés sur de nombreuses plates-formes (ordinateurs de bureau, PDA, téléphones portables...). Cette diversification des supports a entraîné un besoin d'adaptation des documents à leur contexte d'exécution. Dans [Euzenat2003b], une approche sémantique d'adaptation de documents multimédia a été proposée et temporellement définie à l'aide de l'algèbre d'intervalles d'Allen. Cet article étend ces précédents travaux en les appliquant au langage de spécification de documents multimédia SMIL. Pour cela, des fonctions de traduction de SMIL vers l'algèbre d'Allen (et inversement) ont été définies. Celles-ci préservent la proximité entre le document adapté et le document initial. Enfin, ces fonctions ont été articulées avec [Euzenat2003b].
Adaptation sémantique, Documents multimédia SMIL
Jérôme Euzenat, Jérôme Pierson, Fano Ramparany, Gestion dynamique de contexte pour l'informatique pervasive, in: Actes 15e conférence AFIA-AFRIF sur reconnaissance des formes et intelligence artificielle (RFIA), Tours (FR), pp113, 2006
L'informatique pervasive a pour but d'offrir des services fondés sur la possibilité pour les humains d'interagir avec leur environnement (y compris les objets et autres humains qui l'occupent). Les applications dans ce domaine doivent être capables de considérer le contexte dans lequel les utilisateurs évoluent (qu'il s'agisse de leur localisation physique, de leur position sociale ou hiérarchique ou de leurs tâches courantes ainsi que des informations qui y sont liées). Ces applications doivent gérer dynamiquement l'irruption dans la scène de nouveaux éléments (utilisateurs ou appareils), même inconnus, et produire de l'information de contexte utile à des applications non envisagées. Après avoir examiné les différents modèles de contexte étudiés en intelligence artificielle et en informatique pervasive, nous montrons en quoi ils ne répondent pas directement à ces besoins dynamiques. Nous décrivons une architecture dans laquelle les informations de contexte sont distribuées dans l'environnement et où les gestionnaires de contexte utilisent les technologies développées pour le web sémantique afin d'identifier et de caractériser les ressources disponibles. L'information de contexte est exprimée en RDF et décrite par des ontologies en OWL. Les dispositifs de l'environnement maintiennent leur propre contexte et peuvent communiquer cette information à d'autres dispositifs. Ils obéissent à un protocole simple permettant de les identifier et de déterminer quelles informations ils sont susceptibles d'apporter. Nous montrons en quoi une telle architecture permet d'ajouter de nouveaux dispositifs et de nouvelles applications sans interrompre ce qui fonctionne. En particulier, l'ouverture des langages de description d'ontologies permet d'étendre les descriptions et l'alignement des ontologies permet de considérer des ontologies indépendantes.
Jérôme Euzenat, Jérôme Pierson, Fano Ramparany, A context information manager for pervasive environments, in: Proc. 2nd ECAI workshop on contexts and ontologies (C&O), Riva del Garda (IT), pp25-29, 2006
In a pervasive computing environment, heterogeneous devices need to communicate in order to provide services adapted to the situation of users. So, they need to assess this situation as their context. We have developed an extensible context model using semantic web technologies and a context information management component that enable the interaction between context information producer devices and context information consumer devices, as well as their insertion in an open environment.
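As an illustration of context information expressed with semantic web technologies, here is a short Python sketch using rdflib (the ctx vocabulary is hypothetical, not the ontology of the paper):

    # A device asserting its own context as RDF triples that other
    # devices can consume (hypothetical vocabulary).
    from rdflib import Graph, Literal, Namespace, RDF

    CTX = Namespace("http://example.org/context#")
    g = Graph()
    g.bind("ctx", CTX)
    g.add((CTX.lamp1, RDF.type, CTX.Device))
    g.add((CTX.lamp1, CTX.locatedIn, CTX.livingRoom))
    g.add((CTX.lamp1, CTX.status, Literal("on")))
    print(g.serialize(format="turtle"))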
Jérôme Euzenat, Jérôme Pierson, Fano Ramparany, A context information manager for dynamic environments, in: Proc. 4th international conference on pervasive computing poster session, Dublin (IE), (Tom Pfeifer, Albrecht Schmidt, Woontack Woo, Gavin Doherty, Frédéric Vernier, Kieran Delaney, Bill Yerazunis, Matthew Chalmers, Joe Kiniry (eds), Advances in pervasive computing, Technical report 207, Österreichische Computer Gesellschaft, Wien (AT), 2006), pp79-83, 2006
In a pervasive environment, heterogeneous devices need to communicate in order to provide services adapted to users. We have developed an extensible context model using semantic web technologies and a context information management component that enable the interaction between context information producer devices and context information consumer devices, as well as their insertion in an open environment.
Jérôme Euzenat, Malgorzata Mochol, Pavel Shvaiko, Heiner Stuckenschmidt, Ondřej Sváb, Vojtech Svátek, Willem Robert van Hage, Mikalai Yatskevich, Results of the Ontology Alignment Evaluation Initiative 2006, in: Pavel Shvaiko, Jérôme Euzenat, Natalya Noy, Heiner Stuckenschmidt, Richard Benjamins, Michael Uschold (eds), Proc. 1st ISWC 2006 international workshop on ontology matching (OM), Athens (GA US), pp73-95, 5 November 2006
We present the Ontology Alignment Evaluation Initiative 2006 campaign as well as its results. The OAEI campaign aims at comparing ontology matching systems on precisely defined test sets. OAEI-2006 built on previous campaigns with 6 tracks followed by 10 participants. It shows clear improvements over previous results. The final and official results of the campaign are those published on the OAEI web site.
Jason Jung, Jérôme Euzenat, From Personal Ontologies to Socialized Semantic Space, in: Proc. 3rd ESWC poster session, Budva (ME), 2006
We have designed a three-layered model which involves the networks between people, the ontologies they use, and the concepts occurring in these ontologies. We show how relationships in one network can be extracted from relationships in another, based on analysis techniques relying on each network's specificity. For instance, similarity in the ontology layer can be extracted from a similarity measure on the concept layer.
Jason Jung, Jérôme Euzenat, Measuring semantic centrality based on building consensual ontology on social network, in: Proc. 2nd ESWS workshop on semantic network analysis (SNA), Budva (ME), pp27-39, 2006
We have been focusing on a three-layered socialized semantic space, consisting of social, ontology, and concept layers. In this paper, we propose a new measure of the semantic centrality of people, i.e., their power of semantic bridging, on this architecture. Thereby, consensual ontologies are discovered by a semantic alignment-based mining process in the ontology and concept layers. They are represented as the maximal semantic substructures among the personal ontologies of a semantically interlinked community. Finally, we show an example of semantic centrality applied to resource annotation on a social network, and discuss the assumptions used in formulating this measure.
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Adaptation spatiale efficace de documents SMIL, in: Actes 15e conférence AFIA-AFRIF sur reconnaissance des formes et intelligence artificielle (RFIA), Tours (FR), pp127, 2006
La multiplication des supports de présentation multimédia entraîne un besoin d'adaptation des documents à leur contexte d'exécution. Nous avons proposé une approche sémantique d'adaptation de documents multimédia qui a été temporellement définie à l'aide de l'algèbre d'intervalles d'Allen. Cet article étend ces précédents travaux à la dimension spatiale des documents SMIL. Notre objectif est de trouver une représentation spatiale qualitative permettant de calculer un ensemble de solutions d'adaptation proche du document initial. La qualité d'une adaptation se mesure à deux niveaux: expressivité des solutions d'adaptation et rapidité de calcul. Dans ce contexte, nous caractérisons la qualité de l'adaptation selon plusieurs types de représentations spatiales existantes. Nous montrons que ces représentations ne permettent pas d'avoir une qualité d'adaptation optimale. Nous proposons alors une nouvelle représentation spatiale suffisamment expressive permettant d'adapter rapidement des documents multimédia SMIL.
Adaptation sémantique, Documents multimédia SMIL
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, A spatial algebra for multimedia document adaptation, in: Yannis Avrithis, Yiannis Kompatsiaris, Steffen Staab, Noel O'Connor (eds), Proc. 1st International Conference on Semantic and Digital Media Technologies poster session (SAMT), Athens (GR), pp7-8, 2006
The multiplication of execution contexts for multimedia documents requires the adaptation of document specifications. This paper instantiates our previous semantic approach for multimedia document adaptation to the spatial dimension of multimedia documents. Our goal is to find a qualitative spatial representation that computes, in a reasonable time, a set of adaptation solutions close to the initial document satisfying a profile. The quality of an adaptation can be regarded in two respects: expressiveness of adaptation solutions and computation speed. In this context, we propose a new spatial representation sufficiently expressive to adapt multimedia documents faster.
Adaptation sémantique, Documents multimédia SMIL
Sébastien Laborie, Jérôme Euzenat, Adapting the hypermedia structure in a generic multimedia adaptation framework, in: Phivos Mylonas, Manolis Wallace, Marios Angelides (eds), Proc. 1st International Workshop on Semantic Media Adaptation and Personalization (SMAP), Athens (GR), pp62-67, 2006
The multiplication of execution contexts for multimedia documents requires the adaptation of document specifications. We proposed a semantic approach for multimedia document adaptation. This paper extends this framework to the hypermedia dimension of multimedia documents, i.e., hypermedia links between multimedia objects. By considering hypermedia links as particular objects of the document, it is possible to adapt the hypermedia dimension together with other dimensions such as the temporal one. However, due to the hypermedia structure, several specifications have to be considered. Thus, to preserve our adaptation framework, we first propose a straightforward strategy that consists of adapting all specifications generated by the hypermedia structure. However, we show that this strategy has several drawbacks, e.g., high computational costs. Hence, we propose to adapt document specifications step by step according to user interactions.
Adaptation sémantique, Documents multimédia SMIL
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Adaptation sémantique de documents SMIL, in: Actes journées de travail interdisciplinaire sur autour des documents structurés, Giens (FR), pp1-5, 2006
Adaptation sémantique, Documents multimédia SMIL
Loredana Laera, Valentina Tamma, Trevor Bench-Capon, Jérôme Euzenat, Agent-based argumentation for ontology alignments, in: Proc. 6th ECAI workshop on Computational models of natural argument (CMNA), Riva del Garda (IT), pp40-46, 2006
When agents communicate they do not necessarily use the same vocabulary or ontology. For them to interact successfully they must find correspondences between the terms used in their ontologies. While many proposals for matching two agent ontologies have been presented in the literature, the resulting alignment may not be satisfactory to both agents and can become the object of further negotiation between them. This paper describes our work on constructing a formal framework for reaching agents' consensus on the terminology they use to communicate. In order to accomplish this, we adapt argument-based negotiation used in multi-agent systems to deal specifically with arguments that support or oppose candidate correspondences between ontologies. Each agent can decide according to its interests whether to accept or refuse a candidate correspondence. The proposed framework considers arguments and propositions that are specific to the matching task and related to the ontology semantics. This argumentation framework relies on a formal argument manipulation schema and on an encoding of the agents' preferences between particular kinds of arguments. The former does not vary between agents, whereas the latter depends on the interests of each agent. Therefore, this work distinguishes clearly between the alignment rationales valid for all agents and those specific to a particular agent.
Loredana Laera, Valentina Tamma, Jérôme Euzenat, Trevor Bench-Capon, Terry Payne, Reaching agreement over ontology alignments, in: Proc. 5th conference on International semantic web conference (ISWC), Athens (GA US), (Isabel Cruz, Stefan Decker, Dean Allemang, Chris Preist, Daniel Schwabe, Peter Mika, Michael Uschold, Lora Aroyo (eds), The semantic web - ISWC 2006 (Proc. 5th conference on International semantic web conference (ISWC)), Lecture notes in computer science 4273, 2006), pp371-384, 2006
When agents communicate, they do not necessarily use the same vocabulary or ontology. For them to interact successfully, they must find correspondences (mappings) between the terms used in their respective ontologies. While many proposals for matching two agent ontologies have been presented in the literature, the resulting alignment may not be satisfactory to both agents, and thus may necessitate additional negotiation to identify a mutually agreeable set of correspondences. We propose an approach for supporting the creation and exchange of different arguments, that support or reject possible correspondences. Each agent can decide, according to its preferences, whether to accept or refuse a candidate correspondence. The proposed framework considers arguments and propositions that are specific to the matching task and are based on the ontology semantics. This argumentation framework relies on a formal argument manipulation schema and on an encoding of the agents' preferences between particular kinds of arguments. Whilst the former does not vary between agents, the latter depends on the interests of each agent. Thus, this approach distinguishes clearly between alignment rationales which are valid for all agents and those specific to a particular agent.
Loredana Laera, Valentina Tamma, Jérôme Euzenat, Trevor Bench-Capon, Terry Payne, Arguing over ontology alignments, in: Proc. 1st ISWC 2006 international workshop on ontology matching (OM), Athens (GA US), pp49-60, 2006
In open and dynamic environments, agents will usually differ in the domain ontologies they commit to and in their perception of the world. The availability of alignment services, which are able to provide correspondences between two ontologies, is only a partial solution to achieving interoperability between agents, because any given candidate set of alignments is only suitable in certain contexts. For a given context, different agents might have different and inconsistent perspectives that reflect their differing interests and preferences on the acceptability of candidate mappings, each of which may be rationally acceptable. In this paper we introduce an argumentation-based framework through which agents negotiate the terminology they use in order to communicate. This argumentation framework relies on a formal argument manipulation schema and on an encoding of the agents' preferences between particular kinds of arguments. The former does not vary between agents, whereas the latter depends on the interests of each agent. Thus, this approach distinguishes clearly between the alignment rationales valid for all agents and those specific to a particular agent.
Malgorzata Mochol, Anja Jentzsch, Jérôme Euzenat, Applying an analytic method for matching approach selection, in: Proc. 1st ISWC 2006 international workshop on ontology matching (OM), Athens (GA US), pp37-48, 2006
One of the main open issues in the ontology matching field is the selection of a relevant and suitable matcher. The suitability of the given approaches is determined w.r.t. the requirements of the application and with careful consideration of a number of factors. This work proposes a multilevel characterization of matching approaches, which provides a basis for the comparison of different matchers and is used in the decision-making process for selecting the most appropriate algorithm.
Antoine Zimmermann, Markus Krötzsch, Jérôme Euzenat, Pascal Hitzler, Formalizing ontology alignment and its operations with category theory, in: Proc. 4th International conference on Formal ontology in information systems (FOIS), Baltimore (MD US), (Brandon Bennett, Christiane Fellbaum (eds), Proc. 4th International conference on Formal ontology in information systems (FOIS), Baltimore (MD US), IOS Press, Amsterdam (NL), 2006), pp277-288, 2006
An ontology alignment is the expression of relations between different ontologies. In order to view alignments independently from the language expressing ontologies and from the techniques used for finding the alignments, we use a category-theoretical model in which ontologies are the objects. We introduce a categorical structure, called V-alignment, made of a pair of morphisms with a common domain having the ontologies as codomain. This structure serves to design an algebra that formally describes ontology merging, alignment composition, union and intersection in terms of categorical constructions. This enables combining alignments of various provenance. Although the desirable properties of this algebra make such abstract manipulation of V-alignments very simple, in practice it is not well fitted for expressing complex alignments: expressing subsumption between entities of two different ontologies demands the definition of non-standard categories of ontologies. We consider two approaches to solve this problem. The first one extends the notion of V-alignments to a more complex structure called W-alignments: a formalization of alignments relying on "bridge axioms". The second one relies on an elaborate concrete category of ontologies that offers high expressive power. We show that these two extensions have different advantages that may be exploited in different contexts (viz., merging, composing, joining or meeting): the first one efficiently processes ontology merging thanks to the possible use of categorical institution theory, while the second one benefits from the simplicity of the algebra of V-alignments.
Antoine Zimmermann, Jérôme Euzenat, Three semantics for distributed systems and their relations with alignment composition, in: Proc. 5th conference on International semantic web conference (ISWC), Athens (GA US), (Isabel Cruz, Stefan Decker, Dean Allemang, Chris Preist, Daniel Schwabe, Peter Mika, Michael Uschold, Lora Aroyo (eds), The semantic web - ISWC 2006 (Proc. 5th conference on International semantic web conference (ISWC)), Lecture notes in computer science 4273, 2006), pp16-29, 2006
An ontology alignment explicitly describes the relations holding between two ontologies. A system composed of ontologies and alignments interconnecting them is herein called a distributed system. We give three different semantics of a distributed system, which do not interfere with the semantics of ontologies. Their advantages are compared with respect to allowing consistent merging of ontologies, managing heterogeneity and complying with an alignment composition operation. We show that only the first two variants, which differ from other proposed semantics, can offer a sound composition operation.
Jean-François Djoufak-Kengue, Jérôme Euzenat, Petko Valtchev, OLA in the OAEI 2007 evaluation contest, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Bin He (eds), Proc. 2nd ISWC workshop on ontology matching (OM), Busan (KR), pp188-195, 2007
Similarity has become a classical tool for ontology confrontation motivated by alignment, mapping or merging purposes. In the definition of an ontology-based measure one has the choice between covering a single facet (e.g., URIs, labels, instances of an entity, etc.), covering all of the facets or just a subset thereof. In our matching tool, OLA, we had opted for an integrated approach towards similarity, i.e., calculation of a unique score for all candidate pairs based on an aggregation of all facet-wise comparison results. Such a choice further requires effective means for establishing importance ratios for facets, or weights, as well as for extracting an alignment out of the ultimate similarity matrix. In previous editions of the competition OLA relied on a graph representation of the ontologies to align, called OL-graphs, that faithfully reflected the syntactic structure of the OWL descriptions. A pair of OL-graphs was exploited to form and solve a system of equations whose approximate solutions were taken as the similarity scores. OLA2 is a new version of OLA which comprises a less integrated yet more homogeneous graph representation that allows similarity to be expressed as graph matching and further computed through matrix multiplication. Although OLA2 lacks key optimization tools of the previous version, as well as semantic grounding in the form of a WordNet engine, its results in the competition, at least on the benchmark test suite, are perceivably better.
Jérôme Euzenat, Semantic precision and recall for ontology alignment evaluation, in: Proc. 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad (IN), pp348-353, 2007
In order to evaluate ontology matching algorithms it is necessary to confront them with test ontologies and to compare the results with some reference. The most prominent comparison criteria are precision and recall, originating from information retrieval. Precision and recall are thought of as some degree of correctness and completeness of results. However, when the objects to compare are semantically defined, like ontologies and alignments, it can happen that a fully correct alignment has low precision. This is due to the restricted set-theoretic foundation of these measures. Drawing on previous syntactic generalizations of precision and recall, semantically justified measures that satisfy maximal precision and maximal recall for correct and complete alignments are proposed. These new measures are compatible with classical precision and recall and can be computed.
The proposed measure was supposed to be syntactically neutral: that all semantically equivalent alignments would have the same result for the measure. This is not the case and it is possible to cheat the measure by adding redundancy. This problem is discussed in [david2008b]. Thanks to Jérôme David for identifying this mistake.
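Writing Cn for the deductive closure of an alignment, the measures can be sketched as follows (our reconstruction from the abstract; the erratum above concerns the computable variants):

    % Ideal semantic precision and recall (generally not computable):
    P_{sem}(A, R) = \frac{|Cn(A) \cap Cn(R)|}{|Cn(A)|}, \qquad
    R_{sem}(A, R) = \frac{|Cn(A) \cap Cn(R)|}{|Cn(R)|}
    % Computable variants confront each alignment with the closure of
    % the other:
    P(A, R) = \frac{|A \cap Cn(R)|}{|A|}, \qquad
    R(A, R) = \frac{|Cn(A) \cap R|}{|R|}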
Jérôme Euzenat, Antoine Zimmermann, Frederico Freitas, Alignment-based modules for encapsulating ontologies, in: Bernardo Cuenca Grau, Vasant Honavar, Anne Schlicht, Frank Wolter (eds), Proc. 2nd workshop on Modular ontologies (WoMO), Whistler (BC CA), pp32-45, 2007
Ontology engineering on the web requires a well-defined ontology module system that allows knowledge sharing. This involves declaring modules that expose their content through an interface which hides the way concepts are modeled. We provide a straightforward syntax for such modules which is mainly based on ontology alignments. We show how to adapt a generic semantics of alignments so that it accounts for the hiding of non-exported elements, but honors the semantics of the encapsulated ontologies. The generality of this framework allows modules to be reused within different contexts built upon various logical formalisms.
ontology alignment, modular ontology, ontology engineering
Jérôme Euzenat, Antoine Isaac, Christian Meilicke, Pavel Shvaiko, Heiner Stuckenschmidt, Ondřej Sváb, Vojtech Svátek, Willem Robert van Hage, Mikalai Yatskevich, Results of the Ontology Alignment Evaluation Initiative 2007, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Bin He (eds), Proc. 2nd ISWC 2007 international workshop on ontology matching (OM), Busan (KR), pp96-132, 11 November 2007
We present the Ontology Alignment Evaluation Initiative 2007 campaign as well as its results. The OAEI campaign aims at comparing ontology matching systems on precisely defined test sets. OAEI-2007 builds on previous campaigns by having 4 tracks with 7 test sets followed by 17 participants. This is a major increase in the number of participants compared to previous years. Also, the evaluation results demonstrate that more participants are at the forefront. The final and official results of the campaign are those published on the OAEI web site.
Jason Jung, Jérôme Euzenat, Towards semantic social networks, in: Proc. 4th conference on European semantic web conference (ESWC), Innsbruck (AT), (Enrico Franconi, Michael Kifer, Wolfgang May (eds), The semantic web: research and applications (Proc. 4th conference on European semantic web conference (ESWC)), Lecture notes in computer science 4519, 2007), pp267-280, 2007
Computer manipulated social networks are usually built from the explicit assertion by users that they have some relation with other users or by the implicit evidence of such relations (e.g., co-authoring). However, since the goal of social network analysis is to help users to take advantage of these networks, it would be convenient to take more information into account. We introduce a three-layered model which involves the network between people (social network), the network between the ontologies they use (ontology network) and a network between concepts occurring in these ontologies. We explain how relationships in one network can be extracted from relationships in another one based on analysis techniques relying on this network specificity. For instance, similarity in the ontology network can be extracted from a similarity measure on the concept network. We illustrate the use of these tools for the emergence of consensus ontologies in the context of semantic peer-to-peer systems.
Jason Jung, Antoine Zimmermann, Jérôme Euzenat, Concept-based query transformation based on semantic centrality in semantic peer-to-peer environment, in: Proc. 9th Conference on Asia-Pacific web (APWeb), Huang Shan (CN), (Guozhu Dong, Xuemin Lin, Wei Wang, Yun Yang, Jeffrey Xu Yu (eds), Advances in data and web management (Proc. 9th Conference on Asia-Pacific web (APWeb)), Lecture notes in computer science 4505, 2007), pp622-629, 2007
Query transformation is a serious hurdle in semantic peer-to-peer environments. The problem is that transformed queries might lose some information from the original one as they continuously travel across p2p networks. We mainly consider two factors: (i) the number of transformations and (ii) the quality of ontology alignment. In this paper, we propose a semantic centrality (SC) measure expressing the power of semantic bridging in a semantic p2p environment. Thereby, we want to build semantically cohesive user subgroups and find the best peers for query transformation, i.e., those minimizing information loss. We show an example of retrieving image resources annotated in a p2p environment by using query transformation based on SC.
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Multimedia document summarization based on a semantic adaptation framework, in: Proc. 1st international workshop on Semantically aware document processing and indexing (SADPI), Montpellier (FR), (Henri Betaille, Jean-Yves Delort, Peter King, Marie-Laure Mugnier, Jocelyne Nanard, Marc Nanard (eds), Proc. 1st international workshop on Semantically aware document processing and indexing (SADPI), Montpellier (FR), ACM Press, 2007), pp87-94, 2007
The multiplication of presentation contexts (such as mobile phones or PDAs) for multimedia documents requires the adaptation of document specifications. In an earlier work, a semantic framework for multimedia document adaptation was proposed. This framework deals with the semantics of the document composition by transforming the relations between multimedia objects. However, it lacked the capability of suppressing multimedia objects. In this paper, we extend the proposed adaptation with this capability. Thanks to this extension, we present a method for summarizing multimedia documents. Moreover, when multimedia objects are removed, the resulting document satisfies properties such as presentation contiguity. To validate our framework, we adapt standard multimedia documents such as SMIL documents.
Loredana Laera, Ian Blacoe, Valentina Tamma, Terry Payne, Jérôme Euzenat, Trevor Bench-Capon, Argumentation over Ontology Correspondences in MAS, in: Proc. 6th International conference on Autonomous Agents and Multiagent Systems (AAMAS), Honolulu (HI US), pp1285-1292, 2007
In order to support semantic interoperation in open environments, where agents can dynamically join or leave and no prior assumption can be made on the ontologies to align, the different agents involved need to agree on the semantics of the terms used during the interoperation. Reaching this agreement can only come through some sort of negotiation process. Indeed, agents will differ in the domain ontologies they commit to; and their perception of the world, and hence the choice of vocabulary used to represent concepts. We propose an approach for supporting the creation and exchange of different arguments, that support or reject possible correspondences. Each agent can decide, according to its preferences, whether to accept or refuse a candidate correspondence. The proposed framework considers arguments and propositions that are specific to the matching task and are based on the ontology semantics. This argumentation framework relies on a formal argument manipulation schema and on an encoding of the agents' preferences between particular kinds of arguments.
François Scharffe, Jérôme Euzenat, Ying Ding, Dieter Fensel, Correspondence patterns for ontology mediation, in: Proc. ISWC poster session, Busan (KR), pp89-90, 2007
Faisal Alkhateeb, Jean-François Baget, Jérôme Euzenat, Constrained regular expressions in SPARQL, in: Hamid Arabnia, Ashu Solo (eds), Proc. international conference on semantic web and web services (SWWS), Las Vegas (NV US), pp91-99, 2008
We have proposed an extension of SPARQL, called PSPARQL, to characterize paths of variable length in an RDF knowledge base (e.g., "Does there exist a trip from town A to town B?"). However, PSPARQL queries do not allow expressing constraints on internal nodes (e.g., "Moreover, one of the stops must provide wireless access."). This paper proposes an extension of PSPARQL, called CPSPARQL, that allows expressing constraints on paths. For this extension, we provide an abstract syntax and semantics, as well as a sound and complete inference mechanism for answering CPSPARQL queries.
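The kind of query added by the extension can be illustrated with a toy graph search (plain Python over an edge set, not CPSPARQL syntax; all names are ours):

    # Path query with a constraint on internal nodes: find a path from
    # start to goal such that some intermediate stop satisfies the
    # constraint (e.g., provides wireless access).
    def constrained_path(edges, start, goal, constraint):
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            if node == goal and any(constraint(n) for n in path[1:-1]):
                return path
            for (s, t) in edges:
                if s == node and t not in path:
                    stack.append((t, path + [t]))
        return None

    edges = {("A", "X"), ("X", "B"), ("A", "Y"), ("Y", "B")}
    wireless = {"Y"}
    print(constrained_path(edges, "A", "B", lambda n: n in wireless))
    # -> ['A', 'Y', 'B']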
Camila Bezerra, Frederico Freitas, Jérôme Euzenat, Antoine Zimmermann, ModOnto: A tool for modularizing ontologies, in: Proc. 3rd workshop on ontologies and their applications (Wonto), Salvador de Bahia (Bahia BR), 26 October 2008
During the last three years there has been growing interest and consequently active research on ontology modularization. This paper presents a concrete tool that incorporates an approach to ontology modularization inheriting some of the main principles of object-oriented software engineering, namely encapsulation and information hiding. What motivated us to take this direction is the fact that most approaches to the problem focus on linking ontologies (or modules) rather than on building modules that encapsulate foreign parts of ontologies (or other modules) and can be managed more easily.
ontology, modularization, reuse, composition
Caterina Caraciolo, Jérôme Euzenat, Laura Hollink, Ryutaro Ichise, Antoine Isaac, Véronique Malaisé, Christian Meilicke, Juan Pane, Pavel Shvaiko, Heiner Stuckenschmidt, Ondřej Sváb, Vojtech Svátek, Results of the Ontology Alignment Evaluation Initiative 2008, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt (eds), Proc. 3rd ISWC workshop on ontology matching (OM), Karlsruhe (DE), pp73-119, 2008
Ontology matching consists of finding correspondences between ontology entities. OAEI campaigns aim at comparing ontology matching systems on precisely defined test sets. Test sets can use ontologies of different nature (from expressive OWL ontologies to simple directories) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2008 builds over previous campaigns by having 4 tracks with 8 test sets followed by 13 participants. Following the trend of previous years, more participants reach the forefront. The official results of the campaign are those published on the OAEI web site.
Jérôme David, Jérôme Euzenat, Comparison between ontology distances (preliminary results), in: Proc. 7th conference on international semantic web conference (ISWC), Karlsruhe (DE), (Amit Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy Finin, Krishnaprasad Thirunarayan (eds), The semantic web, Lecture notes in computer science 5318, 2008), pp245-260, 2008
There are many reasons for measuring a distance between ontologies. In particular, it is useful to know quickly whether two ontologies are close or remote before deciding to match them. To that end, a distance between ontologies must be quickly computable. We present constraints applying to such measures and several possible ontology distances. Then we evaluate experimentally some of them in order to assess their accuracy and speed.
Jérôme David, Jérôme Euzenat, On fixing semantic alignment evaluation measures, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt (eds), Proc. 3rd ISWC workshop on ontology matching (OM), Karlsruhe (DE), pp25-36, 2008
The evaluation of ontology matching algorithms mainly consists of comparing a produced alignment with a reference one. Usually, this evaluation relies on the classical precision and recall measures. This evaluation model is not satisfactory since it takes into account neither the closeness of correspondences nor the semantics of alignments. A first solution consists of generalizing the precision and recall measures in order to overcome the rigidity of the classical model. Another solution aims at taking advantage of the semantics of alignments in the evaluation. In this paper, we show and analyze the limits of these evaluation models. Given that measure values depend on the syntactic form of the alignment, we first propose a normalization of alignments. Then, we propose two new sets of evaluation measures. The first one is a semantic extension of relaxed precision and recall. The second one consists of bounding the alignment space to make ideal semantic precision and recall applicable.
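For reference, classical precision and recall compare a produced alignment A with a reference alignment R as sets of correspondences, while semantic variants replace both by their sets of consequences Cn(·) (notation assumed here for illustration; the paper's bounded versions further restrict Cn(·) to a finite alignment space to keep the measures applicable):

\[
P(A,R)=\frac{|A\cap R|}{|A|},\qquad R(A,R)=\frac{|A\cap R|}{|R|}
\]
\[
P_{sem}(A,R)=\frac{|Cn(A)\cap Cn(R)|}{|Cn(A)|},\qquad R_{sem}(A,R)=\frac{|Cn(A)\cap Cn(R)|}{|Cn(R)|}
\]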
Jean-François Djoufak-Kengue, Jérôme Euzenat, Petko Valtchev, Alignement d'ontologies dirigé par la structure, in: Actes 14e journées nationales sur langages et modèles à objets (LMO), Montréal (CA), pp43-57, 2008
Ontology alignment makes explicit the semantic relations between the entities of two ontologies to be confronted. The tool of choice for alignment is a similarity measure over pairs of entities. Some well-performing alignment methods make the similarity of a pair depend on that of the neighbouring pairs. The circularity of the resulting definitions is handled by the iterative computation of a fixed point. We propose a unifying framework, called structure-driven alignment, which makes it possible to describe these methods in spite of their technical differences. It combines graph matching and matrix computation. We present its application to the re-implementation of the OLA algorithm, named OLA2.
Jérôme Euzenat, Quelques pistes pour une distance entre ontologies, in: Marie-Aude Aufaure, Omar Boussaid, Pascale Kuntz (éds), Actes 1er atelier EGC 2008 sur similarité sémantique, Sophia-Antipolis (FR), pp51-66, 2008
There are several reasons why it is useful to measure a distance between ontologies. In particular, it is important to know quickly whether two ontologies are close or remote in order to decide whether it is worth aligning them. In this perspective, a distance between ontologies must be quickly computable. We present the constraints bearing on such measures and explore various ways of establishing such distances. Measures may be based on the ontologies themselves, in particular on their terminological, structural, extensional or semantic characteristics; they may also be based on prior alignments, in particular on the existence or quality of such alignments. As can be expected, there is no distance possessing all the desired qualities, but rather a battery of techniques worth experimenting with.
Jérôme Euzenat, François Scharffe, Axel Polleres, Processing ontology alignments with SPARQL (Position paper), in: Proc. IEEE international workshop on Ontology alignment and visualization (OAaV), Barcelona (ES), pp913-917, 2008
Solving problems raised by heterogeneous ontologies can be achieved by matching the ontologies and processing the resulting alignments. This is typical of data mediation in which the data must be translated from one knowledge source to another. We propose to solve the data translation problem, i.e. the processing part, using the SPARQL query language. Indeed, such a language is particularly adequate for extracting data from one ontology and, through its CONSTRUCT statement, for generating new data. We present examples of such transformations, but we also present a set of example correspondences illustrating the needs for particular representation constructs, such as aggregates, value-generating built-in functions and paths, which are missing from SPARQL. Hence, we advocate the use of two SPARQL extensions providing these missing features.
ontology alignment, semantic web, SPARQL, alignment grounding, alignment language, mapping language
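To make the data-translation idea of the entry above concrete, here is a minimal sketch using Python and the rdflib library (the ex1:/ex2: vocabularies and the correspondence are hypothetical, not taken from the paper):

    import rdflib

    # A toy source graph described with one ontology (ex1:).
    src = rdflib.Graph()
    src.parse(data="""
        @prefix ex1: <http://example.org/onto1#> .
        ex1:alice a ex1:Person ; ex1:familyName "Doe" .
    """, format="turtle")

    # A CONSTRUCT query acting as one correspondence: it extracts data
    # expressed with ex1: terms and regenerates it under ex2: terms.
    q = """
        PREFIX ex1: <http://example.org/onto1#>
        PREFIX ex2: <http://example.org/onto2#>
        CONSTRUCT { ?x a ex2:Human ; ex2:surname ?name . }
        WHERE     { ?x a ex1:Person ; ex1:familyName ?name . }
    """

    # Iterating over a CONSTRUCT result yields the generated triples.
    translated = rdflib.Graph()
    for triple in src.query(q):
        translated.add(triple)
    print(translated.serialize(format="turtle"))

Correspondences needing aggregates, value-generating built-in functions or paths, as discussed in the paper, cannot be written this way in plain SPARQL 1.0; that is precisely what motivates the advocated extensions.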
Jérôme Euzenat, Algebras of ontology alignment relations, in: Proc. 7th conference on international semantic web conference (ISWC), Karlsruhe (DE), (Amit Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy Finin, Krishnaprasad Thirunarayan (eds), The semantic web, Lecture notes in computer science 5318, 2008), pp387-402, 2008
Correspondences in ontology alignments relate two ontology entities with a relation. Typical relations are equivalence or subsumption. However, different systems may need different kinds of relations. We propose to use the concepts of algebras of relations in order to express the relations between ontology entities in a general way. We show the benefits of doing so for expressing disjunctive relations, merging alignments in different ways, amalgamating alignments with relations of different granularity, and composing alignments.
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Adaptation spatio-temporelle et hypermédia de documents multimédia, in: Actes atelier sur représentation et raisonnement sur le temps et l'espace (RTE), Montpellier (FR), pp1-13, 2008
Semantic adaptation, SMIL multimedia documents
François Scharffe, Jérôme Euzenat, Dieter Fensel, Towards design patterns for ontology alignment, in: Proc. 24th ACM symposium on applied computing (SAC), Fortaleza (BR), pp2321-2325, 2008
Aligning ontologies is a crucial and tedious task. Matching algorithms and tools provide support to facilitate the task of the user in defining correspondences between ontology entities. However, automatic matching is currently limited to the detection of simple one-to-one correspondences to be further refined by the user. We introduce in this paper correspondence patterns as a tool to assist the design of ontology alignments. Based on existing research on patterns in the fields of software and ontology engineering, we define a pattern template and use it to develop a correspondence pattern library. This library is published in RDF following the Alignment Ontology vocabulary.
Pavel Shvaiko, Jérôme Euzenat, Ten challenges for ontology matching, in: Proc. 7th international conference on ontologies, databases, and applications of semantics (ODBASE), Monterrey (MX), (Robert Meersman, Zahir Tari (eds), On the Move to Meaningful Internet Systems: OTM 2008, Lecture notes in computer science 5332, 2008), pp1163-1181, 2008
This paper aims at analyzing the key trends and challenges of the ontology matching field. The main motivation behind this work is the fact that, despite the many component matching solutions that have been developed so far, there is no integrated solution that is a clear success, which is robust enough to be the basis for future development, and which is usable by non-expert users. In this paper we first provide the basics of ontology matching with the help of examples. Then, we present general trends of the field and discuss ten challenges for ontology matching, thereby aiming to direct research into the critical path and to facilitate progress of the field.
Camila Bezerra, Frederico Freitas, Jérôme Euzenat, Antoine Zimmermann, An approach for ontology modularization, in: Proc. Brazil/INRIA colloquium on computation: cooperations, advances and challenges (Colibri), Bento Gonçalves (BR), pp184-189, 2009
Ontology modularization could help overcome the problem of defining a fragment of an existing ontology to be reused, in order to enable ontology developers to include only those concepts and relations that are relevant for the application they are modeling an ontology for. This paper presents a concrete tool that incorporates an approach to ontology modularization that inherits some of the main principles from object-oriented software engineering, namely encapsulation and information hiding. What motivated us to pursue that direction is the fact that most approaches to the problem focus on linking ontologies rather than building modules that encapsulate foreign parts of ontologies (or other modules) and can thus be managed more easily.
Mathieu d'Aquin, Jérôme Euzenat, Chan Le Duc, Holger Lewen, Sharing and reusing aligned ontologies with Cupboard, in: Proc. K-Cap poster session, Redondo Beach (CA US), pp179-180, 2009
This demo presents the Cupboard online system for sharing and reusing ontologies linked together by alignments and attached to rich metadata and reviews.
Jérôme Euzenat, Alfio Ferrara, Laura Hollink, Antoine Isaac, Cliff Joslyn, Véronique Malaisé, Christian Meilicke, Andriy Nikolov, Juan Pane, Marta Sabou, François Scharffe, Pavel Shvaiko, Vassilis Spiliopoulos, Heiner Stuckenschmidt, Ondřej Sváb-Zamazal, Vojtech Svátek, Cássia Trojahn dos Santos, George Vouros, Shenghui Wang, Results of the Ontology Alignment Evaluation Initiative 2009, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt, Natalya Noy, Arnon Rosenthal (eds), Proc. 4th ISWC workshop on ontology matching (OM), Chantilly (VA US), pp73-126, 2009
Ontology matching consists of finding correspondences between ontology entities. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. Test cases can use ontologies of different nature (from expressive OWL ontologies to simple directories) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2009 builds over previous campaigns by having 5 tracks with 11 test cases followed by 16 participants. This paper is an overall presentation of the OAEI 2009 campaign.
Sébastien Laborie, Jérôme Euzenat, Nabil Layaïda, Semantic multimedia document adaptation with functional annotations, in: Proc. 4th international workshop on Semantic Media Adaptation and Personalization (SMAP2009), San Sebastián (ES), pp44-49, 2009
The diversity of presentation contexts (such as mobile phones, PDAs) for multimedia documents requires the adaptation of document specifications. In an earlier work, we have proposed a semantic adaptation framework for multimedia documents. This framework captures the semantics of the document composition and transforms the relations between multimedia objects according to adaptation constraints. In this paper, we show that capturing only the document composition for adaptation is unsatisfactory because it leads to a limited form of adapted solutions. Hence, we propose to guide adaptation with functional annotations, i.e., annotations related to multimedia objects which express a function in the document. In order to validate this framework, we propose to use RDF descriptions from SMIL documents and adapt such documents with our interactive adaptation prototype.
Jérôme David, Jérôme Euzenat, Ondřej Sváb-Zamazal, Ontology similarity in the alignment space, in: Proc. 9th conference on international semantic web conference (ISWC), Shanghai (CN), (Peter Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Pan, Ian Horrocks, Birte Glimm (eds), The semantic web, Lecture notes in computer science 6496, 2010), pp129-144, 2010
Measuring similarity between ontologies can be very useful for different purposes, e.g., finding an ontology to replace another, or finding an ontology in which queries can be translated. Classical measures compute similarities or distances in an ontology space by directly comparing the content of ontologies. We introduce a new family of ontology measures computed in an alignment space: they evaluate the similarity between two ontologies with regard to the available alignments between them. We define two sets of such measures relying on the existence of a path between ontologies or on the ontology entities that are preserved by the alignments. The former accounts for known relations between ontologies, while the latter reflects the possibility to perform actions such as instance import or query translation. All these measures have been implemented in the OntoSim library, which has been used in experiments showing that entity-preserving measures are comparable to the best ontology space measures. Moreover, they exhibited a robust behaviour with respect to the alteration of the alignment space.
Jérôme David, Jérôme Euzenat, Linked data from your pocket: The Android RDFContentProvider, in: Proc. 9th demonstration track on international semantic web conference (ISWC), Shanghai (CN), pp129-132, 2010
Jérôme Euzenat, Philipp Cimiano, John Domingue, Siegfried Handschuh, Hannes Werthner, Personal infospheres, in: Proc. Dagstuhl seminar on Semantic web reflections and future directions, Wadern (DE), (John Domingue, Dieter Fensel, James Hendler, Rudi Studer (eds), Semantic web reflections and future directions, Dagstuhl seminar proceedings (09271), 2010), pp12-17, 2010
Jérôme Euzenat, Alfio Ferrara, Christian Meilicke, Andriy Nikolov, Juan Pane, François Scharffe, Pavel Shvaiko, Heiner Stuckenschmidt, Ondřej Sváb-Zamazal, Vojtech Svátek, Cássia Trojahn dos Santos, Results of the Ontology Alignment Evaluation Initiative 2010, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt, Ming Mao, Isabel Cruz (eds), Proc. 5th ISWC workshop on ontology matching (OM), Shanghai (CN), pp85-117, 2010
Ontology matching consists of finding correspondences between entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. Test cases can use ontologies of different nature (from simple directories to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2010 builds over previous campaigns by having 4 tracks with 6 test cases followed by 15 participants. This year, the OAEI campaign introduces a new evaluation modality in association with the SEALS project. A subset of OAEI test cases is included in this new modality which provides more automation to the evaluation and more direct feedback to the participants. This paper is an overall presentation of the OAEI 2010 campaign.
Jérôme Euzenat, Christian Meilicke, Heiner Stuckenschmidt, Cássia Trojahn dos Santos, A web-based evaluation service for ontology matching, in: Proc. 9th demonstration track on international semantic web conference (ISWC), Shanghai (CN), pp93-96, 2010
Evaluation of semantic web technologies at large scale, including ontology matching, is an important topic of semantic web research. This paper presents a web-based evaluation service for automatically executing the evaluation of ontology matching systems. This service is based on the use of a web service interface wrapping the functionality of a matching tool to be evaluated and allows developers to launch evaluations of their tool at any time on their own. Furthermore, the service can be used to visualise and manipulate the evaluation results. The approach allows the execution of the tool on the machine of the tool developer without the need for a runtime environment.
Manfred Hauswirth, Jérôme Euzenat, Owen Friel, Keith Griffin, Pat Hession, Brendan Jennings, Tudor Groza, Siegfried Handschuh, Ivana Podnar Zarko, Axel Polleres, Antoine Zimmermann, Towards consolidated presence, in: Proc. 6th International conference on collaborative computing: networking, applications and worksharing (CollaborateCom), Chicago (IL US), pp1-10, 2010
Presence management, i.e., the ability to automatically identify the status and availability of communication partners, is becoming an invaluable tool for collaboration in enterprise contexts. In this paper, we argue for efficient presence management by means of a holistic view of both physical context and virtual presence in online communication channels. We sketch the components for enabling presence as a service integrating both online information as well as physical sensors, discussing benefits, possible applications on top, and challenges of establishing such a service.
Nuno Lopes, Axel Polleres, Alexandre Passant, Stefan Decker, Stefan Bischof, Diego Berrueta, Antonio Campos, Stéphane Corlosquet, Jérôme Euzenat, Orri Erling, Kingsley Idehen, Jacek Kopecky, Thomas Krennwallner, Davide Palmisano, Janne Saarela, Michal Zaremba, RDF and XML: Towards a unified query layer, in: Proc. W3C workshop on RDF next steps, Stanford (CA US), 2010
One of the requirements of current Semantic Web applications is to deal with heterogeneous data. The Resource Description Framework (RDF) is the W3C recommended standard for data representation, yet data represented and stored using the Extensible Markup Language (XML) is almost ubiquitous and remains the standard for data exchange. While RDF has a standard XML representation, XML Query languages are of limited use for transformations between natively stored RDF data and XML. Being able to work with both XML and RDF data using a common framework would be a great advantage and eliminate unnecessary intermediate steps that are currently used when handling both formats.
Giuseppe Pirrò, Jérôme Euzenat, A semantic similarity framework exploiting multiple parts-of-speech, in: Proc. 9th international conference on ontologies, databases, and applications of semantics (ODBASE), Heraklion (GR), (Robert Meersman, Tharam Dillon, Pilar Herrero (eds), On the move to meaningful internet systems, Lecture notes in computer science 6427, 2010), pp1118-1125, 2010
Semantic similarity aims at establishing resemblance by interpreting the meaning of the objects being compared. The Semantic Web can benefit from semantic similarity in several ways: ontology alignment and merging, automatic ontology construction, semantic search, to cite a few. Current approaches mostly focus on computing similarity between nouns. The aim of this paper is to define a framework to compute semantic similarity even for other grammar categories such as verbs, adverbs and adjectives. The framework has been implemented on top of WordNet. Extensive experiments confirmed the suitability of this approach in the task of solving English tests.
Giuseppe Pirrò, Jérôme Euzenat, A feature and information theoretic framework for semantic similarity and relatedness, in: Proc. 9th conference on international semantic web conference (ISWC), Shanghai (CN), (Peter Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Pan, Ian Horrocks, Birte Glimm (eds), The semantic web, Lecture notes in computer science 6496, 2010), pp615-630, 2010
Semantic similarity and relatedness measures between ontology concepts are useful in many research areas. While similarity only considers subsumption relations to assess how two objects are alike, relatedness takes into account a broader range of relations (e.g., part-of). In this paper, we present a framework, which maps the feature-based model of similarity into the information theoretic domain. A new way of computing IC values directly from an ontology structure is also introduced. This new model, called Extended Information Content (eIC), takes into account the whole set of semantic relations defined in an ontology. The proposed framework makes it possible to rewrite existing similarity measures, which can then be augmented to compute semantic relatedness. Upon this framework, a new measure called FaITH (Feature and Information THeoretic) has been devised. Extensive experimental evaluations confirmed the suitability of the framework.
François Scharffe, Jérôme Euzenat, Méthodes et outils pour lier le web des données, in: Actes 17e conférence AFIA-AFRIF sur reconnaissance des formes et intelligence artificielle (RFIA), Caen (FR), pp678-685, 2010
The web of data consists of publishing data on the web in such a way that they can be interpreted and connected together. It is thus vital to establish links between these data, both for the web of data and for the semantic web that it contributes to feed. We propose a general framework encompassing the various techniques used for establishing these links and show how they fit into it. We then propose an architecture for bringing together the various data interlinking systems and making them collaborate with the systems developed for ontology matching, which shares many commonalities with link discovery.
Semantic web, Data interlinking, Instance matching, Ontology alignment, Web of data
Cássia Trojahn dos Santos, Jérôme Euzenat, Consistency-driven argumentation for alignment agreement, in: Pavel Shvaiko, Jérôme Euzenat, Fausto Giunchiglia, Heiner Stuckenschmidt, Ming Mao, Isabel Cruz (eds), Proc. 5th ISWC workshop on ontology matching (OM), Shanghai (CN), pp37-48, 2010
Ontology alignment agreement aims at overcoming the problem that arises when different parties need to conciliate their conflicting views on ontology alignments. Argumentation has been applied as a way for supporting the creation and exchange of arguments, followed by the reasoning on their acceptability. Here we use arguments as positions that support or reject correspondences. Applying only argumentation to select correspondences may lead to alignments which relate ontologies in an inconsistent way. In order to address this problem, we define maximal consistent sub-consolidations which generate consistent and argumentation-grounded alignments. We propose a strategy for computing them involving both argumentation and logical inconsistency detection. It removes correspondences that introduce inconsistencies into the resulting alignment and allows for maintaining consistency within the argumentation system. We present experiments comparing the different approaches. The (partial) experiments suggest that applying consistency checking and argumentation independently significantly improves results, while using them together does not bring much more. The features of consistency checking and argumentation leading to this result are analysed.
Cássia Trojahn dos Santos, Christian Meilicke, Jérôme Euzenat, Heiner Stuckenschmidt, Automating OAEI Campaigns (First Report), in: Asunción Gómez Pérez, Fabio Ciravegna, Frank van Harmelen, Jeff Heflin (eds), Proc. 1st ISWC international workshop on evaluation of semantic technologies (iWEST), Shanghai (CN), 2010
This paper reports a first effort towards integrating OAEI and SEALS evaluation campaigns. The SEALS project aims at providing standardized resources (software components, data sets, etc.) for automatically executing evaluations of typical semantic web tools, including ontology matching tools. A first version of the software infrastructure is based on the use of a web service interface wrapping the functionality of a matching tool to be evaluated. In this setting, the evaluation results can be visualized and manipulated immediately in a direct feedback cycle. We describe how parts of the OAEI 2010 evaluation campaign have been integrated into this software infrastructure. In particular, we discuss technical and organizational aspects related to the use of the new technology for both participants and organizers of the OAEI.
ontology matching, evaluation workflows, evaluation criteria, automating evaluation
Manuel Atencia, Jérôme Euzenat, Giuseppe Pirrò, Marie-Christine Rousset, Alignment-based trust for resource finding in semantic P2P networks, in: Proc. 10th conference on International semantic web conference (ISWC), Bonn (DE), (Lora Aroyo, Christopher Welty, Harith Alani, Jamie Taylor, Abraham Bernstein, Lalana Kagal, Natalya Noy, Eva Blomqvist (eds), The semantic web (Proc. 10th conference on International semantic web conference (ISWC)), Lecture notes in computer science 7031, 2011), pp51-66, 2011
In a semantic P2P network, peers use separate ontologies and rely on alignments between their ontologies for translating queries. Nonetheless, alignments may be limited (unsound or incomplete) and generate flawed translations, leading to unsatisfactory answers. In this paper we present a trust mechanism that can assist peers in selecting those in the network that are better suited to answer their queries. The trust that a peer has towards another peer depends on a specific query and represents the probability that the latter peer will provide a satisfactory answer. We have implemented the trust technique and conducted an evaluation. Experimental results showed that trust values converge as more queries are sent and answers received. Furthermore, the use of trust brings a gain in query-answering performance.
semantic alignment, trust, probabilistic populated ontology
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, PSPARQL query containment, in: Proc. 13th International symposium on database programming languages (DBPL), Seattle (WA US), 2011
Querying the semantic web is mainly done through SPARQL. This language has been studied from different perspectives such as optimization and extension. One of its extensions, PSPARQL (Path SPARQL) provides queries with paths of arbitrary length. We study the static analysis of queries written in this language, in particular, containment of queries: determining whether, for any graph, the answers to a query are contained in those of another query. Our approach consists in encoding RDF graphs as transition systems and queries as mu-calculus formulas and then reducing the containment problem to testing satisfiability in the logic.
Query containment, PSPARQL, Semantic web, RDF, Regular path queries
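Schematically, the reduction described above works as follows, where \(\mathcal{A}(\cdot)\) stands for the encoding of a query as a mu-calculus formula over the transition-system rendering of RDF graphs (notation assumed here for illustration):

\[
q_1 \sqsubseteq q_2 \quad\text{iff}\quad \mathcal{A}(q_1)\wedge\neg\,\mathcal{A}(q_2)\ \text{is unsatisfiable,}
\]

so that an off-the-shelf satisfiability solver for the mu-calculus decides containment.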
Jérôme Euzenat, Semantic technologies and ontology matching for interoperability inside and across buildings, in: Proc. 2nd CIB workshop on eeBuildings data models, Sophia-Antipolis (FR), pp22-34, 2011
There are many experiments with buildings that communicate information to and react to instructions from inhabiting systems. Fortunately, the life of people does not stop at the door of those buildings. It is thus very important that from one building to another, from a building to its outside, and from a building considered as a whole to specific rooms, continuity in the perceived information and potential actions be ensured. One way to achieve this would be by standardising representation vocabularies that any initiative should follow. But, at such an early stage, this would be an obstacle to innovation, because experimenters do not know yet what is needed in their context. We advocate that semantic technologies, in addition to being already recognised as a key component in communicating building platforms, are adequate tools for ensuring interoperability between building settings. For that purpose, we first present how these technologies (RDF, OWL, SPARQL, Alignment) can be used within ambient intelligent applications. Then, we review several solutions for ensuring interoperability between heterogeneous building settings, in particular through online embedded matching, alignment servers or collaborative matching. We describe the state of the art in ontology matching and how it can be used for providing interoperability between semantic descriptions.
Ontology matching, Ontology alignment, Alignment server, Context-based matching, Content-based matching, Context representation, Query mediation
Jérôme Euzenat, Alfio Ferrara, Willem Robert van Hage, Laura Hollink, Christian Meilicke, Andriy Nikolov, François Scharffe, Pavel Shvaiko, Heiner Stuckenschmidt, Ondřej Sváb-Zamazal, Cássia Trojahn dos Santos, Results of the Ontology Alignment Evaluation Initiative 2011, in: Pavel Shvaiko, Isabel Cruz, Jérôme Euzenat, Tom Heath, Ming Mao, Christoph Quix (eds), Proc. 6th ISWC workshop on ontology matching (OM), Bonn (DE), pp85-110, 2011
Ontology matching consists of finding correspondences between entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. Test cases can use ontologies of different nature (from simple directories to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2011 builds over previous campaigns by having 4 tracks with 6 test cases followed by 18 participants. Since 2010, the campaign introduces a new evaluation modality in association with the SEALS project. A subset of OAEI test cases is included in this new modality which provides more automation to the evaluation and more direct feedback to the participants. This paper is an overall presentation of the OAEI 2011 campaign.
Maria Roşoiu, Cássia Trojahn dos Santos, Jérôme Euzenat, Ontology matching benchmarks: generation and evaluation, in: Pavel Shvaiko, Isabel Cruz, Jérôme Euzenat, Tom Heath, Ming Mao, Christoph Quix (eds), Proc. 6th ISWC workshop on ontology matching (OM), Bonn (DE), pp73-84, 2011
The OAEI Benchmark data set has been used as a main reference to evaluate and compare matching systems. It requires matching an ontology with systematically modified versions of itself. However, it has two main drawbacks: it has not varied since 2004 and it has become a relatively easy task for matchers. In this paper, we present the design of a modular test generator that overcomes these drawbacks. Using this generator, we have reproduced Benchmark both with the original seed ontology and with other ontologies. Evaluating different matchers on these generated tests, we have observed that (a) the difficulties encountered by a matcher at a test are preserved across seed ontologies, (b) contrary to our expectations, we found no systematic positive bias towards the original data set which has been available for developers to test their systems, and (c) the generated data sets have consistent results across matchers and across seed ontologies. However, the discriminant power of the generated tests is still too low and more tests would be necessary to draw definitive conclusions.
Ontology matching, Matching evaluation, Test generation, Semantic web
François Scharffe, Jérôme Euzenat, Linked data meets ontology matching: enhancing data linking through ontology alignments, in: Proc. 3rd international conference on Knowledge engineering and ontology development (KEOD), Paris (FR), pp279-284, 2011
The Web of data consists of publishing data on the Web in such a way that they can be connected together and interpreted. It is thus critical to establish links between these data, both for the Web of data and for the Semantic Web that it contributes to feed. We consider here the various techniques which have been developed for that purpose and analyze their commonalities and differences. This provides a general framework that the diverse data linking systems instantiate. From this framework we consider the relation between data linking and ontology matching activities. Although they can be considered similar at a certain level (they both relate formal entities), they serve different purposes: one acts at the schema level and the other at the instance level. However, they would find a mutual benefit in collaborating. We thus present a scheme under which it is possible for data linking tools to take advantage of ontology alignments. We present the features of expressive alignment languages that allow linking specifications to reuse ontology alignments in a natural way.
Semantic web, Linked data, Data linking, Ontology alignment, Ontology matching, Entity reconciliation, Object consolidation
José Luis Aguirre, Bernardo Cuenca Grau, Kai Eckert, Jérôme Euzenat, Alfio Ferrara, Willem Robert van Hage, Laura Hollink, Ernesto Jiménez-Ruiz, Christian Meilicke, Andriy Nikolov, Dominique Ritze, François Scharffe, Pavel Shvaiko, Ondřej Sváb-Zamazal, Cássia Trojahn dos Santos, Benjamin Zapilko, Results of the Ontology Alignment Evaluation Initiative 2012, in: Pavel Shvaiko, Jérôme Euzenat, Anastasios Kementsietsidis, Ming Mao, Natalya Noy, Heiner Stuckenschmidt (eds), Proc. 7th ISWC workshop on ontology matching (OM), Boston (MA US), pp73-115, 2012
Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple thesauri to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI 2012 offered 7 tracks with 9 test cases followed by 21 participants. Since 2010, the campaign has been using a new evaluation modality which provides more automation to the evaluation. This paper is an overall presentation of the OAEI 2012 campaign.
Manuel Atencia, Alexander Borgida, Jérôme Euzenat, Chiara Ghidini, Luciano Serafini, A formal semantics for weighted ontology mappings, in: Proc. 11th conference on International semantic web conference (ISWC), Boston (MA US), (Philippe Cudré-Mauroux, Jeff Heflin, Evren Sirin, Tania Tudorache, Jérôme Euzenat, Manfred Hauswirth, Josiane Xavier Parreira, James Hendler, Guus Schreiber, Abraham Bernstein, Eva Blomqvist (eds), The semantic web (Proc. 11th conference on International semantic web conference (ISWC)), Lecture notes in computer science 7649, 2012), pp17-33, 2012
Ontology mappings are often assigned a weight or confidence factor by matchers. Nonetheless, few semantic accounts have been given so far for such weights. This paper presents a formal semantics for weighted mappings between different ontologies. It is based on a classificational interpretation of mappings: if O1 and O2 are two ontologies used to classify a common set X, then mappings between O1 and O2 are interpreted to encode how elements of X classified in the concepts of O1 are re-classified in the concepts of O2, and weights are interpreted to measure how precise and complete re-classifications are. This semantics is justifiable by extensional practice of ontology matching. It is a conservative extension of a semantics of crisp mappings. The paper also includes properties that relate mapping entailment with description logic constructors.
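On this classificational reading, a weight can be given a precision/recall-like interpretation. As a sketch (notation assumed here for illustration, not the paper's exact definitions): if \(C^X\) and \(D^X\) denote the elements of the common set \(X\) classified under \(C\in O_1\) and \(D\in O_2\), a weighted subsumption mapping \(\langle C,\sqsubseteq,D,w\rangle\) may be read as the constraint

\[
\frac{|C^X \cap D^X|}{|C^X|} \geq w,
\]

measuring how completely elements of \(C\) are re-classified under \(D\); dividing by \(|D^X|\) instead yields the precision-oriented reading.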
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, SPARQL query containment under RDFS entailment regime, in: Proc. 6th International joint conference on automated reasoning (IJCAR), Manchester (UK), (Bernhard Gramlich, Dale Miller, Uli Sattler (eds), Proc. 6th International joint conference on automated reasoning (IJCAR), Lecture notes in computer science 7364, 2012), pp134-148, 2012
The problem of SPARQL query containment is defined as determining if the result of one query is included in the result of another one for any RDF graph. Query containment is important in many areas, including information integration, query optimization, and reasoning about Entity-Relationship diagrams. We encode this problem into an expressive logic called the mu-calculus, where RDF graphs become transition systems, and queries and schema axioms become formulas. Thus, the containment problem is reduced to formula satisfiability. Beyond the logic's expressive power, satisfiability solvers are available for it. Hence, this study makes it possible to exploit these advantages.
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, SPARQL query containment under SHI axioms, in: Proc. 26th American national conference on artificial intelligence (AAAI), Toronto (ONT CA), pp10-16, 2012
SPARQL query containment under schema axioms is the problem of determining whether, for any RDF graph satisfying a given set of schema axioms, the answers to a query are contained in the answers of another query. This problem has major applications for verification and optimization of queries. In order to solve it, we rely on the mu-calculus. Firstly, we provide a mapping from RDF graphs into transition systems. Secondly, SPARQL queries and RDFS and SHI axioms are encoded into mu-calculus formulas. This allows us to reduce query containment and equivalence to satisfiability in the mu-calculus. Finally, we prove a double exponential upper bound for containment under SHI schema axioms.
Jérôme David, Jérôme Euzenat, Maria Roşoiu, Linked data from your pocket, in: Christophe Guéret, Stefan Schlobach, Florent Pigout (eds), Proc. 1st ESWC workshop on downscaling the semantic web, Hersonissos (GR), pp6-13, 2012
The paper describes a lightweight general-purpose RDF framework for Android. It makes it possible to deal uniformly with RDF, whether it comes from the web or from applications inside the device. It extends the Android content provider framework and introduces a transparent URI dereferencing scheme allowing device content to be exposed as linked data.
Jérôme David, Jérôme Euzenat, Jason Jung, Experimenting with ontology distances in semantic social networks: methodological remarks, in: Proc. 2nd IEEE international conference on systems, man, and cybernetics (SMC), Seoul (KR), pp2909-2914, 2012
Semantic social networks are social networks using ontologies for characterising resources shared within the network. It has been postulated that, in such networks, it is possible to discover social affinities between network members through measuring the similarity between the ontologies, or parts of the ontologies, they use. Using similar ontologies should reflect the cognitive disposition of the subjects. The main concern of this paper is the methodological aspect of experimenting in order to validate or invalidate such a hypothesis. Indeed, given the current lack of broad semantic social networks, it is difficult to rely on available data, and experiments have to be designed from scratch. For that purpose, we first consider experimental settings that could be used and raise practical and methodological issues faced when analysing their results. We then describe a full experiment carried out according to some of the identified modalities and report the obtained results. The results obtained seem to invalidate the proposed hypothesis. We discuss why this may be so.
Semantic social networks, Ontology distance, Ontology similarity, Personal ontologies, Experimental methodology
Jérôme Euzenat, A modest proposal for data interlinking evaluation, in: Pavel Shvaiko, Jérôme Euzenat, Anastasios Kementsietsidis, Ming Mao, Natalya Noy, Heiner Stuckenschmidt (eds), Proc. 7th ISWC workshop on ontology matching (OM), Boston (MA US), pp234-235, 2012
Data interlinking is a very important topic nowadays. It is sufficiently similar to ontology matching that comparable evaluation can be undertaken. However, it has enough differences that specific evaluations may be designed. We discuss such variations and their design.
Data interlinking, Evaluation, Benchmark, Blocking, Instance matching
François Scharffe, Ghislain Atemezing, Raphaël Troncy, Fabien Gandon, Serena Villata, Bénédicte Bucher, Fayçal Hamdi, Laurent Bihanic, Gabriel Képéklian, Franck Cotton, Jérôme Euzenat, Zhengjie Fan, Pierre-Yves Vandenbussche, Bernard Vatant, Enabling linked data publication with the Datalift platform, in: Proc. AAAI workshop on semantic cities, Toronto (ONT CA), 2012
As many cities around the world provide access to raw public data along the Open Data movement, many questions arise concerning the accessibility of these data. Various data formats, duplicate identifiers, heterogeneous metadata schema descriptions, and diverse means to access or query the data exist. These factors make it difficult for consumers to reuse and integrate data sources to develop innovative applications. The Semantic Web provides a global solution to these problems by providing languages and protocols for describing and accessing datasets. This paper presents Datalift, a framework and a platform helping to lift raw data sources to semantic interlinked data sources.
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, Evaluating and benchmarking SPARQL query containment solvers, in: Proc. 12th conference on International semantic web conference (ISWC), Sydney (NSW AU), (Harith Alani, Lalana Kagal, Achile Fokoue, Paul Groth, Chris Biemann, Josiane Xavier Parreira, Lora Aroyo, Natalya Noy, Christopher Welty, Krzysztof Janowicz (eds), The semantic web (Proc. 12th conference on International semantic web conference (ISWC)), Lecture notes in computer science 8219, 2013), pp408-423, 2013
Query containment is the problem of deciding if the answers to a query are included in those of another query for any queried database. This problem is very important for query optimization purposes. In the SPARQL context, it can be equally useful. This problem has recently been investigated theoretically and some query containment solvers are available. Yet, there were no benchmarks to compare these systems and foster their improvement. In order to experimentally assess implementation strengths and limitations, we provide a first SPARQL containment test benchmark. It has been designed with respect to both the capabilities of existing solvers and the study of typical queries. Some solvers support optional constructs and cycles, while other solvers support projection, union of conjunctive queries and RDF Schemas. No solver currently supports all these features or OWL entailment regimes. The study of query demographics on DBPedia logs shows that the vast majority of queries are acyclic and a significant part of them uses UNION or projection. We thus test available solvers on their domain of applicability on three different benchmark suites. These experiments show that (i) tested solutions are overall functionally correct, (ii) in spite of its complexity, SPARQL query containment is practicable for acyclic queries, (iii) state-of-the-art solvers are at an early stage both in terms of capability and implementation.
Bernardo Cuenca Grau, Zlatan Dragisic, Kai Eckert, Jérôme Euzenat, Alfio Ferrara, Roger Granada, Valentina Ivanova, Ernesto Jiménez-Ruiz, Andreas Oskar Kempf, Patrick Lambrix, Andriy Nikolov, Heiko Paulheim, Dominique Ritze, François Scharffe, Pavel Shvaiko, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2013, in: Pavel Shvaiko, Jérôme Euzenat, Kavitha Srinivas, Ming Mao, Ernesto Jiménez-Ruiz (eds), Proc. 8th ISWC workshop on ontology matching (OM), Sydney (NSW AU), pp61-100, 2013
Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple thesauri to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation and consensus. OAEI 2013 offered 6 tracks with 8 test cases followed by 23 participants. Since 2010, the campaign has been using a new evaluation modality which provides more automation to the evaluation. This paper is an overall presentation of the OAEI 2013 campaign.
Jérôme Euzenat, Uncertainty in crowdsourcing ontology matching, in: Pavel Shvaiko, Jérôme Euzenat, Kavitha Srinivas, Ming Mao, Ernesto Jiménez-Ruiz (eds), Proc. 8th ISWC workshop on ontology matching (OM), Sydney (NSW AU), pp221-222, 2013
Manuel Atencia, Jérôme David, Jérôme Euzenat, Data interlinking through robust linkkey extraction, in: Torsten Schaub, Gerhard Friedrich, Barry O'Sullivan (eds), Proc. 21st european conference on artificial intelligence (ECAI), Praha (CZ), pp15-20, 2014
Links are important for the publication of RDF data on the web. Yet, establishing links between data sets is not an easy task. We develop an approach for that purpose which extracts weak linkkeys. Linkkeys extend the notion of a key to the case of different data sets. They are made of a set of pairs of properties belonging to two different classes. A weak linkkey holds between two classes if any resources having common values for all of these properties are the same resources. An algorithm is proposed to generate a small set of candidate linkkeys. Depending on whether some links, valid or invalid, are known, we define supervised and unsupervised measures for selecting the appropriate linkkeys (see the sketch below). The supervised measures approximate precision and recall, while the unsupervised measures are the ratio of pairs of entities a linkkey covers (coverage), and the ratio of entities from the same data set it identifies (discrimination). We have experimented with these techniques on two data sets, showing the accuracy and robustness of both approaches.
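A toy illustration of these notions in Python (hypothetical data and one plausible reading of the coverage and discrimination measures; this is not the paper's algorithm):

    # Two toy data sets: resource -> {property: value}.
    d1 = {"a1": {"name": "Asterix", "year": 1961},
          "a2": {"name": "Tintin",  "year": 1930}}
    d2 = {"b1": {"title": "Asterix", "published": 1961},
          "b2": {"title": "Tintin",  "published": 1930}}

    # A candidate linkkey: pairs of properties from the two classes.
    candidate = [("name", "title"), ("year", "published")]

    # Links generated by the candidate: pairs of resources sharing
    # values for every property pair of the key.
    links = [(r1, r2) for r1 in d1 for r2 in d2
             if all(d1[r1][p] == d2[r2][q] for p, q in candidate)]

    # Unsupervised quality estimates in the spirit of the abstract:
    # coverage: proportion of resources involved in at least one link;
    # discrimination: whether links identify resources unambiguously.
    covered = {r for r, _ in links} | {r for _, r in links}
    coverage = len(covered) / (len(d1) + len(d2))
    discrimination = len({r for r, _ in links}) / len(links) if links else 0.0
    print(links, coverage, discrimination)   # both measures are 1.0 here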
Manuel Atencia, Jérôme David, Jérôme Euzenat, What can FCA do for database linkkey extraction?, in: Proc. 3rd ECAI workshop on What can FCA do for Artificial Intelligence? (FCA4AI), Praha (CZ), pp85-92, 2014
Links between heterogeneous data sets may be found by using a generalisation of keys in databases, called linkkeys, which apply across data sets. This paper considers the question of characterising such keys in terms of formal concept analysis. This question is natural because the space of candidate keys is an ordered structure obtained by reduction of the space of keys and that of data set partitions. Classical techniques for generating functional dependencies in formal concept analysis indeed apply for finding candidate keys. They can be adapted in order to find database candidate linkkeys. The question of their extensibility to the RDF context would be worth investigating.
Zlatan Dragisic, Kai Eckert, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Roger Granada, Valentina Ivanova, Ernesto Jiménez-Ruiz, Andreas Oskar Kempf, Patrick Lambrix, Stefano Montanelli, Heiko Paulheim, Dominique Ritze, Pavel Shvaiko, Alessandro Solimando, Cássia Trojahn dos Santos, Ondřej Zamazal, Bernardo Cuenca Grau, Results of the Ontology Alignment Evaluation Initiative 2014, in: Pavel Shvaiko, Jérôme Euzenat, Ming Mao, Ernesto Jiménez-Ruiz, Juanzi Li, Axel-Cyrille Ngonga Ngomo (eds), Proc. 9th ISWC workshop on ontology matching (OM), Riva del Garda (IT), pp61-104, 2014
Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple thesauri to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation and consensus. OAEI 2014 offered 7 tracks with 9 test cases followed by 14 participants. Since 2010, the campaign has been using a new evaluation modality which provides more automation to the evaluation. This paper is an overall presentation of the OAEI 2014 campaign.
Jérôme Euzenat, First experiments in cultural alignment repair, in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersonissos (GR), pp3-14, 2014
Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. This approach has the advantage of not assuming that everything should be set correctly before trying to communicate and of being able to overcome failures. We test here the adaptation of this approach to alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we perform a series of experiments in which agents react to mistakes in alignments. The agents only know about their ontologies and alignments with others and they act in a fully decentralised way. We show that such a society of agents is able to converge towards successful communication through improving the objective correctness of alignments. The obtained results are on par with a baseline of a priori alignment repair algorithms.
The results of [20140305-NOOR] are not correct due to various software bugs and the generated reference alignments. New results are [20180308-NOOR] and [20170208b-NOOR]. Conclusions hold for the former; they are more favorable to agents for the latter.
Ontology alignment, alignment repair, cultural knowledge evolution, agent simulation, coherence, network of ontologies
Zhengjie Fan, Jérôme Euzenat, François Scharffe, Learning concise pattern for interlinking with extended version space, in: Dominik Ślęzak, Hung Son Nguyen, Marek Reformat, Eugene Santos (eds), Proc. 13th IEEE/WIC/ACM international conference on web intelligence (WI), Warsaw (PL), pp70-77, 2014
Many data sets on the web contain analogous data which represent the same resources in the world, so it is helpful to interlink different data sets for sharing information. However, finding correct links is very challenging because there are many instances to compare. In this paper, an interlinking method is proposed to interlink instances across different data sets. The input consists of class correspondences, property correspondences and a set of sample links that are assessed by users as either "positive" or "negative". We apply a machine learning method, Version Space, in order to construct a classifier, called an interlinking pattern, that can justify correct links and incorrect links for both data sets. We improve the learning method so that it resolves the no-conjunctive-pattern problem. We call it Extended Version Space. Experiments confirm that our method with only 1% of sample links already reaches a high F-measure (around 0.96-0.99). The F-measure quickly converges, improving by nearly 10% over other comparable approaches.
Tatiana Lesnikova, Jérôme David, Jérôme Euzenat, Interlinking English and Chinese RDF data sets using machine translation, in: Johanna Völker, Heiko Paulheim, Jens Lehmann, Harald Sack, Vojtech Svátek (eds), Proc. 3rd ESWC workshop on Knowledge discovery and data mining meets linked open data (Know@LOD), Hersonissos (GR), 2014
Data interlinking is a difficult task, particularly in a multilingual environment like the Web. In this paper, we evaluate the suitability of a machine translation approach to interlink RDF resources described in English and Chinese. We represent resources as text documents, and similarity between documents is taken as similarity between resources. Documents are represented as vectors using two weighting schemes, then cosine similarity is computed. The experiment demonstrates that TF*IDF with a minimal amount of preprocessing can already yield good results.
Semantic web, Cross-lingual link discovery, Cross-lingual instance linking, owl:sameAs
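The document-similarity pipeline described in the entry above is standard; a minimal sketch with Python and scikit-learn (toy strings standing in for the textualised RDF resources, with the machine-translation step assumed to have been applied already):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Textual renderings of RDF resources: the English side and the
    # machine-translated (originally Chinese) side.
    english = ["Beijing capital city of China",
               "Yangtze longest river in Asia"]
    translated = ["Beijing is the capital of China",
                  "the Yangtze is the longest river in Asia"]

    # Build one TF*IDF space over both sides, then compare by cosine.
    vectorizer = TfidfVectorizer().fit(english + translated)
    sim = cosine_similarity(vectorizer.transform(english),
                            vectorizer.transform(translated))

    # Link each English resource to its most similar counterpart.
    for i, row in enumerate(sim):
        print(english[i], "->", translated[row.argmax()])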
Armen Inants, Jérôme Euzenat, An algebra of qualitative taxonomical relations for ontology alignments, in: Proc. 14th conference on International semantic web conference (ISWC), Bethlehem (PA US), (Marcelo Arenas, Óscar Corcho, Elena Simperl, Markus Strohmaier, Mathieu d'Aquin, Kavitha Srinivas, Paul Groth, Michel Dumontier, Jeff Heflin, Krishnaprasad Thirunarayan, Steffen Staab (eds), The Semantic Web - ISWC 2015. 14th International Semantic Web Conference, Bethlehem, Pennsylvania, United States, October 11-15, 2015, Lecture notes in computer science 9366, 2015), pp253-268, 2015
Algebras of relations were shown useful in managing ontology alignments. They make it possible to aggregate alignments disjunctively or conjunctively and to propagate alignments within a network of ontologies. The previously considered algebra of relations contains taxonomical relations between classes. However, compositional inference using this algebra is sound only if we assume that classes which occur in alignments have nonempty extensions. Moreover, this algebra covers relations only between classes. Here we introduce a new algebra of relations, which, first, solves the limitation of the previous one, and second, incorporates all qualitative taxonomical relations that occur between individuals and concepts, including the relations "is a" and "is not". We prove that this algebra is coherent with respect to the simple semantics of alignments.
Relation algebra, Ontology alignment, Network of ontologies
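To give the flavour of the compositional reasoning discussed in the entry above, here is a toy fragment in Python (the base relations and the table are illustrative, not the paper's full algebra; note that the (>,<) entry is sound only under the nonempty-extension assumption that the paper's new algebra lifts):

    # Base taxonomical relations between classes:
    # '=' equivalent, '<' subsumed by, '>' subsumes,
    # '0' disjoint, 'o' proper overlap.
    ALL = {"=", "<", ">", "0", "o"}

    # Fragment of a composition table: COMP[(r, s)] is the set of
    # relations possible between A and C given A r B and B s C.
    COMP = {
        ("=", "="): {"="}, ("=", "<"): {"<"}, ("=", ">"): {">"},
        ("<", "<"): {"<"},            # subsumption is transitive
        ("<", ">"): ALL,              # nothing can be concluded
        (">", ">"): {">"},
        (">", "<"): {"=", "<", ">", "o"},  # A and C both contain B, so
                                           # (if B is nonempty) not disjoint
    }

    def compose(rels1, rels2):
        """Compose two disjunctive relations (sets of base relations)."""
        out = set()
        for r in rels1:
            for s in rels2:
                out |= COMP.get((r, s), ALL)
        return out

    # Propagating along a path O1 -{>}- O2 -{<}- O3:
    print(compose({">"}, {"<"}))   # {'=', '<', '>', 'o'}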
Tatiana Lesnikova, Jérôme David, Jérôme Euzenat, Interlinking English and Chinese RDF data using BabelNet, in: Pierre Genevès, Christine Vanoirbeek (eds), Proc. 15th ACM international symposium on Document engineering (DocEng), Lausanne (CH), pp39-42, 2015
Linked data technologies make it possible to publish and link structured data on the Web. Although RDF is not about text, many RDF data providers publish their data in their own language. Cross-lingual interlinking aims at discovering links between identical resources across knowledge bases in different languages. In this paper, we present a method for interlinking RDF resources described in English and Chinese using the BabelNet multilingual lexicon. Resources are represented as vectors of identifiers and then similarity between these resources is computed. The method achieves an F-measure of 88%. The results are also compared to a translation-based method.
Cross-lingual instance linking, Cross-lingual link discovery, owl:sameAs
Manel Achichi, Michelle Cheatham, Zlatan Dragisic, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Giorgos Flouris, Irini Fundulaki, Ian Harrow, Valentina Ivanova, Ernesto Jiménez-Ruiz, Elena Kuss, Patrick Lambrix, Henrik Leopold, Huanyu Li, Christian Meilicke, Stefano Montanelli, Catia Pesquita, Tzanina Saveta, Pavel Shvaiko, Andrea Splendiani, Heiner Stuckenschmidt, Konstantin Todorov, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2016, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Ryutaro Ichise (eds), Proc. 11th ISWC workshop on ontology matching (OM), Kobe (JP), pp73-129, 2016
Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple thesauri to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation, or consensus. OAEI 2016 offered 9 tracks with 22 test cases, and was attended by 21 participants. This paper is an overall presentation of the OAEI 2016 campaign.
Michelle Cheatham, Zlatan Dragisic, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Giorgos Flouris, Irini Fundulaki, Roger Granada, Valentina Ivanova, Ernesto Jiménez-Ruiz, Patrick Lambrix, Stefano Montanelli, Catia Pesquita, Tzanina Saveta, Pavel Shvaiko, Alessandro Solimando, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2015, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 10th ISWC workshop on ontology matching (OM), Bethlehem (PA US), pp60-115, 2016
Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple thesauri to expressive OWL ontologies) and use different modalities, e.g., blind evaluation, open evaluation and consensus. OAEI 2015 offered 8 tracks with 15 test cases followed by 22 participants. Since 2011, the campaign has been using a new evaluation modality which provides more automation to the evaluation. This paper is an overall presentation of the OAEI 2015 campaign.
Jérôme Euzenat, Extraction de clés de liage de données (résumé étendu), in: Actes 16e conférence internationale francophone sur extraction et gestion des connaissances (EGC), Reims (FR), (Bruno Crémilleux, Cyril de Runz (éds), Actes 16e conférence internationale francophone sur extraction et gestion des connaissances (EGC), Revue des nouvelles technologies de l'information E30, 2016), pp9-12, 2016
Large quantities of data are published on the web of data. Linking them consists of identifying the same resources in two data sets, allowing the joint exploitation of the published data. But link extraction is not an easy task. We have developed an approach which extracts link keys. Link keys extend the notion of key from relational algebra to several data sources. They are based on sets of pairs of properties which identify objects when they have the same values, or common values, for these properties. We present a way of automatically extracting candidate link keys from data. This operation can be expressed in formal concept analysis. The quality of candidate keys can be evaluated depending on the availability (supervised case) or not (unsupervised case) of a sample of links. The relevance and robustness of such keys are illustrated on a real-world example.
Maroua Gmati, Manuel Atencia, Jérôme Euzenat, Tableau extensions for reasoning with link keys, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Ryutaro Ichise (eds), Proc. 11th ISWC workshop on ontology matching (OM), Kobe (JP), pp37-48, 2016
Link keys allow for generating links across data sets expressed in different ontologies. But they can also be thought of as axioms in a description logic. As such, they can contribute to inferring ABox axioms, such as links, or terminological axioms and other link keys. Yet, no reasoning support exists for link keys. Here we extend the tableau method designed for ALC to take link keys into account. We show how this extension enables combining link keys with terminological reasoning, with and without ABox and TBox, and generating non-trivial link keys.
Link key, Tableau method, Description logics, Semantic web
Armen Inants, Manuel Atencia, Jérôme Euzenat, Algebraic calculi for weighted ontology alignments, in: Proc. 15th conference on International semantic web conference (ISWC), Kobe (JP), (Paul Groth, Elena Simperl, Alasdair Gray, Marta Sabou, Markus Krötzsch, Freddy Lécué, Fabian Flöck, Yolanda Gil (eds), The Semantic Web - ISWC 2016, Lecture notes in computer science 9981, 2016), pp360-375, 2016
Alignments between ontologies usually come with numerical attributes expressing the confidence of each correspondence. Semantics supporting such confidences must generalise the semantics of alignments without confidence. There exists a semantics which satisfies this but introduces a discontinuity between weighted and non-weighted interpretations. Moreover, it does not provide a calculus for reasoning with weighted ontology alignments. This paper introduces a calculus for such alignments. It is given by an infinite relation-type algebra, the elements of which are weighted taxonomic relations. In addition, it approximates the non-weighted case in a continuous manner.
Weighted ontology alignment, Algebraic reasoning, Qualitative calculi
Tatiana Lesnikova, Jérôme David, Jérôme Euzenat, Cross-lingual RDF thesauri interlinking, in: Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis (eds), Proc. 10th international conference on Language resources and evaluation (LREC), Portoroz (SI), pp2442-2449, 2016
Various lexical resources are being published in RDF. To enhance the usability of these resources, identical resources in different data sets should be linked. If lexical resources are described in different natural languages, then techniques to deal with multilinguality are required for interlinking. In this paper, we evaluate machine translation for interlinking concepts, i.e., generic entities named with a common noun or term. In our previous work, the evaluated method was applied to named entities. We conduct two experiments involving different thesauri in different languages. The first experiment involves concepts from the TheSoz multilingual thesaurus in three languages: English, French and German. The second experiment involves concepts from the EuroVoc and AGROVOC thesauri in English and Chinese respectively. Our results demonstrate that machine translation can be beneficial for cross-lingual thesauri interlinking independently of the dataset structure.
Cross-lingual data interlinking, owl:sameAs, Thesaurus alignment
Manel Achichi, Michelle Cheatham, Zlatan Dragisic, Jérôme Euzenat, Daniel Faria, Alfio Ferrara, Giorgos Flouris, Irini Fundulaki, Ian Harrow, Valentina Ivanova, Ernesto Jiménez-Ruiz, Kristian Kolthoff, Elena Kuss, Patrick Lambrix, Henrik Leopold, Huanyu Li, Christian Meilicke, Majid Mohammadi, Stefano Montanelli, Catia Pesquita, Tzanina Saveta, Pavel Shvaiko, Andrea Splendiani, Heiner Stuckenschmidt, Élodie Thiéblin, Konstantin Todorov, Cássia Trojahn dos Santos, Ondřej Zamazal, Results of the Ontology Alignment Evaluation Initiative 2017, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp61-113, 2017
Ontology matching consists of finding correspondences between semantically related entities of different ontologies. The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity (from simple thesauri to expressive OWL ontologies) and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2017 campaign offered 9 tracks with 23 test cases, and was attended by 21 participants. This paper is an overall presentation of that campaign.
Jomar da Silva, Fernanda Araujo Baião, Kate Revoredo, Jérôme Euzenat, Semantic interactive ontology matching: synergistic combination of techniques to improve the set of candidate correspondences, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 12th ISWC workshop on ontology matching (OM), Wien (AT), pp13-24, 2017
Ontology matching is the task of finding a set of entity correspondences between a pair of ontologies, i.e. an alignment. It has been receiving a lot of attention due to its broad applications. Many techniques have been proposed, among which those applying interactive strategies. An interactive ontology matching strategy uses expert knowledge to improve the quality of the final alignment. When these strategies rely on expert feedback to validate correspondences, it is important to establish criteria for selecting the set of correspondences to be shown to the expert. A bad definition of this set can prevent the algorithm from finding the right alignment, or it can delay convergence. In this work, we present techniques which, when used simultaneously, improve the set of candidate correspondences. These techniques are incorporated in an interactive ontology matching approach called ALINSyn. Experiments successfully show the potential of our proposal.
Ontology matching, WordNet, Interactive ontology matching, Ontology alignment, Interactive ontology alignment
Jérôme Euzenat, Interaction-based ontology alignment repair with expansion and relaxation, in: Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne (VIC AU), pp185-191, 2017
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassifying objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments to avoid reproducing the same mistake. Such repair experiments had been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these experiments and, using new measures, show that previous results were underestimated. We introduce new adaptation operators that improve on those previously considered. We also allow agents to go beyond the initial operators in two ways: they can generate new correspondences when they discard incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies the following properties: (1) agents still converge to a state in which no mistake occurs; (2) they achieve results far closer to the correct alignments than previously found; (3) they again reach 100% precision and coherent alignments.
The results reported in this paper for operators addjoin and refadd are not accurate, due to a software error. The results reported were worse than they should have been. Updated results can be found in [20180308-NOOR], [20180311-NOOR] and [20180529-NOOR].
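For readers unfamiliar with the setting, the following Python loop gives the flavour of such interaction games: an agent translates the class of an object through a possibly faulty alignment and discards a correspondence whenever it causes a communication failure. This is a minimal sketch with invented classes and alignments, not the experimental protocol of the paper (which also features expansion and relaxation operators).

```python
import random

random.seed(0)
classes = ["bird", "fish", "mammal"]
reference = {c: c for c in classes}                 # the correct alignment
alignment = {"bird": "fish", "fish": "fish", "mammal": "mammal"}  # noisy

for _ in range(20):                                 # repeated interactions
    obj = random.choice(classes)                    # object to classify
    target = alignment.get(obj)                     # translate its class
    if target is not None and target != reference[obj]:
        del alignment[obj]                          # 'delete' repair operator

correct = sum(1 for k, v in alignment.items() if reference[k] == v)
print(alignment, "precision:", correct / len(alignment))  # converges to 1.0
```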
Jérôme Euzenat, Crafting ontology alignments from scratch through agent communication, in: Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Nice (FR), (Bo An, Ana Bazzan, João Leite, Serena Villata, Leendert van der Torre (eds), Proc. 20th International Conference on Principles and practice of multi-agent systems (PRIMA), Lecture notes in computer science 10621, 2017), pp245-262, 2017
Agents may use different ontologies for representing knowledge and take advantage of alignments between ontologies in order to communicate. Such alignments may be provided by dedicated algorithms, but their accuracy is far from satisfactory. We already explored operators allowing agents to repair such alignments while using them for communicating. The question remained of the capability of agents to craft alignments from scratch in the same way. Here we explore the use of expanding repair operators for that purpose. When starting from empty alignments, agents fail to create them as they have nothing to repair. Hence, we introduce the capability for agents to risk adding new correspondences when no existing one is useful. We compare and discuss the results provided by this modality and show that, due to this generative capability, agents reach better results than without it in terms of the accuracy of their alignments. When starting with empty alignments, alignments reach the same quality level as when starting with random alignments, thus providing a reliable way for agents to build alignments from scratch through communication.
Ontology alignment, Alignment repair, Cultural knowledge evolution, Agent simulation, Coherence, Network of ontologies
Jomar da Silva, Kate Revoredo, Fernanda Araujo Baião, Jérôme Euzenat, Interactive ontology matching: using expert feedback to select attribute mappings, in: Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh (eds), Proc. 13th ISWC workshop on ontology matching (OM), Monterey (CA US), pp25-36, 2018
Interactive ontology matching considers the participation of domain experts during the matching process of two ontologies. An important step of this process is the selection of mappings to submit to the expert. These mappings can be between concepts, attributes or relationships of the ontologies. Existing approaches define the set of mapping suggestions only at the beginning of the process, before expert involvement. In previous work, we proposed an approach to refine the set of mapping suggestions after each expert feedback, benefiting from the expert feedback to form a set of mapping suggestions of better quality. In that approach, only concept mappings were considered during the refinement. In this paper, we present a new approach to evaluate the benefit of also considering attribute mappings during the interactive phase of the process. The approach was evaluated using the OAEI conference data set and showed an increase in recall without sacrificing precision. It was compared with the state of the art, showing that it generates alignments of state-of-the-art quality.
Ontology matching, WordNet, Interactive ontology matching, Ontology alignment, Interactive ontology alignment
Jérôme David, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, Evaluation of query transformations without data, in: Proc. WWW workshop on Reasoning on Data (RoD), Lyon (FR), pp1599-1602, 2018
Query transformations are ubiquitous in semantic web query processing. For any situation in which transformations are not proved correct by construction, the quality of these transformations has to be evaluated. Usual evaluation measures are either overly syntactic and not very informative (the result being: correct or incorrect) or dependent on the evaluation sources. Moreover, the two approaches do not necessarily yield the same result. We suggest that grounding the evaluation on query containment allows for a data-independent evaluation that is more informative than the usual syntactic evaluation. In addition, such evaluation modalities may take into account ontologies, alignments or different query languages as long as they are relevant to query evaluation.
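As a rough illustration of grounding evaluation on query containment, the sketch below decides containment between two conjunctive triple-pattern queries by searching for a homomorphism, a classical criterion; it deliberately ignores distinguished variables and is in no way the evaluation procedure of the paper.

```python
from itertools import product

def is_var(t):
    return t.startswith("?")

def contained_in(q1, q2):
    """q1 is contained in q2 if q2's patterns map homomorphically into q1's."""
    vars2 = sorted({t for pat in q2 for t in pat if is_var(t)})
    terms1 = sorted({t for pat in q1 for t in pat})
    for image in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, image))                  # candidate homomorphism
        mapped = {tuple(h.get(t, t) for t in pat) for pat in q2}
        if mapped <= set(q1):
            return True
    return False

q1 = [("?x", "type", "Train"), ("?x", "goes", "Lyon")]
q2 = [("?y", "type", "Train")]
print(contained_in(q1, q2))  # True: every answer to q1 is an answer to q2
print(contained_in(q2, q1))  # False
```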
Jérôme David, Jérôme Euzenat, Jérémy Vizzini, Linkky: Extraction de clés de liage par une adaptation de l'analyse relationnelle de concepts, in: Actes 29e journées francophones sur Ingénierie des connaissances (IC), Nancy (FR), pp271-274, 2018
RDF, Clé de liage, Liage de données, Analyse relationnelle de concepts, Analyse formelle de concepts, Network of ontologies
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, A guided walk into link key candidate extraction with relational concept analysis, in: Claudia d'Amato, Lalana Kagal (eds), Proc. on journal track of the International semantic web conference, Auckland (NZ), 2019
Data interlinking is an important task for linked data interoperability. One of the possible techniques for finding links is the use of link keys, which generalise relational keys to pairs of RDF models. We show how link key candidates may be directly extracted from RDF data sets by encoding the extraction problem in relational concept analysis. This method deals with non-functional properties and circularly dependent link key expressions. As such, it generalises those presented for non-dependent link keys and link keys over the relational model. The proposed method is able to return link key candidates involving several classes at once.
Formal Concept Analysis, Relational Concept Analysis, Linked data, Link key, Data interlinking, Resource Description Framework
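The encoding itself does not fit in a short example, but plain formal concept analysis, on which it builds, does: the sketch below enumerates the formal concepts of an invented context by closing object sets under the two derivation operators. The relational extension (scaling attributes along relations) is not reproduced here.

```python
from itertools import combinations

objects = {"o1": {"a", "b"}, "o2": {"a"}, "o3": {"b", "c"}}
attributes = {"a", "b", "c"}

def intent(objs):   # attributes shared by all objects of objs
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

def extent(attrs):  # objects carrying all attributes of attrs
    return {o for o, have in objects.items() if attrs <= have}

# Formal concepts are the pairs (extent, intent) closed under both maps.
concepts = set()
for n in range(len(objects) + 1):
    for objs in combinations(objects, n):
        it = intent(set(objs))
        concepts.add((frozenset(extent(it)), frozenset(it)))

for ext, it in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(ext), sorted(it))
```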
Manuel Atencia, Jérôme David, Jérôme Euzenat, Several link keys are better than one, or extracting disjunctions of link key candidates, in: Proc. 10th ACM international conference on knowledge capture (K-Cap), Marina del Rey (CA US), pp61-68, 2019
Link keys express conditions under which instances of two classes of different RDF data sets may be considered as equal. As such, they can be used for data interlinking. There exist algorithms to extract link key candidates from RDF data sets and different measures have been defined to evaluate the quality of link key candidates individually. For certain data sets, however, it may be necessary to use more than one link key on a pair of classes to retrieve a more complete set of links. To this end, in this paper, we define disjunction of link keys, propose strategies to extract disjunctions of link key candidates from RDF data, and apply existing quality measures to evaluate them. We also report on experiments with these strategies.
Linked data, RDF, Data interlinking, Link key, Antichain
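A loose Python illustration of why a disjunction of link keys can retrieve more links than any single one: each key only covers the instance pairs identified by its own property comparisons, and the union of their link sets is more complete. Datasets, properties and keys are invented, not taken from the paper.

```python
from itertools import product

d1 = {"a1": {"name": {"Anna"}},
      "a2": {"alias": {"Ben"}}}
d2 = {"b1": {"nom": {"Anna"}},
      "b2": {"surnom": {"Ben"}}}

def links(key):
    """Instance pairs sharing a value for every property pair of the key."""
    return {(i, j) for i, j in product(d1, d2)
            if all(d1[i].get(p, set()) & d2[j].get(q, set()) for p, q in key)}

k1 = {("name", "nom")}        # alone, only links ('a1', 'b1')
k2 = {("alias", "surnom")}    # alone, only links ('a2', 'b2')
print(links(k1) | links(k2))  # their disjunction links both pairs
```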
Jérôme Euzenat, Replicator-interactor in experimental cultural knowledge evolution, in: Proc. 2nd JOWO workshop on Interaction-Based Knowledge Sharing (WINKS), Graz (AT), 2019
Cultural evolution may be studied at a `macro' level, inspired from population dynamics, or at a `micro' level, inspired from genetics. The replicator-interactor model generalises the genotype-phenotype distinction of genetic evolution. Here, we consider how it can be applied to cultural knowledge evolution experiments. In particular, we consider knowledge as replicator and the behaviour it induces as interactor. We show that this requires addressing problems concerning transmission. We discuss the introduction of horizontal transmission within the replicator-interactor model and/or differential reproduction within cultural evolution experiments.
Manuel Atencia, Jérôme David, Jérôme Euzenat, Liliana Ibanescu, Nathalie Pernelle, Fatiha Saïs, Élodie Thiéblin, Cássia Trojahn dos Santos, Discovering expressive rules for complex ontology matching and data interlinking, in: Pavel Shvaiko, Jérôme Euzenat, Oktie Hassanzadeh, Ernesto Jiménez-Ruiz, Cássia Trojahn dos Santos (eds), Proc. 14th ISWC workshop on ontology matching (OM), Auckland (NZ), pp199-200, 2020
Ontology matching and data interlinking, as distinct tasks, aim at facilitating interoperability between different knowledge bases. Although the field has developed considerably in recent years, most works still focus on generating simple correspondences between entities. These correspondences are however insufficient to fully cover the different types of heterogeneity between knowledge bases, and complex correspondences are therefore required. Compared to simple matching, few approaches for complex matching have been proposed, focusing on correspondence patterns or exploiting common instances between the ontologies. Similarly, unsupervised data interlinking approaches (which do not require labelled data samples) have recently been developed. One approach consists in discovering linking rules such as simple keys or conditional keys on unlabelled data. The results have shown that the more expressive the rules, the higher the recall. Even more expressive rules (referential expressions, graph keys, etc.) are thus required, but naive approaches to the discovery of these rules cannot be envisaged on large data sets. Existing approaches presuppose either that the data conform to the same ontology or that all possible pairs of properties be examined. Complementarily, a link key is a set of pairs of properties that identify the instances of two classes of two RDF datasets. Such link keys may be directly extracted without the need for an alignment. We introduce here an approach that aims at evaluating the impact of complex correspondences on the task of data interlinking established from the application of keys.
Data interlinking, Ontology matching, Complex correspondence
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Agent ontology alignment repair through dynamic epistemic logic, in: Bo An, Neil Yorke-Smith, Amal El Fallah Seghrouchni, Gita Sukthankar (eds), Proc. 19th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Auckland (NZ), pp1422-1430, 2020
Ontology alignments enable agents to communicate while preserving heterogeneity in their information. Alignments may not be provided as input and should be able to evolve when communication fails or when new information contradicting the alignment is acquired. In the Alignment Repair Game (ARG) this evolution is achieved via adaptation operators. ARG was evaluated experimentally and the experiments showed that agents converge towards successful communication and improve their alignments. However, whether the adaptation operators are formally correct, complete or redundant is still an open question. In this paper, we introduce a formal framework based on Dynamic Epistemic Logic that allows us to answer this question. This framework allows us (1) to express the ontologies and alignments used, (2) to model the ARG adaptation operators through announcements and conservative upgrades and (3) to formally establish the correctness, partial redundancy and incompleteness of the adaptation operators in ARG.
The refine operator is not partially redundant with respect to Agent b (because it has no way to detect the incoherence from the announcement alone).
Ontology alignment, Alignment repair, Agent communication, Dynamic Epistemic Logic
Line van den Berg, Manuel Atencia, Jérôme Euzenat, Unawareness in multi-agent systems with partial valuations, in: Proc. 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems (LAMAS), Auckland (NZ), 2020
Public signature awareness is satisfied if agents are aware of the vocabulary (propositions) used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other's signatures prevents them from adapting their vocabularies to newly gained information, whether from the environment or learned through agent communication. This is therefore not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agent signature unawareness, and we give a first view on defining the dynamics of raising and forgetting awareness in this framework.
Awareness, Dynamic Epistemic Logic, Partial valuations, Multi-agent systems
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge improvement and diversity under interaction-driven adaptation of learned ontologies, in: Ulle Endriss, Ann Nowé, Frank Dignum, Alessio Lomuscio (eds), Proc. 20th ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), London (UK), pp242-250, 2021
When agents independently learn knowledge, such as ontologies, about their environment, it may be diverse, incorrect or incomplete. This knowledge heterogeneity could lead agents to disagree, thus hindering their cooperation. Existing approaches usually deal with this interaction problem by relating ontologies, without modifying them, or, on the contrary, by focusing on building common knowledge. Here, we consider agents adapting ontologies learned from the environment in order to agree with each other when cooperating. In this scenario, fundamental questions arise: Do they achieve successful interaction? Can this process improve knowledge correctness? Do all agents end up with the same ontology? To answer these questions, we design a two-stage experiment. First, agents learn to take decisions about the environment by classifying objects and the learned classifiers are turned into ontologies. In the second stage, agents interact with each other to agree on the decisions to take and modify their ontologies accordingly. We show that agents indeed reduce interaction failure, most of the time they improve the accuracy of their knowledge about the environment, and they do not necessarily opt for the same ontology.
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Jérôme Euzenat, Fixed-point semantics for barebone relational concept analysis, in: Proc. 16th international conference on formal concept analysis (ICFCA), Strasbourg (FR), (Agnès Braud, Aleksey Buzmakov, Tom Hanika, Florence Le Ber (eds), Proc. 16th international conference on formal concept analysis (ICFCA), Lecture notes in computer science 12733, 2021), pp20-37, 2021
Relational concept analysis (RCA) extends formal concept analysis (FCA) by taking into account binary relations between formal contexts. It has been designed for inducing description logic TBoxes from ABoxes, but can be used more generally. It is especially useful when there exist circular dependencies between objects. In this case, it extracts a unique stable concept lattice family grounded on the initial formal contexts. However, other stable families may exist whose structure depends on the same relational context. These may be useful in applications that need to extract a richer structure than the minimal grounded one. This issue is first illustrated in a reduced version of RCA, which only retains the relational structure. We then redefine the semantics of RCA on this reduced version in terms of concept lattice families closed by a fixed-point operation induced by this relational structure. We show that these families admit a least and greatest fixed point and that the well-grounded RCA semantics is characterised by the least fixed point. We then study the structure of other fixed points and characterise the interesting lattices as the self-supported fixed points.
Formal Concept Analysis
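The least fixed point mentioned above can be pictured generically: iterate a monotone, expansive function from the bottom element of a finite lattice until it stabilises. This is only a schematic sketch with an invented closure rule, far simpler than the actual RCA operations.

```python
def lfp(f, bottom):
    """Least fixed point of an expansive monotone f on a finite lattice."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

succ = {0: 1, 1: 2, 2: 2}                      # toy dependency between items
close = lambda s: s | {succ[i] for i in s}     # one expansion step
print(lfp(close, frozenset({0})))              # frozenset({0, 1, 2})
```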
Jérôme Euzenat, The web as a culture broth for agents and people to grow knowledge, in: Proc. Dagstuhl seminar on Autonomous agents on the web, Wadern (DE), (Olivier Boissier, Andrei Ciortea, Andreas Harth, Alessandro Ricci (eds), Autonomous agents on the web (seminar 21072), Dagstuhl reports 11(1), 2021), pp40-41, 2021
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Knowledge transmission and improvement across generations do not need strong selection, in: Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew Taylor (eds), Proc. 21st ACM international conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (Online), pp163-171, 2022
Agents have been used for simulating cultural evolution and cultural evolution can be used as a model for artificial agents. Previous results have shown that horizontal, or intra-generation, knowledge transmission allows agents to improve the quality of their knowledge to a certain level. Moreover, variation generated through vertical, or inter-generation, transmission allows agents to exceed that level. Such results were obtained under specific conditions such as the drastic selection of agents allowed to transmit their knowledge, seeding the process with correct knowledge or introducing artificial noise during transmission. Here, we question the necessity of such measures and study their impact on the quality of transmitted knowledge. For that purpose, we combine the settings of two previous experiments and relax these conditions (no strong selection of teachers, no fully correct seed, no introduction of artificial noise). The rationale is that if interactions lead agents to improve their overall knowledge quality, this should be sufficient to ensure correct knowledge transmission, and that transmission mechanisms are sufficiently imperfect to produce variation. In this setting, we confirm that vertical transmission improves on horizontal transmission even without drastic selection and oriented learning. We also show that horizontal transmission is able to compensate for the lack of parent selection if it is maintained for long enough. This means that it is not necessary to take the most successful agents as teachers, neither in vertical nor horizontal transmission, to cumulatively improve knowledge.
Ontology, Multi-agent social simulation, Multi-agent learning, Knowledge diversity
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Transmission de connaissances et sélection, in: Valérie Camps (éd), Actes 30e journées francophones sur Systèmes multi-agent (JFSMA), Saint-Étienne (FR), pp63-72, 2022
Les agents peuvent être utilisés pour simuler l'évolution culturelle et l'évolution culturelle peut être utilisée comme modèle pour les agents artificiels. Des expériences ont montré que la transmission intragénérationnelle des connaissances permet aux agents d'en améliorer la qualité. De plus, sa transmission intergénérationnelle permet de dépasser ce niveau. Ces résultats ont été obtenus dans des conditions particulières : sélection drastique des agents transmettant leurs connaissances, initialisation avec des connaissances correctes ou introduction de bruit lors de la transmission. Afin d'étudier l'impact de ces mesures sur la qualité de la connaissance transmise, nous combinons les paramètres de deux expériences précédentes et relâchons ces conditions. Ce dispositif confirme que la transmission verticale permet d'améliorer la qualité de la connaissance obtenue par transmission horizontale, même sans sélection drastique et apprentissage orienté. Il montre également qu'une transmission intragénérationnelle suffisante peut compenser l'absence de sélection parentale.
Simulation sociale multi-agents, Évolution culturelle, Transmission des connaissances, Génération d'agents, Évolution culturelle des connaissances
Yasser Bourahla, Jérôme David, Jérôme Euzenat, Meryem Naciri, Measuring and controlling knowledge diversity, in: Tiago Prince Sales, Maria Hedblom, He Tan, Lucía Gómez Álvarez, Rafael Peñaloza, Srdjan Vesic (eds), Proc. 1st JOWO workshop on formal models of knowledge diversity (FMKD), Jönköping (SE), 2022
Assessing knowledge diversity may be useful for many purposes. In particular, it is necessary to measure diversity in order to understand how it arises or is preserved; it is also necessary to control it in order to measure its effects. Here we consider measuring knowledge diversity using two components: (a) a diversity measure taking advantage of (b) a knowledge difference measure. We present the general principles and various candidates for such components. We discuss how these measures may be used to generate populations of agents with controlled levels of knowledge diversity.
Knowledge diversity, Diversity measure, Ontology dissimilarity, Diversity control, Entropy
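A minimal sketch of the two-component scheme, with the Jaccard distance between (invented) axiom sets as difference measure, and mean pairwise difference and Shannon entropy as two candidate diversity measures; these are merely examples of possible components, not the paper's definitions.

```python
from itertools import combinations
from math import log2

def difference(o1, o2):        # Jaccard distance between two axiom sets
    union = o1 | o2
    return 1 - len(o1 & o2) / len(union) if union else 0.0

def mean_pairwise(pop):        # average difference over all agent pairs
    pairs = list(combinations(pop, 2))
    return sum(difference(a, b) for a, b in pairs) / len(pairs)

def entropy(pop):              # entropy of the distribution of ontologies
    counts = {}
    for o in pop:
        counts[o] = counts.get(o, 0) + 1
    return -sum(c / len(pop) * log2(c / len(pop)) for c in counts.values())

population = [frozenset({"A subClassOf B"}),
              frozenset({"A subClassOf B"}),
              frozenset({"A subClassOf B", "A subClassOf C"})]
print(mean_pairwise(population), entropy(population))
```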
Yasser Bourahla, Manuel Atencia, Jérôme Euzenat, Inter-generation knowledge transmission without individual selection, in: Proc. 4th conference on Conference of the Cultural evolution society, Aarhus (DK), 2022
cultural knowledge evolution, vertical transmission, horizontal transmission, multi-agent simulation, knowledge accuracy
Jérôme Euzenat, Beyond reproduction, experiments want to be understood, in: Proc. 2nd workshop on Scientific knowledge: representation, discovery, and assessment (SciK), Lyon (FR), pp774-778, 2022
The content of experiments must be semantically described. This topic has already been largely covered. However, some neglected benefits of such an approach provide more arguments in favour of scientific knowledge graphs. Beyond being searchable through flat metadata, a knowledge graph of experiment descriptions may be able to provide answers to scientific and methodological questions. This includes identifying non experimented conditions or retrieving specific techniques used in experiments. In turn, this is useful for researchers as this information can be used for repurposing experiments, checking claimed results or performing meta-analyses.
e-science, scientific knowledge graphs, semantic experiment description, semantic technologies
Jérôme Euzenat, Can AI systems culturally evolve their knowledge?, in: Proc. 4th conference on Conference of the Cultural evolution society, Aarhus (DK), 2022
agent-based models, computational cultural knowledge evolution, artificial intelligence
Jérôme Euzenat, Society = Autonomy + Adaptation, in: Proc. Dagstuhl seminar on Agents on the web, Wadern (DE), (Olivier Boissier, Andrei Ciortea, Andreas Harth, Alessandro Ricci, Danai Vachtsevanou (eds), Agents on the web (seminar 21072), Dagstuhl reports 13(2), 2023), pp86, 2023
What makes a truly lively society is the capability of its members to autonomously adapt to others. It is not a set of norms cast in iron, be they programming norms or 'legal norms'. It is a set of beings trying to behave with others. This behaviour may lead to explicit norms that make explicit what need not be reinvented, but they may well remain implicit, hence continuously adapted. We should design software agents so that they are able to elaborate what drives their (social, but not only) behaviours. They should be allowed to try, to make mistakes, and to transmit what they know. This is the ground on which evolution may happen. This capacity is what should be built into agents in order for them to behave without breaking too many things. The goal is not to reach a static equilibrium: in an open-ended agent space there are always opportunities to learn new things, meet new people and visit new places. Hence, rather than the state reached by agents, it is their ability to surf a dynamic disequilibrium that must be sought. This statement is somewhat made to trigger reactions within the seminar. It reacts to the apparent loss of autonomy of agents. It also extends the one I made for the previous seminar.
Andreas Kalaitzakis, Jérôme Euzenat, À quoi sert la spécialisation en évolution culturelle de la connaissance?, in: Maxime Morge (éd), Actes 31e journées francophones sur Systèmes multi-agent (JFSMA), Strasbourg (FR), pp76-85, 2023
Des agents peuvent faire évoluer leurs ontologies en accomplissant conjointement une tâche. Nous considérons un ensemble de tâches dont chaque agent ne considère qu'une partie. Nous supposons que moins un agent considère de tâches, plus la précision de sa meilleure tâche sera élevée. Pour le vérifier, nous simulons différentes populations considérant un nombre de tâches croissant. De manière contre-intuitive, l'hypothèse n'est pas vérifiée. D'une part, lorsque les agents ont une mémoire illimitée, plus un agent considère de tâches, plus il est précis. D'autre part, lorsque les agents ont une mémoire limitée, les objectifs de maximiser la précision de leurs meilleures tâches et de s'accorder entre eux sont mutuellement exclusifs. Lorsque les sociétés favorisent la spécialisation, les agents n'améliorent pas leur précision. Cependant, ces agents décideront plus souvent en fonction de leurs meilleures tâches, améliorant ainsi la performance de leur société.
Évolution culturelle de la connaissance, Simulation multi-agents, Spécialisation des agents
Andreas Kalaitzakis, Jérôme Euzenat, Multi-tasking resource-constrained agents reach higher accuracy when tasks overlap, in: Proc. 20th European conference on multi-agents systems (EUMAS), Napoli (IT), (Vadim Malvone, Aniello Murano (eds), Proc. 20th European conference on multi-agents systems (EUMAS), Lecture notes in computer science 14282, 2023), pp425-434, 2023
Agents have been previously shown to evolve their ontologies while interacting over a single task. However, little is known about how interacting over several tasks affects the accuracy of agent ontologies. Is knowledge learned by tackling one task beneficial for another task? We hypothesize that multi-tasking agents tackling tasks that rely on the same properties, are more accurate than multi-tasking agents tackling tasks that rely on different properties. We test this hypothesis by varying two parameters. The first parameter is the number of tasks assigned to the agents. The second parameter is the number of common properties among these tasks. Results show that when deciding for different tasks relies on the same properties, multi-tasking agents reach higher accuracy. This suggests that when agents tackle several tasks, it is possible to transfer knowledge from one task to another.
Cultural knowledge evolution, Knowledge transfer, Multi-tasking
Luisa Werner, Pierre Genevès, Nabil Layaïda, Jérôme Euzenat, Damien Graux, Reproduce, replicate, reevaluate: the long but safe way to extend machine learning methods, in: Proc. 38th AAAI Conference on Artificial Intelligence (AAAI), Vancouver (CA), pp15850-15858, 2024
Reproducibility is a desirable property of scientific research. On the one hand, it increases confidence in results. On the other hand, reproducible results can be extended on a solid basis. In rapidly developing fields such as machine learning, the latter is particularly important to ensure the reliability of research. In this paper, we present a systematic approach to reproducing (using the available implementation), replicating (using an alternative implementation) and reevaluating (using different datasets) state-of-the-art experiments. This approach enables the early detection and correction of deficiencies and thus the development of more robust and transparent machine learning methods. We detail the independent reproduction, replication, and reevaluation of the initially published experiments with a method that we want to extend. For each step, we identify issues and draw lessons learned. We further discuss solutions that have proven effective in overcoming the encountered problems. This work can serve as a guide for further reproducibility studies and generally improve reproducibility in machine learning.
Transparent, interpretable, explainable machine learning, Ethics, bias, and fairness, Graph-based machine learning, Neuro-symbolic learning, Representation learning
PhD theses/Thèses de doctorats et habilitations
Jérôme Euzenat, Un système de maintenance de la vérité à propagation de contextes, Thèse d'informatique, Université Joseph Fourier, Grenoble (FR), 131p., février 1990
Le raisonnement hypothétique consiste à compléter la connaissance disponible afin de poursuivre un raisonnement. L'aide aux utilisateurs de systèmes de raisonnement hypothétique nécessite la conception d'algorithmes spécifiques, pour pouvoir gérer efficacement les hypothèses et leurs conséquences et pour permettre de poser automatiquement des hypothèses. Cette dernière exigence conduit à implémenter un raisonnement non monotone. Les systèmes de maintenance de la vérité enregistrent les inférences produites par un système de raisonnement sous forme d'un graphe de dépendances et se chargent de garantir la cohérence des formules présentes dans une base de connaissance. Deux types de systèmes de maintenance de la vérité ont été proposés: (i) Les systèmes à propagation acceptent des inférences non monotones et propagent la validité absolue au sein du graphe de dépendances. L'étiquetage obtenu représente une interprétation du graphe. (ii) Les systèmes à contextes n'acceptent que des inférences monotones mais propagent des étiquettes dénotant les contextes dans lesquels les formules doivent être présentes. Ils permettent donc de raisonner sous plusieurs contextes simultanément. Le but de ce travail est de concevoir un système qui combine leurs avantages. Il permet de raisonner simultanément sous plusieurs contextes à l'aide d'inférences non monotones. Pour cela, des environnements capables de tenir compte de l'absence d'hypothèses sont définis. Une interprétation est associée à ces environnements et est étendue aux noeuds du graphe de dépendances, en accord avec l'interprétation des systèmes à propagation. Cela permet d'établir la signification des étiquettes associées aux noeuds du graphe, et de proposer de multiples possibilités de soumettre des requêtes au système. Un système correspondant à cette caractérisation, le CP-TMS, est implémenté comme une extension des systèmes de maintenance de la vérité à propagation. Cette implémentation est décrite ici, puis critiquée.
Mécanisation du raisonnement, raisonnement hypothétique, raisonnement non monotone, maintenance de la vérité
Jérôme Euzenat, Représentations de connaissance: de l'approximation à la confrontation, Habilitation à diriger des recherches, Université Joseph Fourier, Grenoble (FR), janvier 1999
référence INRIA TH-015
Un formalisme de représentation de connaissance a pour but de permettre la modélisation d'un domaine particulier. Bien entendu, il existe divers langages de ce type et, au sein d'un même langage, divers modèles peuvent représenter un même domaine. Ce mémoire est consacré à l'étude des rapports entre de multiples représentations de la même situation. Il présente les travaux de l'auteur entre 1992 et 1998 en progressant de la notion d'approximation, qui fonde la représentation, vers la confrontation entre les diverses représentations. Tout d'abord la notion d'approximation au sein des représentations de connaissance par objets est mise en avant, en particulier en ce qui concerne l'ensemble des mécanismes tirant parti de la structure taxonomique (classification, catégorisation, inférence de taxonomie). À partir de la notion de système classificatoire qui permet de rendre compte de ces mécanismes de manière unique on montre comment un système de représentation de connaissance peut être construit. Le second chapitre introduit la possibilité de tirer parti de multiples taxonomies (sur le même ensemble d'objets) dans un système de représentation de connaissance. La multiplicité des représentations taxonomiques est alors introduite en tant que telle et justifiée. Ces multiples taxonomies sont replacées dans le cadre des systèmes classificatoires présentés auparavant. La notion de granularité, qui fait l'objet du troisième chapitre, concerne la comparaison de représentations diverses de la même situation sachant qu'elles ont un rapport très particulier entre elles puisqu'elles représentent la même situation sous différentes granularités. À la différence des autres chapitres, celui-ci n'est pas situé dans le cadre des représentations de connaissance par objets mais dans celui des algèbres de relations binaires utilisées pour représenter le temps et l'espace. Le quatrième chapitre, enfin, va vers la confrontation des différentes représentations de manière à en tirer le meilleur parti (obtenir une représentation consensuelle ou tout simplement une représentation consistante). Le but des travaux qui y sont présentés est de développer un système d'aide à la construction collaborative de bases de connaissance consensuelles. À cette fin, les utilisateurs veulent mettre dans une base commune (qui doit être consistante et consensuelle) le contenu de leurs bases de connaissance individuelles. Pour cela, deux problèmes particuliers sont traités : la conception d'un mécanisme de révision, pour les représentations de connaissance par objets, permettant aux utilisateurs de traiter les problèmes d'inconsistance et la conception d'un protocole de soumission de connaissance garantissant l'obtention d'une base commune consensuelle. Cet aperçu partiel des travaux possibles dans l'étude des relations entre représentations est limité, mais il met en évidence le caractère non impératif des solutions proposées qui s'appliquent bien au cadre où le modélisateur interagit avec le système de représentation.
représentation de connaissance, approximation, bases de connaissance, modélisation, représentation par objets, point de vue, passerelle, classification, catégorisation, inférence de taxonomie, granularité, représentation temporelle, algèbre de relations binaires, révision, consensus, TROEPS, CO4
Prospective reports/Rapports de prospective
Jérôme Euzenat (ed), Research challenges and perspectives of the Semantic web, EU-NSF Strategic report, ERCIM, Sophia Antipolis (FR), 82p., January 2002
Lecture notes/Notes de cours
Jérôme Euzenat, Sémantique des représentations de connaissance, Notes de cours, université Joseph Fourier, Grenoble (FR), 125p., décembre 1998
Jérôme Euzenat, Semantic web semantics, Lecture notes, université Joseph Fourier, Grenoble (FR), 190p., 2007
Reviews/Recensions
Jérôme Euzenat, A theory of computer semiotics par Peter Bøgh Andersen, Bulletin de l'AFIA 55:55-58, 2003
Jérôme Euzenat, Amedeo Napoli, Spinning the semantic web: bringing the world wide web to its full potential par Dieter Fensel, James Hendler, Henry Lieberman and Wolfgang Wahlster, Bulletin de l'AFIA 56-57:18-21, 2003
Jérôme Euzenat, Amedeo Napoli, The semantic web: year one (Spinning the semantic web: bringing the world wide web to its full potential by Dieter Fensel, James Hendler, Henry Lieberman and Wolfgang Wahlster), IEEE Intelligent systems 18(6):76-78, 2003
Jérôme Euzenat, De la langue à la connaissance: approche expérimentale de l'évolution culturelle, Bulletin de l'AFIA 100:9-12, 2018
Technical reports/Rapports de recherche
Jérôme Euzenat, Le système de maintenance de la vérité à propagation de contextes, Rapport de recherche 779, IMAG, Grenoble (FR), 42p., mai 1989
Les systèmes de maintenance de la vérité ont été conçus pour raisonner à l'aide de connaissance incomplète. Le CP-TMS est un système de maintenance de la vérité tentant de combiner les avantages des systèmes à propagation (TMS) - autorisant l'utilisation d'inférences non monotones - et des systèmes à contextes (ATMS) - considérant le raisonnement sous plusieurs contextes simultanément. Il maintient un graphe de dépendances entre les objets manipulés par un système de raisonnement et propage à travers ce graphe les contextes dans lesquels les noeuds sont valides. Ces contextes prennent en compte l'incomplétude des bases de connaissance et permettent d'exprimer des inférences non monotones. Une théorie de l'interprétation des contextes est présentée. Elle garantit certaines bonnes propriétés aux contextes manipulés par l'implémentation. Le système garantit la consistance des contextes manipulés et permet de répondre à des requêtes concernant différents contextes simultanément au regard de la base théorique ainsi posée.
Systèmes de maintenance de la vérité, Raisonnement non monotone, Raisonnement multimonde, Raisonnement hypothétique
Jérôme Euzenat, Multiple labelling generators in non monotonic RMS graphs, Research report 2076, INRIA Rhône-Alpes, Grenoble (FR), 49p., October 1993
Non monotonic reason maintenance systems (RMS) are able, provided with a dependency graph (which represents a reasoning), to return a weakly grounded labelling of that graph (which represents a set of beliefs that the reasoner can hold). There can be several weakly grounded labellings. This work investigates the labelling process of these graphs in order to find the parts of the graph which lead to multiple labellings: the multiple labelling generators (MLG). Two criteria are presented in order to isolate them. It is proved that:
- they do not belong to stratified even strongly connected components (SCC) of the complete support graph;
- they are successive initial SCC of the unlabelled part of alternate even SCC.
Previous algorithms from Doyle and Goodwin are considered and new ones are put forward. This leads to a better understanding of labelling generation mechanisms and previous algorithms. They are discussed from the standpoint of the properties of correctness and potential completeness (the ability to find one, but potentially any, of the labellings).
Non monotonic reasoning, truth maintenance, TMS, reason maintenance, RMS, dependency graph, label propagation
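To give the flavour of the labelling problem, the sketch below computes one labelling of a tiny non-monotonic dependency graph by iterating the justification rule from an all-OUT start. Graphs whose loops go through 'out' antecedents may admit several labellings or none, which is precisely what the report analyses; the encoding here is invented for illustration.

```python
just = {                        # node -> list of (in-list, out-list)
    "a": [([], [])],            # premise: unconditionally justified
    "b": [(["a"], ["c"])],      # b is IN if a is IN and c is OUT
    "c": [],                    # no justification: c stays OUT
}

label = {n: False for n in just}          # start with everything OUT
for _ in range(len(just)):                # enough passes to stabilise here
    for n, js in just.items():
        label[n] = any(all(label[i] for i in ins) and
                       not any(label[o] for o in outs)
                       for ins, outs in js)
print(label)                              # {'a': True, 'b': True, 'c': False}
```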
Jérôme Euzenat, Granularité dans les représentations spatio-temporelles, Rapport de recherche 2242, INRIA Rhône-Alpes, Grenoble (FR), 62p., avril 1994
Afin de représenter le temps sous plusieurs niveaux de détail, une représentation temporelle granulaire est proposée. Une telle représentation dispose les entités temporelles dans différents espaces organisés hiérarchiquement et nommés granularités. Elle conduit à conserver la représentation symbolique du temps et à simplifier la représentation numérique. Par contre, elle nécessite la définition d'opérateurs de conversion des représentations entre deux granularités afin de pouvoir utiliser une même entité temporelle sous différentes granularités.
Les propriétés que doivent respecter ces opérateurs afin de conserver les interprétations classiques de ces représentations sont exposées et des opérateurs de conversion symboliques et numériques sont proposés. Sous l'aspect symbolique, les opérateurs sont compatibles avec la représentation des relations temporelles sous forme d'algèbre de points et d'intervalles. En ce qui concerne la conversion numérique, certaines contraintes doivent être ajoutées afin de disposer des propriétés escomptées. Enfin, des possibilités d'utilisation de la latitude laissée par la définition des opérateurs sont discutées et l'extension de la représentation granulaire à d'autres espaces est explorée.
Représentation temporelle, Représentation spatiale, Points de vue, Granularité, Localité, Histoire
Isabelle Crampé, Jérôme Euzenat, Fondements de la révision dans un langage d'objets simple, Rapport de recherche 3060, INRIA Rhône-Alpes, Grenoble (FR), 46p., décembre 1996
L'ajout d'une connaissance dans une base de connaissance peut provoquer une inconsistance. La révision consiste alors à modifier la base pour la rendre consistante avec la dernière connaissance à ajouter. Résoudre ce problème est très utile dans l'assistance aux utilisateurs de bases de connaissance. Afin de poser les bases d'un tel mécanisme pour les objets, une représentation par objets minimale est formalisée. Elle est dotée de mécanismes d'inférence et d'une caractérisation syntaxique de l'inconsistance et de l'incohérence. La notion de base de connaissance révisée est définie sur ce langage. Un critère de minimalité, à la fois sémantique et syntaxique, permet de définir les bases révisées les plus proches de la base initiale.
Révision, minimisation des modifications, représentation de connaissance par objets
Jérôme Euzenat, A protocol for building consensual and consistent repositories, Research report 3260, INRIA Rhône-Alpes, Grenoble (FR), 46p., September 1997
Distributed collaborative construction of a repository (e.g. knowledge base, document, design description) requires tools enforcing the consistency of the repository and the agreement of all the collaborators on its content. The CO4 protocol presented herein manages the communication between collaborators in order to maintain these properties over a hierarchy of repositories. It mimics the submission of articles to peer-reviewed journals (except that each change must be accepted by all the participants). The protocol is independent of the nature of the repository and is based on a restricted set of message types. The communication between collaborators is described through a set of rules. The protocol is live and fair, and maintains a consistent repository that is consensual among the collaborators.
Computer supported collaborative work, groupwork, knowledge sharing, negotiation, interaction protocol, knowledge communication, consensus
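The core acceptance rule can be pictured with a toy round of such a protocol: a change is committed to the shared repository only if every collaborator accepts it. This sketch ignores message types, the hierarchy of repositories and the liveness and fairness machinery; the collaborator policies are invented.

```python
repository = set()

def submit(change, collaborators):
    if all(accepts(change) for accepts in collaborators):
        repository.add(change)            # consensual: committed
        return "accepted"
    return "rejected"                     # any refusal blocks the change

alice = lambda c: "contradiction" not in c
bob = lambda c: len(c) < 40
print(submit("cats are mammals", [alice, bob]))              # accepted
print(submit("contradiction: cats are fish", [alice, bob]))  # rejected
print(repository)                                            # {'cats are mammals'}
```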
Masahiro Hori, Jérôme Euzenat, Peter Patel-Schneider, OWL Web Ontology Language XML Presentation Syntax, Note, Worldwide web consortium, Cambridge (MA US), 2003
This document describes an XML presentation syntax and XML Schemas for OWL 1.0 sublanguages: OWL Lite, OWL DL, and OWL Full. This document has been written to meet the requirement that OWL 1.0 should have an XML serialization syntax (R15 in [OWL Requirement]). It is not intended to be a normative specification. Instead, it represents a suggestion of one possible XML presentation syntax for OWL.
Faisal Alkhateeb, Jean-François Baget, Jérôme Euzenat, RDF with regular expressions, Research report 6191, INRIA Rhône-Alpes, Grenoble (FR), 32p., May 2007
RDF is a knowledge representation language dedicated to the annotation of resources within the framework of the semantic web. Among the query languages for querying an RDF knowledge base, some, such as SPARQL, are based on the formal semantics of RDF and the concept of semantic consequence; others, inspired by work in databases, use regular expressions making it possible to search for paths in the graph associated with the knowledge base. In order to combine the expressivity of these two approaches, we define a mixed language, called PRDF (for "Paths RDF"), in which the arcs of a graph can be labelled by regular expressions. We define the syntax and semantics of these objects, and propose a correct and complete algorithm which, through a kind of homomorphism, computes the semantic consequence between an RDF graph and a PRDF graph. This algorithm is at the heart of query answering for the PSPARQL query language, the extension of the SPARQL query language which we propose and have implemented: a PSPARQL query allows one to query an RDF knowledge base using graph patterns whose predicates are regular expressions.
semantic web, query language, RDF, SPARQL, regular expressions
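The path-querying side of the proposal can be pictured by evaluating a regular expression of the form (train|bus)+ over an edge-labelled graph through iterated expansion. Graph and labels are invented, and PRDF semantics, which combines such paths with RDF consequence, is considerably richer.

```python
graph = {("A", "train", "B"), ("B", "bus", "C"), ("C", "train", "D")}

def step(nodes, labels):        # one traversal step along allowed labels
    return {t for s, p, t in graph if s in nodes and p in labels}

def reachable(start, labels):   # nodes reachable via one or more steps
    seen, frontier = set(), {start}
    while frontier:
        frontier = step(frontier, labels) - seen
        seen |= frontier
    return seen

print(reachable("A", {"train", "bus"}))   # {'B', 'C', 'D'}
```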
Faisal Alkhateeb, Jean-François Baget, Jérôme Euzenat, Constrained regular expressions in SPARQL, Research report 6360, INRIA Rhône-Alpes, Grenoble (FR), 32p., October 2007
RDF is a knowledge representation language dedicated to the annotation of resources within the Semantic Web. Though RDF itself can be used as a query language for an RDF knowledge base (using RDF consequence), the need for added expressivity in queries has led to the definition of the SPARQL query language. SPARQL queries are defined on top of graph patterns that are basically RDF (and more precisely GRDF) graphs. To be able to characterize paths of arbitrary length in a query (e.g., "does there exist a trip from town A to town B using only trains and buses?"), we have already proposed the PRDF (for Path RDF) language, effectively mixing RDF reasoning with database-inspired regular paths. However, these queries do not allow expressing constraints on the internal nodes (e.g., "moreover, one of the stops must provide a wireless connection"). To express these constraints, we present here an extension of RDF, called CPRDF (for Constrained paths RDF). For this extension of RDF, we provide an abstract syntax and an extension of RDF semantics. We characterize query answering (the query is a CPRDF graph, the knowledge base is an RDF graph) as a particular case of CPRDF entailment that can be computed using some kind of graph homomorphism. Finally, we use CPRDF graphs to generalize SPARQL graph patterns, defining the CPSPARQL extension of that query language, and prove that the problem of query answering using only CPRDF graphs is NP-hard, while query answering remains PSPACE-complete for CPSPARQL.
semantic web, query language, RDF, SPARQL, regular expressions
Manuel Atencia, Jérôme Euzenat, Marie-Christine Rousset, Exploiting ontologies and alignments for trust in semantic P2P networks, Research report 18, LIG, Grenoble (FR), 10p., June 2011
In a semantic P2P network, peers use separate ontologies and rely on alignments between their ontologies for translating queries. However, alignments may be limited, unsound or incomplete, and generate flawed translations, thereby producing unsatisfactory answers. In this paper we propose a trust mechanism that can assist peers to select those in the network that are better suited to answer their queries. The trust that a peer has towards another peer is subject to a specific query and approximates the probability that the latter peer will provide a satisfactory answer. In order to compute trust, we exploit the information provided by peers' ontologies and alignments, along with the information that comes from peers' experience. Trust values are refined over time as more queries are sent and answers received, and we prove that these approximations converge.
semantic alignment, trust, probabilistic populated ontology
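A minimal sketch of experience-based trust refinement: trust in a peer for a given query approximates the probability of obtaining a satisfactory answer, here estimated as the observed success rate smoothed by a uniform Beta(1,1) prior. The paper's mechanism also exploits ontologies and alignments, which this toy update ignores.

```python
class Trust:
    """Trust of one peer towards another for a specific query."""
    def __init__(self):
        self.ok, self.total = 0, 0

    def update(self, satisfactory):       # record one answered query
        self.ok += bool(satisfactory)
        self.total += 1

    def value(self):                      # mean of Beta(ok + 1, ko + 1)
        return (self.ok + 1) / (self.total + 2)

t = Trust()
for satisfactory in [True, True, False, True]:
    t.update(satisfactory)
print(round(t.value(), 3))                # 0.667 after 3 of 4 good answers
```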
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, PSPARQL query containment, Research report 7641, INRIA, Grenoble (FR), 32p., June 2011
Querying the semantic web is mainly done through SPARQL. This language has been studied from different perspectives such as optimization and extension. One of its extensions, PSPARQL (Path SPARQL) provides queries with paths of arbitrary length. We study the static analysis of queries written in this language, in particular, containment of queries: determining whether, for any graph, the answers to a query are contained in those of another query. Our approach consists in encoding RDF graphs as transition systems and queries as mu-calculus formulas and then reducing the containment problem to testing satisfiability in the logic. We establish complexity bounds and report experimental results.
Query containment, PSPARQL, Semantic web, RDF, Regular path queries
François Scharffe, Jérôme Euzenat, MeLinDa: an interlinking framework for the web of data, Research report 7691, INRIA, Grenoble (FR), 21p., July 2011
The web of data consists of data published on the web in such a way that they can be interpreted and connected together. It is thus critical to establish links between these data, both for the web of data and for the semantic web that it contributes to feed. We consider here the various techniques developed for that purpose and analyse their commonalities and differences. We propose a general framework and show how the diverse techniques fit into it. From this framework, we consider the relation between data interlinking and ontology matching: although they can be considered similar at a certain level (both relate formal entities), they serve different purposes, but would mutually benefit from collaborating. We thus present a scheme under which data linking tools can take advantage of ontology alignments.
Semantic web, Data interlinking, Instance matching, Ontology alignment, Web of data
Melisachew Wudage Chekol, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda, A benchmark for semantic web query containment, equivalence and satisfiability, Research report 8128, INRIA, Grenoble (FR), 10p., July 2012
The problem of SPARQL query containment has recently attracted a lot of attention due to its fundamental role in query optimization and information integration. New approaches to this problem have been put forth that can be implemented in practice. However, these approaches suffer from various limitations: coverage (size and type of queries), response time (how long it takes to determine containment), and the technique applied to encode the problem. In order to experimentally assess implementation limitations, we designed a benchmark suite offering different experimental settings depending on the type of queries, projection and reasoning (RDFS). We have applied this benchmark to three available systems using different techniques, highlighting the strengths and weaknesses of such systems.
Query containment, PSPARQL, Semantic web, RDF, Regular path queries
Faisal Alkhateeb, Jérôme Euzenat, Answering SPARQL queries modulo RDF Schema with paths, Research report 8394, INRIA Rhône-Alpes, Grenoble (FR), 46p., November 2013
SPARQL is the standard query language for RDF graphs. In its strict instantiation, it only offers querying according to the RDF semantics and would thus ignore the semantics of data expressed with respect to (RDF) schemas or (OWL) ontologies. Several extensions to SPARQL have been proposed to query RDF data modulo RDFS, i.e., interpreting the query with RDFS semantics and/or considering external ontologies. We introduce a general framework which allows for expressing query answering modulo a particular semantics in a homogeneous way. In this paper, we discuss extensions of SPARQL that use regular expressions to navigate RDF graphs and may be used to answer queries considering RDFS semantics. We also consider their embedding as extensions of SPARQL. These SPARQL extensions are interpreted within the proposed framework and their drawbacks are presented. In particular, we show that the PSPARQL query language, a strict extension of SPARQL offering transitive closure, allows for answering SPARQL queries modulo RDFS graphs with the same complexity as SPARQL, through a simple transformation of the queries. We also consider languages which, in addition to paths, provide constraints. In particular, we present and compare nSPARQL and our proposal CPSPARQL. We show that CPSPARQL is expressive enough to answer full SPARQL queries modulo RDFS. Finally, we compare the expressiveness and complexity of both nSPARQL and the corresponding fragment of CPSPARQL, which we call cpSPARQL. We show that both languages have the same complexity, though cpSPARQL, being a proper extension of SPARQL graph patterns, is more expressive than nSPARQL.
semantic web, query language, query modulo schema, RDF, RDF Schema, SPARQL, regular expression, Constrained regular expression, Path, PSPARQL, nSPARQL, CPSPARQL, cpSPARQL
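The flavour of the query transformation can be sketched with SPARQL 1.1 property paths standing in for PSPARQL's regular expressions (a hedged analogy, using the rdflib package; the data and query are ours):

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    ex:Car rdfs:subClassOf ex:Vehicle .
    ex:mycar a ex:Car .
    """, format="turtle")

    # Under strict RDF semantics, asking for instances of ex:Vehicle misses
    # ex:mycar; rewriting the query with a transitive-closure path finds it.
    rewritten = """
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?x WHERE { ?x rdf:type/rdfs:subClassOf* <http://example.org/Vehicle> . }
    """
    print([str(row.x) for row in g.query(rewritten)])
    # ['http://example.org/mycar']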
Jérôme Euzenat, The category of networks of ontologies, Research report 8652, INRIA, Grenoble (FR), 19p., December 2014
The semantic web has led to the deployment of ontologies on the web, connected through various relations and, in particular, alignments of their vocabularies. There exist several semantics for alignments, which makes interoperation between different interpretations of networks of ontologies difficult. Here we present an abstraction of these semantics which allows for defining the notions of closure and consistency for networks of ontologies independently from the precise semantics. We also show that networks of ontologies with specific notions of morphisms define categories of networks of ontologies.
Inconsistency, Distributed system semantics, Category, Pullback, Network of ontologies, Ontology alignment, Alignment semantics
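In notation assumed for this sketch (not quoted from the report), a network of ontologies and a morphism between networks may be rendered as:

    \[
    \langle \Omega, \Lambda \rangle \quad\text{with } \Omega \text{ a set of ontologies and }
    \Lambda(o,o') \text{ a set of alignments for each } o,o' \in \Omega,
    \]
    \[
    h : \langle \Omega, \Lambda \rangle \to \langle \Omega', \Lambda' \rangle
    \quad\text{such that}\quad
    A \in \Lambda(o,o'') \;\Rightarrow\; h(A) \in \Lambda'(h(o), h(o'')).
    \]

Identity and composition of such morphisms behave as expected, which is what makes networks of ontologies the objects of a category and licenses constructions such as pullbacks.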
Jérôme Euzenat, Stepwise functional refoundation of relational concept analysis, Research report 9518, INRIA, Grenoble (FR), 68p., October 2023
Relational concept analysis (RCA) is an extension of formal concept analysis for dealing with several related contexts simultaneously. It has been designed for learning description logic theories from data and has been used in various applications. A puzzling observation about RCA is that it returns a single family of concept lattices although, when the data feature circular dependencies, other solutions may be considered acceptable. The semantics of RCA, provided in an operational way, does not shed light on this issue. In this report, we define these acceptable solutions as those families of concept lattices which belong to the space determined by the initial contexts (well-formed), cannot scale new attributes (saturated), and refer only to concepts of the family (self-supported). We adopt a functional view on the RCA process by defining the space of well-formed solutions and two functions on that space: one expansive and the other contractive. We show that the acceptable solutions are the common fixed points of both functions. This is achieved step by step, starting from a minimal version of RCA that considers one single context defined on a space of contexts and a space of lattices. These spaces are then joined into a single space of context-lattice pairs, which is further extended to a space of indexed families of context-lattice pairs representing the objects manipulated by RCA. We show that RCA returns the least element of the set of acceptable solutions. In addition, it is possible to dually build an operation that generates its greatest element. The set of acceptable solutions is a complete sublattice of the interval between these two elements. Its structure, and how the defined functions traverse it, are studied in detail.
Formal Concept Analysis, Relational concept analysis, Fixed point, Fixed-point semantics, Circular dependency
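The fixed-point characterisation can be written compactly; the symbols (E for the expansive function, C for the contractive one, Acceptable for the solution set) are ours, introduced for illustration:

    \[
    \mathit{Acceptable} \;=\; \{\, F \mid E(F) = F \,\} \;\cap\; \{\, F \mid C(F) = F \,\},
    \]
    \[
    \mathrm{RCA} \;=\; \min \mathit{Acceptable},
    \qquad
    \mathit{Acceptable} \text{ a complete sublattice of }
    [\min \mathit{Acceptable}, \max \mathit{Acceptable}].
    \]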
Deliverables/Rapports de contrats
Jérôme Euzenat, Management of nonmonotonicity in knowledge base systems, Deliverable Z2.2/36-2, Laboratoire ARTEMIS, Grenoble (FR), 21p., November 1988
Jérôme Euzenat, Impact of nonmonotonicity on the management of objects on secondary storage, Deliverable Z2.2-3, Laboratoire ARTEMIS, Grenoble (FR), 37p., May 1989
After a review of the different ways to consider nonmonotonicity problems arising in knowledge bases as an extension of incompleteness problems in databases, this report exposes in detail the implementation of a TMS as a cache consistency maintenance system, as proposed in the previous report. The problems stemming from this implementation, namely integrity constraint satisfaction and the secondary storage strategies to consider, are discussed together with some solutions.
Jérôme Euzenat, Loïc Tricand de La Goute, Serveurs de connaissance et mémoire d'entreprise, Rapport d'activité final, INRIA Rhône-Alpes, Grenoble (FR), 10p., septembre 1997
Jérôme Euzenat, Loïc Tricand de La Goute, Serveurs de connaissance et mémoire d'entreprise, Rapport d'activité final, INRIA Rhône-Alpes, Grenoble (FR), 13p., septembre 1998
Farid Cerbah, Jérôme Euzenat, Intégration de connaissances modélisées et de connaissances textuelles: spécification d'un système d'aide à la pose de liens de traçabilité, Deliverable DGT 7672, Dassault aviation, Saint-Cloud (FR), 15p., avril 1999
Vincent Cligniez, Jérôme Euzenat, Yannick Manche, Raisonnement spatial pour l'intégration de modèles de simulation : Application aux avalanches, Rapport final, INRIA Rhône-Alpes/CEMAGREF Grenoble, Grenoble (FR), 23p., octobre 1999
Jérôme Euzenat, Intégration de connaissances modélisées et de connaissances textuelles : intégration objets-termes-textes via XML, Deliverable, Dassault aviation, Saint-Cloud (FR), 16p., septembre 1999
Jérôme Euzenat, Vers une plate-forme de diffusion de textes sur internet : étude préliminaire, Rapport de conseil, 63p., juin 2000
Jérôme Euzenat (ed), 1st international semantic web working symposium (SWWS-1), Deliverable 7.6, Ontoweb, 30p., September 2001
Jean-François Baget, Étienne Canaud, Jérôme Euzenat, Mohand Saïd-Hacid, Les langages du web sémantique, Rapport final, Action spécifique CNRS/STIC « Web sémantique », 2003
La manipulation des ressources du web par des machines requiert l'expression ou la description de ces ressources. Plusieurs langages sont donc définis à cet effet : ils doivent permettre d'exprimer données et métadonnées (RDF, Cartes Topiques), de décrire les services et leur fonctionnement (UDDI, WSDL, DAML-S, etc.) et de disposer d'un modèle abstrait de ce qui est décrit grâce à l'expression d'ontologies (RDFS, OWL). On présente ci-dessous l'état des travaux visant à doter le web sémantique de tels langages. On évoque aussi les questions importantes qui ne sont pas réglées à l'heure actuelle et qui méritent de plus amples travaux.
RDF, Cartes Topiques, RDFS, OWL, DAML+OIL, UDDI, WSDL, DAML-S, XL, XDD, Règles, Ontologies, Annotation, Sémantique, Inférence, Transformation, Robustesse
Jérôme Euzenat (ed), 1st International Semantic Web Conference (ISWC 2002), Deliverable 7.9, Ontoweb, 19p., January 2003
Jérôme Euzenat (ed), 2nd International Semantic Web Conference (ISWC 2003), Deliverable 7.11, Ontoweb, 21p., December 2003
Paolo Bouquet, Jérôme Euzenat, Enrico Franconi, Luciano Serafini, Giorgos Stamou, Sergio Tessaris, Specification of a common framework for characterizing alignment, Deliverable 2.2.1, Knowledge web, 21p., June 2004
Jérôme Euzenat, Thanh Le Bach, Jesús Barrasa, Paolo Bouquet, Jan De Bo, Rose Dieng-Kuntz, Marc Ehrig, Manfred Hauswirth, Mustafa Jarrar, Rubén Lara, Diana Maynard, Amedeo Napoli, Giorgos Stamou, Heiner Stuckenschmidt, Pavel Shvaiko, Sergio Tessaris, Sven Van Acker, Ilya Zaihrayeu, State of the art on ontology alignment, Deliverable 2.2.3, Knowledge web, 80p., June 2004
Jérôme Euzenat, Marc Ehrig, Raúl García Castro, Specification of a benchmarking methodology for alignment techniques, Deliverable 2.2.2, Knowledge web, 48p., December 2004
This document considers potential strategies for evaluating ontology alignment algorithms. It identifies various goals for such an evaluation. In the context of the Knowledge web network of excellence, the most important objective is the improvement of existing methods. We examine general evaluation strategies as well as efforts that have already been undertaken in the specific field of ontology alignment. We then put forward some methodological and practical guidelines for running such an evaluation.
Wolf Siberski, Maud Cahuzac, Maria Del Carmen Suárez Figueroa, Rafael Gonzales Cabrero, Jérôme Euzenat, Shishir Garg, Jens Hartmann, Alain Léger, Diana Maynard, Jeff Pan, Pavel Shvaiko, Farouk Toumani, Software framework requirements analysis, Deliverable 1.2.2, Knowledge web, 59p., December 2004
Anna Zhdanova, Matteo Bonifacio, Stamatia Dasiopoulou, Jérôme Euzenat, Rose Dieng-Kuntz, Loredana Laera, David Manzano-Macho, Diana Maynard, Diego Ponte, Valentina Tamma, Specification of knowledge acquisition and modeling of the process of the consensus, Deliverable 2.3.2, Knowledge web, 92p., December 2004
This deliverable provides a specification of knowledge acquisition and a model of the consensus process.
Jérôme Euzenat, Loredana Laera, Valentina Tamma, Alexandre Viollet, Negotiation/argumentation techniques among agents complying to different ontologies, Deliverable 2.3.7, Knowledge web, 43p., December 2005
This document presents solutions allowing agents that use different ontologies to negotiate the meaning of the terms they use. The described solutions are based on standard agent technologies as well as on alignment techniques developed within Knowledge web. They can also be applied to other interacting entities such as semantic web services.
Jérôme Euzenat, François Scharffe, Luciano Serafini, Specification of the delivery alignment format, Deliverable 2.2.6, Knowledge web, 46p., December 2005
This deliverable focuses on the definition of a delivery alignment format for tools producing alignments (mapping tools). It considers the many formats currently available for expressing alignments and evaluates them against criteria that such formats should satisfy. It then proposes some improvements in order to produce a format satisfying more needs.
Pascal Hitzler, Jérôme Euzenat, Markus Krötzsch, Luciano Serafini, Heiner Stuckenschmidt, Holger Wache, Antoine Zimmermann, Integrated view and comparison of alignment semantics, Deliverable 2.2.5, Knowledge web, 32p., December 2005
We take a general perspective on alignment in order to develop common theoretical foundations for the subject. The deliverable comprises a comparative study of different mapping languages by means of distributed first-order logic, and a study on category-theoretical modelling of alignment and merging by means of pushout-combinations.
Heiner Stuckenschmidt, Marc Ehrig, Jérôme Euzenat, Andreas Hess, Willem Robert van Hage, Wei Hu, Ningsheng Jian, Gong Chen, Yuzhong Qu, George Stoilos, Giorgos Stamou, Umberto Straccia, Vojtech Svátek, Raphaël Troncy, Petko Valtchev, Mikalai Yatskevich, Description of alignment implementation and benchmarking results, Deliverable 2.2.4, Knowledge web, 87p., December 2005
This deliverable presents the evaluation campaign carried out in 2005 and the improvements that participants in this and other campaigns have made to their systems. We draw lessons from this work and propose improvements for future campaigns.
Jérôme Euzenat, Marc Ehrig, Anja Jentzsch, Malgorzata Mochol, Pavel Shvaiko, Case-based recommendation of matching tools and techniques, Deliverable 1.2.2.2.1, Knowledge web, 78p., December 2006
Choosing a matching tool adapted to a particular application can be very difficult. This document analyses the choice criteria from the application viewpoint and their fulfilment by the candidate matching systems. Different methods (paper analysis, questionnaire, empirical evaluation and decision making techniques) are used for assessing them. We evaluate how these criteria can be combined and how they can help particular users decide for or against a given matching system.
Jérôme Euzenat, Antoine Zimmermann, Marta Sabou, Mathieu d'Aquin, Matching ontologies for context, Deliverable 3.3.1, NeOn, 42p., 2007
Jérôme Euzenat, François Scharffe, Antoine Zimmermann, Expressive alignment language and implementation, Deliverable 2.2.10, Knowledge web, 60p., 2007
This deliverable provides the description of an alignment language which is both expressive and independent of ontology languages. It defines the language through its abstract syntax and a semantics that depends on the ontology language semantics. It then describes two concrete syntaxes: an exchange syntax in RDF/XML and a surface syntax for human consumption. Finally, it presents the current implementation of this expressive language within the Alignment API, taking advantage of the OMWG implementation.
François Scharffe, Jérôme Euzenat, Chan Le Duc, Pavel Shvaiko, Analysis of knowledge transformation and merging techniques and implementations, Deliverable 2.2.7, Knowledge web, 50p., December 2007
Dealing with heterogeneity requires finding correspondences between ontologies and using these correspondences for performing some action such as merging ontologies, transforming ontologies, translating data, mediating queries and reasoning with aligned ontologies. This deliverable considers this problem through the introduction of an alignment life cycle which also identifies the need for manipulating, storing and sharing the alignments before processing them. In particular, we also consider support for run time and design time alignment processing.
ontology alignment, alignment life cycle, alignment edition, ontology merging, ontology transformation, data translation, query mediation, reasoning, alignment support
Pavel Shvaiko, Jérôme Euzenat, Heiner Stuckenschmidt, Malgorzata Mochol, Fausto Giunchiglia, Mikalai Yatskevich, Paolo Avesani, Willem Robert van Hage, Ondřej Sváb, Vojtech Svátek, Description of alignment evaluation and benchmarking results, Deliverable 2.2.9, Knowledge web, 69p., 2007
Jérôme Euzenat, Jérôme David, Chan Le Duc, Marko Grobelnik, Bostjan Pajntar, Dunja Mladenic, Integration of OntoLight with the Alignment server, Deliverable 3.3.3, NeOn, 25p., 2008
This deliverable describes the integration of the OntoLight matcher within the Alignment server and the NeOn toolkit. This integration uses a web service connection from the Alignment server to an OntoLight web service interface.
Chan Le Duc, Mathieu d'Aquin, Jesús Barrasa, Jérôme David, Jérôme Euzenat, Raul Palma, Rosario Plaza, Marta Sabou, Boris Villazón-Terrazas, Matching ontologies for context: The NeOn Alignment plug-in, Deliverable 3.3.2, NeOn, 59p., 2008
This deliverable presents the software support provided by the NeOn toolkit for matching ontologies and, in particular, for recontextualising them. This support comes through the NeOn Alignment plug-in, which integrates the Alignment API and offers access to Alignment servers in the NeOn toolkit. We present the NeOn Alignment plug-in as well as several enhancements of the Alignment server: the integration of three matching methods developed within NeOn, i.e., Semantic Mapper, OLA and Scarlet, as well as the connection of Alignment servers with Oyster.
Jérôme Euzenat, Carlo Allocca, Jérôme David, Mathieu d'Aquin, Chan Le Duc, Ondřej Sváb-Zamazal, Ontology distances for contextualisation, Deliverable 3.3.4, NeOn, 50p., 2009
Distances between ontologies are useful for searching, matching or visualising ontologies. We study the various distances that can be defined across ontologies and provide them in a NeOn toolkit plug-in, OntoSim, which is a library of distances that can be used for recontextualising.
Cássia Trojahn dos Santos, Jérôme Euzenat, Christian Meilicke, Heiner Stuckenschmidt, Evaluation design and collection of test data for matching tools, Deliverable 12.1, SEALS, 68p., November 2009
This deliverable presents a systematic procedure for evaluating ontology matching systems and algorithms in the context of the SEALS project. It describes the criteria and metrics on which the evaluations will be carried out and the characteristics of the test data to be used, as well as the evaluation target, which includes the systems generating the alignments for evaluation.
ontology matching, ontology alignment, evaluation, benchmarks, efficiency measure
Patrick Hoffmann, Mathieu d'Aquin, Jérôme Euzenat, Chan Le Duc, Marta Sabou, François Scharffe, Context-based matching revisited, Deliverable 3.3.5, NeOn, 39p., 2010
Matching ontologies can be achieved by first recontextualising ontologies and then using this context information in order to deduce the relations between ontology entities. In Deliverable 3.3.1, we introduced the Scarlet system which uses ontologies on the web as context for matching ontologies. In this deliverable, we push this further by systematising the parameterisation of Scarlet. We develop a framework for expressing context-based matching parameters and implement most of them within Scarlet. This allows for evaluating the impact of each of these parameters on the actual results of context-based matching.
Christian Meilicke, Cássia Trojahn dos Santos, Jérôme Euzenat, Services for the automatic evaluation of matching tools, Deliverable 12.2, SEALS, 35p., 2010
In this deliverable we describe a SEALS evaluation service for ontology matching that is based on a web service interface to be implemented by the tool vendor. Following this approach, we can offer an evaluation service before many components of the SEALS platform have been finished. We describe both the system architecture of the evaluation service from a general point of view and the specific components and their relation to the modules of the SEALS platform.
ontology matching, ontology alignment, evaluation, benchmarks
Cássia Trojahn dos Santos, Christian Meilicke, Jérôme Euzenat, Ondřej Sváb-Zamazal, Results of the first evaluation of matching tools, Deliverable 12.3, SEALS, 36p., November 2010
This deliverable reports the results of the first SEALS evaluation campaign, which has been carried out in coordination with the OAEI 2010 campaign. A subset of the OAEI tracks has been included in a new modality, the SEALS modality. From the participants' point of view, the main innovation is the use of a web-based interface for launching evaluations. Out of 15 systems over all tracks, 13 have participated in some of the three SEALS tracks. We report the preliminary results of these systems for each SEALS track and discuss the main lessons learned from the use of the new technology, for both participants and organizers of the OAEI.
ontology matching, ontology alignment, evaluation, benchmarks
Jérôme Euzenat, Nathalie Abadie, Bénédicte Bucher, Zhengjie Fan, Houda Khrouf, Michael Luger, François Scharffe, Raphaël Troncy, Dataset interlinking module, Deliverable 4.2, Datalift, 32p., 2011
This report presents the first version of the interlinking module for the Datalift platform as well as strategies for future developments.
data interlinking, linked data, instance matching
Cássia Trojahn dos Santos, Christian Meilicke, Jérôme Euzenat, Iterative implementation of services for the automatic evaluation of matching tools, Deliverable 12.5, SEALS, 21p., 2011
The implementation of the automatic services for evaluating matching tools follows an iterative model. The aim is to provide a way of continuously analysing and improving these services. In this deliverable, we report on the first iteration of this process, i.e., the current implementation status of the services. In this first iteration, we have extended our previous implementation in order to migrate our own services to the SEALS components, which have been finished since the end of the first evaluation campaign.
ontology matching, ontology alignment, evaluation, benchmarks, efficiency measure
José Luis Aguirre, Christian Meilicke, Jérôme Euzenat, Iterative implementation of services for the automatic evaluation of matching tools (v2), Deliverable 12.5v2, SEALS, 34p., 2012
This deliverable reports on the current status of the service implementation for the automatic evaluation of matching tools, and on the final status of those services. These services have been used in the third SEALS evaluation of matching systems, held in Spring 2012 in coordination with the OAEI 2011.5 campaign. We worked mainly on modifying the WP12 BPEL work-flow to incorporate the new features introduced in the RES 1.2 version; testing the modified work-flows on a local installation and on the SEALS Platform; writing transformations of result data to comply with the new SEALS ontologies specifications; and, finally, extending the SEALS client for ontology matching evaluation to better support the automation of WP12 evaluation campaigns and to advance the integration with the SEALS repositories. We report the results obtained while accomplishing these tasks.
ontology matching, ontology alignment, evaluation, benchmarks, efficiency measure
Jérôme David, Jérôme Euzenat, Maria Roşoiu, Mobile API for linked data, Deliverable 6.3, Datalift, 19p., 2012
This report presents a mobile API for manipulating linked data under the Android platform.
mobile, API, linked data, content provider
Christian Meilicke, José Luis Aguirre, Jérôme Euzenat, Ondřej Sváb-Zamazal, Ernesto Jiménez-Ruiz, Ian Horrocks, Cássia Trojahn dos Santos, Results of the second evaluation of matching tools, Deliverable 12.6, SEALS, 30p., 2012
This deliverable reports on the results of the second SEALS evaluation campaign (for WP12, the third evaluation campaign), which has been carried out in coordination with the OAEI 2011.5 campaign. As opposed to OAEI 2010 and 2011, the full set of OAEI tracks has been executed with the help of SEALS technology. 19 systems have participated and five data sets have been used. Two of these data sets are new and have not been used in previous OAEI campaigns. In this deliverable we report on the data sets used in the campaign and on its execution, and we present and discuss the evaluation results.
ontology matching, ontology alignment, evaluation, benchmarks
Zhengjie Fan, Thin Dong Ngoc Nguyen, Jérôme Euzenat, Fayçal Hamdi, François Scharffe, Dataset interlinking module, Deliverable 4.2, Datalift, 34p., 2013
This report presents the second version of the interlinking module for the Datalift platform as well as strategies for future developments.
data interlinking, linked data, instance matching
Luz Maria Priego, Jérôme Euzenat, Raúl García Castro, María Poveda Villalón, Filip Radulovic, Mathias Weise, Strategy for Energy Management System Interoperability, Deliverable 2.1, Ready4SmartCities, 25p., December 2013
The goal of the Ready4SmartCities project is to support energy data interoperability in the context of smart cities. It keeps a precise focus on building and urban data. Work package 2 is more specifically concerned with identifying the knowledge and data resources, available or needed, that support energy management system interoperability. This deliverable defines the strategy to be used in WP2 for achieving its goal. It is made of two parts: the identification of the domains and stakeholders specific to the WP2 activity, and the methodology used in WP2 and WP3.
Strahil Birov, Simon Robinson, María Poveda Villalón, Mari Carmen Suárez-Figueroa, Raúl García Castro, Jérôme Euzenat, Luz Maria Priego, Bruno Fies, Andrea Cavallaro, Jan Peters-Anders, Thanasis Tryferidis, Kleopatra Zoi Tsagkari, Ontologies and datasets for energy measurement and validation interoperability, Deliverable 3.2, Ready4SmartCities, 72p., September 2014
Andrea Cavallaro, Federico Di Gennaro, Jérôme Euzenat, Jan Peters-Anders, Anna Osello, Vision of energy systems for smart cities, Deliverable 5.2, Ready4SmartCities, 35p., November 2014
Raúl García Castro, María Poveda Villalón, Filip Radulovic, Asunción Gómez Pérez, Jérôme Euzenat, Luz Maria Priego, Georg Vogt, Simon Robinson, Strahil Birov, Bruno Fies, Jan Peters-Anders, Strategy for energy measurement and interoperability, Deliverable 3.1, Ready4SmartCities, 28p., January 2014
Mari Sepponen, Matti Hannus, Kalevi Piira, Andrea Cavallaro, Raúl García Castro, Bruno Fies, Thanasis Tryferidis, Kleopatra Zoi Tsagkari, Jérôme Euzenat, Florian Judex, Daniele Basciotti, Charlotte Marguerite, Ralf-Roman Schmidt, Strahil Birov, Simon Robinson, Georg Vogt, Draft of innovation and research roadmap, Deliverable 5.3, Ready4SmartCities, 47p., November 2014
Mathias Weise, María Poveda Villalón, Mari Carmen Suárez-Figueroa, Raúl García Castro, Jérôme Euzenat, Luz Maria Priego, Bruno Fies, Andrea Cavallaro, Jan Peters-Anders, Kleopatra Zoi Tsagkari, Ontologies and datasets for energy management system interoperability, Deliverable 2.2, Ready4SmartCities, 72p., October 2014
Strahil Birov, Simon Robinson, María Poveda Villalón, Mari Carmen Suárez-Figueroa, Raúl García Castro, Jérôme Euzenat, Bruno Fies, Andrea Cavallaro, Jan Peters-Anders, Thanasis Tryferidis, Kleopatra Zoi Tsagkari, Ontologies and datasets for energy measurement and validation interoperability, Deliverable 3.3, Ready4SmartCities, 135p., September 2015
Jérôme David, Jérôme Euzenat, Manuel Atencia, Language-independent link key-based data interlinking, Deliverable 4.1, Lindicle, 21p., March 2015
Links are important for the publication of RDF data on the web. Yet, establishing links between data sets is not an easy task. We develop an approach for that purpose which extracts weak link keys. Link keys extend the notion of a key to the case of different data sets. They are made of sets of pairs of properties belonging to two different classes. A weak link key holds between two classes if any resources having common values for all of these properties are the same resources. An algorithm is proposed to generate a small set of candidate link keys. Depending on whether some of the links, valid or invalid, are known, we define supervised and unsupervised measures for selecting the appropriate link keys. The supervised measures approximate precision and recall, while the unsupervised measures are the ratio of pairs of entities a link key covers (coverage) and the ratio of entities from the same data set it identifies (discrimination). We have experimented with these techniques on two data sets, showing the accuracy and robustness of both approaches.
data interlinking, linked data, link key, candidate link key, coverage, dissimilarity
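The two unsupervised measures can be rendered in a few lines of Python; the formulas are reconstructed from their informal description above, so they should be read as an approximation rather than the report's exact definitions (datasets are dicts from entity id to property values):

    def links_of(link_key, ds1, ds2):
        """Pairs of entities agreeing on every property pair of the candidate."""
        return {(i, j) for i, r1 in ds1.items() for j, r2 in ds2.items()
                if all(r1.get(p) is not None and r1.get(p) == r2.get(q)
                       for p, q in link_key)}

    def coverage(link_key, ds1, ds2):
        """Ratio of entities covered by at least one generated link."""
        links = links_of(link_key, ds1, ds2)
        covered = {i for i, _ in links} | {j for _, j in links}
        return len(covered) / (len(ds1) + len(ds2))

    def discrimination(link_key, ds1, ds2):
        """1.0 when no entity of the first dataset is linked to several entities."""
        links = links_of(link_key, ds1, ds2)
        return len({i for i, _ in links}) / len(links) if links else 0.0

    ds1 = {"a": {"nom": "Ada"}, "b": {"nom": "Alan"}}
    ds2 = {"x": {"name": "Ada"}, "y": {"name": "Grace"}}
    print(coverage([("nom", "name")], ds1, ds2),
          discrimination([("nom", "name")], ds1, ds2))  # 0.5 1.0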
Jérôme Euzenat, Jérôme David, Angela Locoro, Armen Inants, Context-based ontology matching and data interlinking, Deliverable 3.1, Lindicle, 21p., July 2015
Context-based matching finds correspondences between entities from two ontologies by relating them to other resources. A general view of context-based matching is designed by analysing existing matchers of this kind. This view is instantiated in a path-driven approach that (a) anchors the ontologies to external ontologies, (b) finds sequences of entities (paths) relating the entities to match within and across these resources, and (c) uses algebras of relations for combining the relations obtained along these paths. Parameters governing such a system are identified and made explicit. We discuss the extension of this approach to data interlinking and its benefit to cross-lingual data interlinking. First, this extension would require a hybrid algebra of relations that combines relations between individuals and classes. However, such an algebra may not be particularly useful in practice, as only in a few restricted cases could it conclude that two individuals are the same. But it can be used for finding mistakes in link sets.
Context-based data interlinking, Multilingual data interlinking, Context-based ontology matching, Algebras of relations, Semantic web
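The relation-composition step along a path can be sketched with a toy algebra of relations; the composition table is a simplified fragment assumed for illustration ("<=" read as 'is subsumed by'):

    # Simplified fragment of an algebra of relations between matched entities.
    COMPOSE = {
        ("<=", "<="): "<=",
        ("<=", "=="): "<=",
        ("==", "<="): "<=",
        ("==", "=="): "==",
    }

    def relation_along(path):
        """Fold the composition table over the relations labelling a path."""
        rel = "=="
        for r in path:
            if (rel, r) not in COMPOSE:
                return "?"  # indeterminate in this simplified algebra
            rel = COMPOSE[(rel, r)]
        return rel

    # Anchoring x == x', then x' <= y', then y' == y yields x <= y.
    print(relation_along(["==", "<=", "=="]))  # '<='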
Mari Hukkalainen, Matti Hannus, Kalevi Piira, Elina Grahn, Ha Hoang, Andrea Cavallaro, Raúl García Castro, Bruno Fies, Thanasis Tryferidis, Kleopatra Zoi Tsagkari, Jérôme Euzenat, Florian Judex, Daniele Basciotti, Charlotte Marguerite, Ralf-Roman Schmidt, Strahil Birov, Simon Robinson, Georg Vogt, Innovation and research roadmap, Deliverable 5.6, Ready4SmartCities, 63p., September 2015
Tatiana Lesnikova, Jérôme David, Jérôme Euzenat, Algorithms for cross-lingual data interlinking, Deliverable 4.2, Lindicle, 31p., June 2015
Linked data technologies make it possible to publish and link structured data on the Web. Although RDF is not about text, many RDF data providers publish their data in their own language. Cross-lingual interlinking consists of discovering links between identical resources across data sets in different languages. In this report, we present a general framework for interlinking resources in different languages, based on associating a specific representation to each resource and computing a similarity between these representations. We describe and evaluate three methods using this approach: the first two methods are based on gathering virtual documents and translating them, while the last one represents them as bags of identifiers from a multilingual resource (BabelNet).
data interlinking, cross-lingual link discovery, owl:sameAs
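The common core of the three methods reduces to representing each resource and comparing representations; a minimal Python sketch, where bags of words stand in for translated virtual documents or BabelNet identifier bags, and the 0.5 threshold is an arbitrary assumption:

    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        """Cosine similarity between two bag-of-tokens representations."""
        dot = sum(a[t] * b[t] for t in a)
        norms = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norms if norms else 0.0

    # Hypothetical pre-processing: the French virtual document is already translated.
    doc_fr = Counter("paris capital of france city of light".split())
    doc_en = Counter("paris capital city of france".split())
    if cosine(doc_fr, doc_en) > 0.5:
        print("emit an owl:sameAs link between the two resources")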
Jan Peters-Anders, Mari Hukkalainen, Bruno Fies, Strahil Birov, Mathias Weise, Andrea Cavallaro, Jérôme Euzenat, Thanasis Tryferidis, Community description, Deliverable 1.4, Ready4SmartCities, 60p., August 2015
Mathias Weise, María Poveda Villalón, Raúl García Castro, Jérôme Euzenat, Luz Maria Priego, Bruno Fies, Andrea Cavallaro, Jan Peters-Anders, Kleopatra Zoi Tsagkari, Ontologies and datasets for energy management system interoperability, Deliverable 2.3, Ready4SmartCities, 149p., 2015
Adam Sanchez, Tatiana Lesnikova, Jérôme David, Jérôme Euzenat, Instance-level matching, Deliverable 3.2, Lindicle, 20p., September 2016
This paper precisely describes an ontology matching technique based on the extensional definition of a class as a set of instances. It first provides a general characterisation of such techniques and, in particular, of the need to rely on links across data sets in order to compare instances. We then detail the implication intensity measure that has been chosen. The resulting algorithm is implemented and evaluated on XLore, DBPedia, LinkedGeoData and Geospecies.
Instance-based matching, Ontology alignments
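The report's measure is the implication intensity; as a hedged stand-in, the sketch below shows the general shape of such extensional techniques: comparing two classes through their instance sets, using an existing link set to identify instances across datasets (a Jaccard-style overlap replaces the actual measure):

    def extensional_overlap(ext1, ext2, links):
        """Overlap of two class extensions modulo a set of sameAs links."""
        mapped = {j for i, j in links if i in ext1}  # ext1 carried into dataset 2
        inter = len(mapped & ext2)
        union = len(mapped | ext2)
        return inter / union if union else 0.0

    links = {("d1:i1", "d2:j1"), ("d1:i2", "d2:j2")}
    print(extensional_overlap({"d1:i1", "d1:i2"}, {"d2:j1", "d2:j3"}, links))
    # 0.333...: one shared instance out of three distinct ones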
Manuel Atencia, Jérôme Euzenat, Khadija Jradeh, Chan Le Duc, Tableau methods for reasoning with link keys, Deliverable 2.1, ELKER, 32p., 2019
Data interlinking is a critical task for widening and enhancing linked open data. One way to tackle data interlinking is to use link keys, which generalise keys to the case of two RDF datasets described using different ontologies. Link keys specify pairs of properties to compare for finding same-as links between instances of two classes of two different datasets. Hence, they can be used for finding links. Link keys can also be considered as logical axioms, just like keys, ontologies and ontology alignments. We introduce the logic ALC+LK, extending the description logic ALC with link keys. It may be used to reason with and infer entailed link keys that may be more useful for a particular data interlinking task. We show that link key entailment can be reduced to consistency checking without introducing the negation of link keys. For deciding the consistency of an ALC+LK ontology, we introduce a new tableau-based algorithm. Contrary to the classical ones, the completion rules concerning link keys apply to pairs of individuals that are not directly related. We show that this algorithm is sound, complete and always terminates.
link keys, reasoning, tableau method
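In notation assumed for this sketch, a link key axiom and the (weak) reading of its semantics look like:

    \[
    \{\langle p_1, q_1\rangle, \ldots, \langle p_n, q_n\rangle\}\ \mathrm{linkkey}\ \langle C, D\rangle
    \]
    \[
    \forall x \in C^{\mathcal I},\; \forall y \in D^{\mathcal I}:\quad
    \Big( \bigwedge_{k=1}^{n} p_k^{\mathcal I}(x) \cap q_k^{\mathcal I}(y) \neq \emptyset \Big)
    \;\Rightarrow\; x = y,
    \]

where $p^{\mathcal I}(x)$ denotes the set of $p$-successors of $x$. Entailment of such an axiom then amounts to checking that no consistent model makes two distinct individuals satisfy the left-hand side, which is the consistency check performed by the tableau algorithm.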
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Candidate link key extraction with formal concept analysis, Deliverable 1.1, ELKER, 29p., October 2019
A link key extraction procedure using formal concept analysis is described. It is shown to extract all link key candidates.
Formal Concept Analysis, linked data, link key, data interlinking, Resource Description Framework
Manuel Atencia, Jérôme David, Jérôme Euzenat, Amedeo Napoli, Jérémy Vizzini, Relational concept analysis for circular link key extraction, Deliverable 1.2, ELKER, 57p., December 2021
A link key extraction procedure for the case of circular dependencies is presented. It uses relational concept analysis and extends the procedure of Deliverable 1.1. This leads us to investigate more closely the semantics of relational concept analysis, which is given in terms of fixed points. Extracting all fixed points may offer more link key candidates to consider.
Formal Concept Analysis, Relational Concept Analysis, linked data, link key, data interlinking, Resource Description Framework
Master theses/Mémoires de DEA
Jérôme Euzenat, Un système de maintenance de la vérité pour une représentation de connaissance centrée-objet, Mémoire de DEA (master), INPG, Grenoble (FR), juin 1987
L'utilisation d'objets pour la représentation des connaissances est de plus en plus répandue. C'est dire l'importance que prend la conception de bases de connaissance centrées-objet qui peuvent être manipulées de manière non monotone par divers systèmes informatiques, tant pour y opérer des modifications que des consultations.
On se propose d'étudier des mécanismes permettant à la fois plus d'efficacité et de cohérence dans l'utilisation d'une représentation centrée-objet. Le mécanisme de caching introduit des problèmes liés à l'utilisation non monotone de la base. Dans le but de pallier ces problèmes, les différents systèmes de maintenance de la vérité existants sont étudiés.
Un cadre général permettant la coopération des mécanismes de "caching" et de maintenance de la vérité au sein d'une représentation centrée-objet est proposé. On présente ensuite une réalisation effective des propositions sur le système de gestion de bases de connaissance centrées-objet Shirka.
représentation centrée-objet, maintenance de la vérité, TMS, raisonnement non monotone, caching
Non reviewed articles/Articles non rapportés
Jérôme Euzenat, Christian Bessière (éds), Dossier 'Raisonnement temporel et spatial', Bulletin de l'AFIA 29:26-51, 1997
Jérôme Euzenat, Édition coopérative de bases de connaissance sur le worldwide web, Bulletin de l'AFIA 34:6-9, 1998
Dans ces quelques lignes, on s'intéresse aux problèmes posés par l'édition de bases de connaissance sur le World-wide web (web dans la suite) et l'on présente certaines solutions retenues. On considérera indifféremment la notion de base de connaissance et celle d'ontologie. Un encart présente les différents systèmes accessibles au public. Les problèmes d'indexation de sites ou d'aide à la recherche au moyen de bases de connaissance ne sont pas traités ici.
Jérôme Euzenat, Contribution au débat 'évaluation scientifique: peut-on mieux faire en IA?', Bulletin de l'AFIA 37:21-22, 1999
Jérôme Euzenat, Research challenges and perspectives of the semantic web, IEEE Intelligent systems 17(5):86-88, 2002
Accessing documents and services on today's Web requires human intelligence. The interface to these documents and services is the Web page, written in natural language, which humans must understand and act upon. The paper discusses the Semantic Web which will augment the current Web with formalized knowledge and data that computers can process. In the future, some services will mix human-readable and structured data so that both humans and computers can use them. Others will support formalized knowledge that only machines will use.
Jérôme Euzenat, Les avancées du web sémantique (Qu'est-ce que le web sémantique?), Archimag(165):22-26, 2003
Pavel Shvaiko, Jérôme Euzenat, Ontology Matching, D-Lib magazine 11(12), 2005
Jérôme Euzenat, L'intelligence du web: l'information utile à portée de lien, Bulletin de l'AFIA 72:13-16, 2011
Motion pictures/Oeuvre audio-visuelle
Sandrine Dewez (réalisateur), Jérôme Euzenat (scénariste), Jérôme Euzenat, Corinne Lachaize (acteurs), Jérôme Euzenat (voix), Construction collaborative de bases de connaissance consensuelle, INRIA, Rocquencourt (FR), 4:20mn, 1998
Cette vidéo présente l'infrastructure CO4 qui permet à plusieurs intervenants de construire, à distance, une base de connaissance partagée. CO4 utilise un protocole de soumission de connaissance semblable à celui de l'évaluation par les pairs. Un intervenant soumet une proposition à la base partagée. Elle est transmise aux membres du groupe qui peuvent la tester, la modifier, l'accepter ou la rejeter. Un nouvel intervenant soumet sa candidature, le protocole gère alors son intégration au groupe de travail.
base de connaissances, réseau, travail coopératif
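The submission protocol can be caricatured in a few lines of Python; the class names, vote vocabulary and unanimity rule are assumptions of this sketch, not a description of CO4 itself:

    class Member:
        def __init__(self, name, accepts):
            self.name, self.accepts = name, accepts
        def evaluate(self, proposal):
            """Each member may test the proposal and vote on it."""
            return "accept" if self.accepts(proposal) else "reject"

    class SharedKB:
        def __init__(self, members):
            self.members, self.accepted = list(members), []
        def submit(self, proposal):
            """Broadcast the proposal; integrate it only on unanimous acceptance."""
            votes = [m.evaluate(proposal) for m in self.members]
            if all(v == "accept" for v in votes):
                self.accepted.append(proposal)
            return votes

    kb = SharedKB([Member("a", lambda p: True),
                   Member("b", lambda p: "chat" in p)])
    print(kb.submit("le chat est un mammifère"), kb.accepted)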
Booklets and manuals/Manuels
Jérôme Euzenat, Le module de l'incertain de Smeci, Manuel de référence, Ilog, Gentilly (FR), 94p., juillet 1992
Projet Sherpa, Tropes 1.0, Reference manual, INRIA Rhône-Alpes, Grenoble (FR), 85p., June 1995
Projet Sherpa, Co4 1.0, Reference manual, INRIA Rhône-Alpes, Grenoble (FR), 35p., July 1998
Miscellaneous/Divers documents publics
Jérôme Euzenat, Maintien des croyances et bases de connaissance, application aux bases de connaissance centrées-objet, Laboratoire ARTEMIS, Grenoble (FR), 9p., mars 1988
Séminaire 'bases de données et de connaissances'
Après avoir défini le terme de base de connaissance, utilisé à la fois par les champs de recherche de l'intelligence artificielle et des bases de données, ce papier présente des réflexions et des travaux sur le thème de l'intégration d'un système de maintien des croyances dans une base de connaissance. Dans la perspective de grandes bases de connaissance - à la fois par la taille et par la durée de vie - la nécessité d'un mécanisme capable de garantir la validité du contenu de la base par rapport à un ensemble d'inférences semble inéluctable. Les systèmes de maintien des croyances développés pour les systèmes à base de règles sont candidats pour assurer cette tâche. Leur adaptation aux bases de connaissance, et en particulier au modèle centré-objet, est présentée au travers du système de représentation de connaissance Shirka.
Jérôme Euzenat (ed), Semantic web special issue, 36p., October 2002
ERCIM News n°51
Jérôme Euzenat, Personal information management and the semantic web, 3p., October 2002
Text for the SWAD-Europe workshop on semantic web calendaring
Jérôme Euzenat, Pas d'objets à sens unique!, 1p., mars 2005
Tract distributed at the 11th LMO conference, Bern (CH)
Line van den Berg, Jérôme Euzenat, The small Class? gamebook, Pedagogical material, 2022
Class? is an enjoyable card game aiming at grouping colourful cards into meaningful classes. It illustrates facets of reasoning with classifications. In order to introduce Class? progressively, this small gamebook provides a sequence of games leading up to the Class? game itself and beyond. The games are presented in increasing order of difficulty, so that each game benefits from the mastery of the previous ones.
Class?, Classification, Game
Jérôme Euzenat, Un nouvel algorithme de maintenance de la vérité, Rapport interne, Cognitech, Paris (FR), 18p., mai 1988
Ce rapport présente d'abord le fonctionnement général des systèmes de maintenance de la vérité. À partir de l'analyse détaillée des algorithmes proposés antérieurement, un nouvel algorithme, reposant essentiellement sur la notion de nœuds influant sur la validité d'une composante fortement connexe du graphe de dépendances, est décrit. Une critique de cet algorithme est finalement présentée.
Systèmes de maintenance de la vérité, TMS, graphe de dépendances, composante fortement connexe, rétrogression dirigée par les dépendances
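Strongly connected components are the pivot of this algorithm; the sketch below only computes them (Tarjan's algorithm over a dependency graph) and leaves out the belief-labelling step, so it illustrates the underlying structure rather than the report's full algorithm:

    def tarjan_scc(graph):
        """Strongly connected components of a graph given as {node: [successors]}."""
        index, low, stack, on_stack, sccs, counter = {}, {}, [], set(), [], [0]
        def visit(v):
            index[v] = low[v] = counter[0]; counter[0] += 1
            stack.append(v); on_stack.add(v)
            for w in graph.get(v, ()):
                if w not in index:
                    visit(w); low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:  # v is the root of a component
                comp = []
                while True:
                    w = stack.pop(); on_stack.discard(w); comp.append(w)
                    if w == v: break
                sccs.append(comp)
        for v in graph:
            if v not in index:
                visit(v)
        return sccs

    # Two mutually justifying nodes form one component and must be
    # (in)validated together rather than one by one.
    print(tarjan_scc({"a": ["b"], "b": ["a"], "c": ["a"]}))  # [['b', 'a'], ['c']]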
Jérôme Euzenat, Iroise + TMS, utilisation, Rapport interne, Cognitech, Paris (FR), 10p., mai 1988
Jérôme Euzenat, Iroise + TMS, implémentation, Rapport interne, Cognitech, Paris (FR), 15p., mai 1988
Jérôme Euzenat, Un module TMS, version C0, Rapport interne, Cognitech, Paris (FR), 25p., 1988
On présente ici un module de l'AGC qui est un système de maintenance de la vérité conçu pour être interfaçable avec différents mécanismes d'inférence. Après une brève présentation des systèmes de maintenance de la vérité, celui qui est proposé est approfondi au travers d'un exemple, avant que ne soient abordés les problèmes d'interfaçage proprement dits. Le guide d'interfaçage décrit deux types de liaisons : une liaison de bas niveau et une liaison de haut niveau. En annexe figurent la liste des fichiers fournis avec le module ainsi que les fonctions qu'ils contiennent, puis un ensemble de tests permettant d'aborder les points cruciaux de l'interface.
Jérôme Euzenat, Rétrogresser c'est progresser, Rapport interne, Laboratoire ARTEMIS, Grenoble (FR), 20p., janvier 1989
Jérôme Euzenat, Connexion Kool/RMS, spécifications, Rapport interne Sachem JE004, CEDIAG/Bull, Louveciennes (FR), 22p., septembre 1989
Jérôme Euzenat, Un algorithme de maintenance de la vérité tirant parti des composantes fortement connexes, Rapport interne, Laboratoire ARTEMIS, Grenoble (FR), 16p., décembre 1989
Jérôme Euzenat, Cache consistency in large object knowledge bases, Internal report, Laboratoire ARTEMIS, Grenoble (FR), 35p., September 1990
Laurent Buisson, Jérôme Euzenat, A quantitative analysis of reasoning for RMSes, Internal report, Laboratoire ARTEMIS, Grenoble (FR), 18p., January 1991
For reasoning systems, it is sometimes useful to cache the inferred values. However, when the system works in a dynamic environment, cache coherence has to be maintained, and this can be achieved with the help of a reasoning maintenance system (RMS). The questions to be answered before implementing such a system for a particular application are: how useful is caching? Does the system need a dynamicity management system? Is an RMS suited (what will its overhead be)?
We provide an application-driven evaluation framework in order to answer these questions. The evaluation is not based on the intrinsic complexity of the RMS but on the actual reasoning work to be processed by the application. First, we express the action of caching and maintaining with two concepts: the backward and forward cone effects. Then we quantify the inference time for those systems and find the quantification of the cone effects in the formulas.
As a consequence, the decision to use caching and/or an RMS is expressed as a tradeoff between the advantages and disadvantages of both cone effects.
Reasoning maintenance systems, Inference caching, Spatial reasoning, Cone effect
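A hedged reading of the tradeoff, with symbols of our own choosing (B(q) the backward cone recomputed when query q is evaluated without caching, F(u) the forward cone to be maintained after update u):

    \[
    T_{\text{no cache}} \;\approx\; \sum_{q} |B(q)|\, c_{\text{inf}},
    \qquad
    T_{\text{cache}+\text{RMS}} \;\approx\; \sum_{q} c_{\text{read}} \;+\; \sum_{u} |F(u)|\, c_{\text{maint}},
    \]

so caching with an RMS pays off when queries dominate updates and backward cones are large relative to forward cones.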
Jérôme Euzenat, Martin Strecker, Forgetting abilities for space-bounded agents, Internal report, Laboratoire ARTEMIS, Grenoble (FR), 11p., August 1991
We propose a model of "agent" that has characteristics at the crossroads of several ongoing research tracks: self-rationality, autoepistemic reasoning, cooperative agents and resource-bounded reasoning. This model is distinctive in that available technologies enable its implementation and thus its experimentation. Although, in distributed artificial intelligence, the emphasis is on cooperation, we concentrate on belief management. We stress here the resource-bounded reasoning aspect of the work, but first describe the architecture of our agents. We then describe the kind of behavior we expect from forgetting and show that it is achievable in both the theoretical and practical frameworks.
Resource-bounded reasoning, Belief revision, Autonomous agents
Jérôme Euzenat, Jean-François Puget, Utiliser les dépendances lors du retour-arrière dans Pecos, Rapport interne, Ilog, Gentilly (FR), 27p., octobre 1992
Le modèle d'exploration d'un espace de recherche utilisé par Pecos est le retour-arrière chronologique. Il consiste, lorsque l'on a détecté une inconsistance (le domaine d'une variable est vide), à revenir au dernier point de choix pour explorer les autres alternatives. Ce modèle d'exploration ne conserve pas les véritables raisons de l'inconsistance. La question que l'on se pose est celle d'exploiter ces dépendances afin d'explorer un nombre minimum d'alternatives dans tout le graphe.
Jérôme Euzenat, Modular constraint satisfaction, Internal report, IRIMAG, Grenoble (FR), 11p., October 1992
Modular constraint satisfaction organizes a constraint satisfaction problem (CSP) into a hierarchically linked set of modules. Using a modular description of a CSP brings the advantages of classical modular development methodology, such as problem decomposition or incremental problem definition. A module can be seen as either a CSP or a constraint. Moreover, modular constraint satisfaction environments can be built on top of existing constraint satisfaction packages. Stating a CSP in terms of modules does not bring any computational advantage in itself, but can help state problems in a way that emphasizes the computational advantages of "tree clustered" CSPs. Downward and upward strategies are presented which allow the constraint solving process to take the hierarchical structure of modular CSPs into account. Moreover, modular CSP has been designed to implement dynamic CSP by grouping dynamic components into related clusters. This is shown through applications to configuration design and story understanding. Nevertheless, modular CSP is a first step toward generic modular CSP, enabling the development of hierarchies of components which share the same interface.
Constraint satisfaction, Constraint programming languages, Modules, Dynamic CSP, Tree clustering
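The "module as CSP or constraint" idea can be sketched with a brute-force projection; every class and function name here is illustrative:

    from itertools import product

    class Module:
        """A CSP that, seen from its parent, is a constraint on its interface."""
        def __init__(self, variables, domains, constraints, interface):
            self.vars, self.doms = variables, domains
            self.constraints, self.interface = constraints, interface

        def allowed_interface_tuples(self):
            """Project the module's solutions onto its interface variables."""
            tuples = set()
            for values in product(*(self.doms[v] for v in self.vars)):
                assignment = dict(zip(self.vars, values))
                if all(c(assignment) for c in self.constraints):
                    tuples.add(tuple(assignment[v] for v in self.interface))
            return tuples

    # Internal variable y stays hidden: the parent CSP sees the module as the
    # unary constraint x in {0, 1}.
    m = Module(["x", "y"], {"x": [0, 1, 2], "y": [0, 1]},
               [lambda a: a["x"] < 2, lambda a: a["y"] <= a["x"]], ["x"])
    print(m.allowed_interface_tuples())  # {(0,), (1,)}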
Cécile Capponi, Jérôme Euzenat, Jérôme Gensel, Objects, types and constraints as classification schemes, Internal report, INRIA Rhône-Alpes, Grenoble (FR), 20p., February 1994
The notion of classification scheme is a generic model that encompasses the kind of classification performed in many knowledge representation formalisms. Classification schemes abstract from the structure of individuals and consider only a sub-categorization relationship. The product of classification schemes preserves the status of classification scheme and provides various classification and categorization algorithms which rely on both the classification and the categorization defined in the members of the product. Object-based representation formalisms often use heterogeneous ways of representing knowledge. In the particular case of the TROPES system, knowledge is expressed by classes, types and constraints. We present here the way types and constraints are expressed in a type description module, which provides them with the simple structure of classification schemes. This mapping allows the integration into TROPES of new types and constraints together with their sub-typing relation. Afterwards, taxonomies of classes are themselves considered to be classification schemes which are products of more primitive ones. This information is then sufficient for classifying TROPES objects.
Class, object, type, constraint, classification scheme, sub-type inference
Jérôme Euzenat, Sur la sémantique des actes de langage artificiels (remarques préliminaires), Internal report, INRIA Rhône-Alpes, Grenoble (FR), 13p., novembre 1995
On tente naïvement de se poser quelques questions concernant la sémantique des langages "universels" d'expressions d'actes de langage, c'est-à-dire de langage destinés à assurer l'inter-opérabilité d'agents logiciels hétérogènes. L'un des problèmes soulevés par les tentatives de formalisation actuelles est leur présupposé sur les agents qui interagissent. Or, si l'on désire l'inter-opérabilité, il faut que les messages puissent être interprétés de manière satisfaisante par toutes sortes d'agents: des agents très intelligents et des agents simplets, des agents sincères et altruistes et des agents menteurs et cupides. Il n'est donc pas immédiat d'appliquer les formules qui fonctionnent bien pour l'analyse d'un dialogue, l'analyse d'un protocole ou l'analyse de la manière de dialoguer d'un sujet avec un autre à un langage "universel". Un début de proposition est fait au travers de la notion de protocole affiché.