Lexical Markup Framework

Language resource management – Lexical markup framework (LMF; ISO 24613:2008) is the International Organization for Standardization ISO/TC37 standard for natural language processing (NLP) and machine-readable dictionary (MRD) lexicons.[1] Its scope is the standardization of principles and methods relating to language resources in the contexts of multilingual communication and cultural diversity.

Objectives

The goals of LMF are to provide a common model for the creation and use of lexical resources, to manage the exchange of data between and among these resources, and to enable the merging of a large number of individual electronic resources to form extensive global electronic resources.

Individual instantiations of LMF can include monolingual, bilingual, or multilingual lexical resources. The same specifications are to be used for small and large lexicons, for simple and complex lexicons, and for both written and spoken lexical representations. The descriptions range from morphology, syntax, and computational semantics to computer-assisted translation. The covered languages are not restricted to European languages but include all natural languages, and the range of targeted NLP applications is not restricted. LMF is able to represent most lexicons, including the WordNet, EDR, and PAROLE lexicons.

History

In the past, lexicon standardization had been studied and developed in a series of projects such as GENELEX, EDR, EAGLES, MULTEXT, PAROLE, SIMPLE, and ISLE. Then the ISO/TC37 national delegations decided to address standards dedicated to NLP and lexicon representation. The work on LMF started in the summer of 2003 with a new work item proposal issued by the US delegation. In the fall of 2003, the French delegation issued a technical proposal for a data model dedicated to NLP lexicons. In early 2004, the ISO/TC37 committee decided to form a common ISO project with Nicoletta Calzolari (CNR-ILC, Italy) as convenor and Gil Francopoulo (Tagmatica, France) and Monte George (ANSI, USA) as editors. The first step in developing LMF was to design an overall framework based on the general features of existing lexicons and to develop a consistent terminology to describe the components of those lexicons. The next step was the actual design of a comprehensive model that best represented all of the lexicons in detail. A large panel of 60 experts contributed a wide range of requirements for LMF that covered many types of NLP lexicons. The editors of LMF worked closely with the panel of experts to identify the best solutions and reach a consensus on the design of LMF. Special attention was paid to morphology in order to provide powerful mechanisms for handling problems in several languages that were known to be difficult to handle. Thirteen versions were written, dispatched to the nationally nominated experts, commented on, and discussed during various ISO technical meetings. After five years of work, including numerous face-to-face meetings and e-mail exchanges, the editors arrived at a coherent UML model. In conclusion, LMF should be considered a synthesis of the state of the art in the NLP lexicon field.

Current stage

The ISO number is 24613. The LMF specification was officially published as an International Standard on 17 November 2008.

As one of the members of the ISO/TC37 family of standards

The ISO/TC37 standards are currently elaborated as high-level specifications and deal with word segmentation (ISO 24614), annotations (ISO 24611, a.k.a. MAF; ISO 24612, a.k.a. LAF; ISO 24615, a.k.a. SynAF; and ISO 24617-1, a.k.a. SemAF/Time), feature structures (ISO 24610), multimedia containers (ISO 24616, a.k.a. MLIF), and lexicons (ISO 24613). These standards are based on low-level specifications dedicated to constants, namely data categories (revision of ISO 12620), language codes (ISO 639), script codes (ISO 15924), country codes (ISO 3166), and Unicode (ISO 10646).

The two-level organization forms a coherent family of standards with the following simple common rules:

  • the high level specification provides structural elements that are adorned by the standardized constants;
  • the low level specifications provide standardized constants as metadata.

Key standards

Linguistic constants like /feminine/ or /transitive/ are not defined within LMF but are recorded in the Data Category Registry (DCR), which is maintained as a global resource by ISO/TC37 in compliance with ISO/IEC 11179-3:2003.[2] These constants are then used to adorn the high-level structural elements.

The LMF specification complies with the modeling principles of the Unified Modeling Language (UML) as defined by the Object Management Group (OMG). The structure is specified by means of UML class diagrams, and the examples are presented by means of UML instance (or object) diagrams.

An XML DTD is given in an annex of the LMF document.

Model structure

LMF is composed of the following components:

  • The core package, the structural skeleton that describes the basic hierarchy of information in a lexical entry.
  • Extensions of the core package, which are expressed in a framework that describes how the core components are reused in conjunction with the additional components required for a specific lexical resource.

The extensions are specifically dedicated to morphology, MRD, NLP syntax, NLP semantics, NLP multilingual notations, NLP morphological patterns, multiword expression patterns, and constraint expression patterns.

Example

In the following example, the lexical entry is associated with the lemma clergyman and two inflected forms, clergyman and clergymen. The language coding is set for the whole lexical resource, and the language value is set for the whole lexicon, as shown in the following UML instance diagram.

(UML instance diagram: LMFMorphoClergymanInflected)

The elements Lexical Resource, Global Information, Lexicon, Lexical Entry, Lemma, and Word Form define the structure of the lexicon. They are specified within the LMF document. In contrast, languageCoding, language, partOfSpeech, commonNoun, writtenForm, grammaticalNumber, singular, and plural are data categories that are taken from the Data Category Registry. These marks adorn the structure. The values ISO 639-3, clergyman, and clergymen are plain character strings. The value eng is taken from the list of languages defined by ISO 639-3.

With some additional information like dtdVersion and feat, the same data can be expressed by the following XML fragment:

<LexicalResource dtdVersion="15">
    <GlobalInformation>
        <feat att="languageCoding" val="ISO 639-3"/>
    </GlobalInformation>
    <Lexicon>
        <feat att="language" val="eng"/>
        <LexicalEntry>
            <feat att="partOfSpeech" val="commonNoun"/>
            <Lemma>
                <feat att="writtenForm" val="clergyman"/>
            </Lemma>
            <WordForm>
                 <feat att="writtenForm" val="clergyman"/>
                 <feat att="grammaticalNumber" val="singular"/>
            </WordForm>
            <WordForm>
                <feat att="writtenForm" val="clergymen"/>
                <feat att="grammaticalNumber" val="plural"/>
            </WordForm>
        </LexicalEntry>
    </Lexicon>
</LexicalResource>

This example is rather simple; LMF can represent much more complex linguistic descriptions, but then the XML tagging is correspondingly more complex.
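Because the fragment is plain XML, it can be read back with any standard XML parser. The following sketch (illustrative only; the helper name feats is invented here) uses Python's xml.etree.ElementTree to collect the lemma and each word form together with its grammatical number:

```python
import xml.etree.ElementTree as ET

# The LMF XML fragment from the example above.
lmf_fragment = """
<LexicalResource dtdVersion="15">
    <GlobalInformation>
        <feat att="languageCoding" val="ISO 639-3"/>
    </GlobalInformation>
    <Lexicon>
        <feat att="language" val="eng"/>
        <LexicalEntry>
            <feat att="partOfSpeech" val="commonNoun"/>
            <Lemma>
                <feat att="writtenForm" val="clergyman"/>
            </Lemma>
            <WordForm>
                <feat att="writtenForm" val="clergyman"/>
                <feat att="grammaticalNumber" val="singular"/>
            </WordForm>
            <WordForm>
                <feat att="writtenForm" val="clergymen"/>
                <feat att="grammaticalNumber" val="plural"/>
            </WordForm>
        </LexicalEntry>
    </Lexicon>
</LexicalResource>
"""

def feats(element):
    """Collect the att/val adornments of one structural element."""
    return {f.get("att"): f.get("val") for f in element.findall("feat")}

root = ET.fromstring(lmf_fragment)
for entry in root.iter("LexicalEntry"):
    lemma = feats(entry.find("Lemma"))["writtenForm"]
    forms = [(feats(wf)["writtenForm"], feats(wf)["grammaticalNumber"])
             for wf in entry.findall("WordForm")]
    print(lemma, forms)
```

Running this prints the lemma clergyman together with its singular and plural word forms.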

Selected publications about LMF

The first publication about the LMF specification as ratified by ISO (by 2015, this paper had become the ninth most cited paper among LREC conference papers):

  • Language Resources and Evaluation LREC-2006/Genoa: Gil Francopoulo, Monte George, Nicoletta Calzolari, Monica Monachini, Nuria Bel, Mandy Pet, Claudia Soria: Lexical Markup Framework (LMF) [3]

About semantic representation:

  • Gesellschaft für linguistische Datenverarbeitung GLDV-2007/Tübingen: Gil Francopoulo, Nuria Bel, Monte George, Nicoletta Calzolari, Monica Monachini, Mandy Pet, Claudia Soria: Lexical Markup Framework ISO standard for semantic information in NLP lexicons [4]

About African languages:

  • Traitement Automatique des langues naturelles, Marseille, 2014: Mouhamadou Khoule, Mouhamad Ndiankho Thiam, El Hadj Mamadou Nguer: Toward the establishment of a LMF-based Wolof language lexicon (Vers la mise en place d'un lexique basé sur LMF pour la langue wolof) [in French][5]

About Asian languages:

  • Lexicography, Journal of ASIALEX, Springer 2014: Gil Francopoulo, Chu-Ren Huang: Lexical Markup Framework: An ISO Standard for Electronic Lexicons and its Implications for Asian Languages. DOI 10.1007/s40607-014-0006-z

About European languages:

  • COLING 2010: Verena Henrich, Erhard Hinrichs: Standardizing Wordnets in the ISO Standard LMF: Wordnet-LMF for GermaNet [6]
  • EACL 2012: Judith Eckle-Kohler, Iryna Gurevych: Subcat-LMF: Fleshing out a standardized format for subcategorization frame interoperability [7]
  • EACL 2012: Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M Meyer, Christian Wirth: UBY - A Large-Scale Unified Lexical-Semantic Resource Based on LMF.[8]

About Semitic languages:

  • Journal of Natural Language Engineering, Cambridge University Press (to appear in Spring 2015): Aida Khemakhem, Bilel Gargouri, Abdelmajid Ben Hamadou, Gil Francopoulo: ISO Standard Modeling of a large Arabic Dictionary.
  • Proceedings of the seventh Global Wordnet Conference 2014: Nadia B M Karmani, Hsan Soussou, Adel M Alimi: Building a standardized Wordnet in the ISO LMF for aeb language.[9]
  • Proceedings of the workshop: HLT & NLP within Arabic world, LREC 2008: Noureddine Loukil, Kais Haddar, Abdelmajid Ben Hamadou: Towards a syntactic lexicon of Arabic Verbs.[10]
  • Traitement Automatique des Langues Naturelles, Toulouse (in French) 2007: Khemakhem A, Gargouri B, Abdelwahed A, Francopoulo G: Modélisation des paradigmes de flexion des verbes arabes selon la norme LMF-ISO 24613.[11]

Dedicated book

A book published in 2013, LMF Lexical Markup Framework,[12] is entirely dedicated to LMF. The first chapter deals with the history of lexicon models, the second chapter is a formal presentation of the data model, and the third deals with the relation to the data categories of the ISO DCR. The other 14 chapters deal with a lexicon or a system, in either the civil or military domain, within scientific research labs or industrial applications. These include Wordnet-LMF, Prolmf, DUELME, UBY-LMF, LG-LMF, RELISH, GlobalAtlas (or Global Atlas), and Wordscape.

References

  1. ^ "ISO 24613:2008 - Language resource management - Lexical markup framework (LMF)". Iso.org. Retrieved 2016-01-24.
  2. ^ a b "The relevance of standards for research infrastructures" (PDF). Hal.inria.fr. Retrieved 2016-01-24.
  3. ^ "Lexical Markup Framework (LMF)" (PDF). Hal.inria.fr. Retrieved 2016-01-24.
  4. ^ "Lexical markup framework (LMF) for NLP multilingual resources" (PDF). Hal.inria.fr. Retrieved 2016-01-24.
  5. ^ "Vers la mise en place d'un lexique basé sur LMF pour la langue Wolof" (PDF). Aclweb.org. Retrieved 2016-01-24.
  6. ^ "Standardizing Wordnets in the ISO Standard LMF: Wordnet-LMF for GermaNet" (PDF). Aclweb.org. Retrieved 2016-01-24.
  7. ^ "Subcat-LMF: Fleshing out a standardized format for subcategorization frame interoperability" (PDF). Aclweb.org. Retrieved 2016-01-24.
  8. ^ "UBY – A Large-Scale Unified Lexical-Semantic Resource Based on LMF" (PDF). Aclweb.org. Retrieved 2016-01-24.
  9. ^ "Building a standardized Wordnet in the ISO LMF for aeb language" (PDF). Aclweb.org. Retrieved 2016-01-24.
  10. ^ "LREC 2008 Proceedings". Lrec-conf.org. Retrieved 2016-01-24.
  11. ^ "Modélisation des paradigmes de flexion des verbes arabes selon la norme LMF - ISO 24613" (PDF). Aclweb.org. Retrieved 2016-01-24.
  12. ^ Gil Francopoulo (edited by) LMF Lexical Markup Framework, ISTE / Wiley 2013 (ISBN 978-1-84821-430-9)

Bilingual dictionary

A bilingual dictionary or translation dictionary is a specialized dictionary used to translate words or phrases from one language to another. Bilingual dictionaries can be unidirectional, meaning that they list the meanings of words of one language in another, or can be bidirectional, allowing translation to and from both languages. Bidirectional bilingual dictionaries usually consist of two sections, each listing words and phrases of one language alphabetically along with their translation. In addition to the translation, a bilingual dictionary usually indicates the part of speech, gender, verb type, declension model and other grammatical clues to help a non-native speaker use the word. Other features sometimes present in bilingual dictionaries are lists of phrases, usage and style guides, verb tables, maps and grammar references. In contrast to the bilingual dictionary, a monolingual dictionary defines words and phrases instead of translating them.

Computational lexicology

Computational lexicology is a branch of computational linguistics concerned with the use of computers in the study of the lexicon. It has been more narrowly described by some scholars (Amsler, 1980) as the use of computers in the study of machine-readable dictionaries. It is distinguished from computational lexicography, which more properly would be the use of computers in the construction of dictionaries, though some researchers have used the term computational lexicography as a synonym.

Dictionary

A dictionary, sometimes known as a wordbook, is a collection of words in one or more specific languages, often arranged alphabetically (or by radical and stroke for ideographic languages), which may include information on definitions, usage, etymologies, pronunciations, translation, etc., or a book of words in one language with their equivalents in another, sometimes known as a lexicon. It is a lexicographical reference that shows inter-relationships among the data.

A broad distinction is made between general and specialized dictionaries. Specialized dictionaries include words in specialist fields, rather than a complete range of words in the language. Lexical items that describe concepts in specific fields are usually called terms instead of words, although there is no consensus on whether lexicology and terminology are two different fields of study. In theory, general dictionaries are supposed to be semasiological, mapping word to definition, while specialized dictionaries are supposed to be onomasiological, first identifying concepts and then establishing the terms used to designate them. In practice, the two approaches are used for both types. There are other types of dictionaries that do not fit neatly into the above distinction, for instance bilingual (translation) dictionaries, dictionaries of synonyms (thesauri), and rhyming dictionaries. The word dictionary (unqualified) is usually understood to refer to a general-purpose monolingual dictionary.

There is also a contrast between prescriptive and descriptive dictionaries; the former reflect what is seen as correct use of the language, while the latter reflect recorded actual use. Stylistic indications (e.g. "informal" or "vulgar") in many modern dictionaries are also considered by some to be less than objectively descriptive.

Although the first recorded dictionaries date back to Sumerian times (these were bilingual dictionaries), the systematic study of dictionaries as objects of scientific interest in themselves is a 20th-century enterprise, called lexicography, and was largely initiated by Ladislav Zgusta. The birth of the new discipline was not without controversy, the practical dictionary-makers being sometimes accused by others of an "astonishing" lack of method and critical self-reflection.

Dynamic and formal equivalence

Dynamic equivalence and formal equivalence, terms coined by Eugene Nida, are two dissimilar translation approaches, achieving differing levels of literalness between the source text and the target text, as employed in biblical translation.

The two have been understood broadly as follows: dynamic equivalence is sense-for-sense translation (translating the meanings of phrases or whole sentences) with readability in mind, while formal equivalence is word-for-word translation (translating the meanings of words and phrases more literally), keeping literal fidelity.

Helen Aristar-Dry

Helen Aristar-Dry is an American linguist who currently serves as the series editor for SpringerBriefs in Linguistics. Most notably, from 1991 to 2013 she co-directed The LINGUIST List with Anthony Aristar. She has served as Principal Investigator or co-Principal Investigator on over $5,000,000 worth of research grants from the National Science Foundation and the National Endowment for the Humanities. She retired as Professor of English Language and Literature from Eastern Michigan University in 2013.

ISO 12620

ISO 12620 is a standard from ISO/TC 37 that defines a Data Category Registry, a registry for registering linguistic terms used in various fields of translation, computational linguistics, and natural language processing, and for defining mappings both between different terms and between the same terms used in different systems.

The goal of the registry is that new systems can reuse existing terminology, or at least be easily mapped to existing terminology, to aid interoperability. To this end a number of terminologies have been added to the registry, including ones based on the General Ontology for Linguistic Description, the National Corpus of Polish and the TermBase eXchange from the Localization Industry Standards Association.

The standard was first released as ISO 12620:1999, which was rendered obsolete by ISO 12620:2009. The first edition was English-only; the second is bilingual English-French.

The standard is relatively low-level but is used by other standards such as Lexical Markup Framework (ISO 24613:2008).

ISO 639-3

ISO 639-3:2007, Codes for the representation of names of languages – Part 3: Alpha-3 code for comprehensive coverage of languages, is an international standard for language codes in the ISO 639 series. It defines three-letter codes for identifying languages. The standard was published by ISO on 1 February 2007. ISO 639-3 extends the ISO 639-2 alpha-3 codes with an aim to cover all known natural languages. The extended language coverage was based primarily on the language codes used in the Ethnologue (volumes 10-14) published by SIL International, which is now the registration authority for ISO 639-3. It provides an enumeration of languages as complete as possible, including living and extinct, ancient and constructed, major and minor, written and unwritten. However, it does not include reconstructed languages such as Proto-Indo-European. ISO 639-3 is intended for use as metadata codes in a wide range of applications. It is widely used in computer and information systems, such as the Internet, in which many languages need to be supported. In archives and other information storage, the codes are used in cataloging systems, indicating what language a resource is in or about. The codes are also frequently used in the linguistic literature and elsewhere to compensate for the fact that language names may be obscure or ambiguous.

LMF

LMF may refer to:

  • "Lack of Moral Fibre", RAF World War II designation for air crew unwilling to fly
  • Lamb-Mössbauer factor, in solid-state spectroscopy
  • Lazy Mutha Fucka, a Cantonese hip-hop band from Hong Kong
  • Leobersdorfer Maschinenfabrik, the machine factory that produced the first Diesel engines in Austria
  • Lexical Markup Framework, the ISO standard for lexicons
  • Licentiate in Medicine and Surgery, a medical degree
  • Linked Media Framework, predecessor of Apache Marmotta

Lemma (morphology)

In morphology and lexicography, a lemma (plural lemmas or lemmata) is the canonical form, dictionary form, or citation form of a set of words (headword). In English, for example, run, runs, ran and running are forms of the same lexeme, with run as the lemma. Lexeme, in this context, refers to the set of all the forms that have the same meaning, and lemma refers to the particular form that is chosen by convention to represent the lexeme. In lexicography, this unit is usually also the citation form or headword by which it is indexed. Lemmas have special significance in highly inflected languages such as Arabic, Turkish and Russian. The process of determining the lemma for a given word is called lemmatisation. The lemma can be viewed as the chief of the principal parts, although lemmatisation is at least partly arbitrary.
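As a toy illustration of lemmatisation (the lookup strategy and word list below are invented for this sketch and not drawn from any particular lemmatiser), a lemmatiser over a closed vocabulary can be a simple mapping from inflected forms to the conventional citation form:

```python
# Toy lemmatisation over a closed vocabulary: map each inflected
# form of a lexeme to its lemma (the citation form).
FORM_TO_LEMMA = {
    "run": "run", "runs": "run", "ran": "run", "running": "run",
    "clergyman": "clergyman", "clergymen": "clergyman",
}

def lemmatise(form: str) -> str:
    # Fall back to the surface form when the word is out of vocabulary.
    return FORM_TO_LEMMA.get(form.lower(), form)

print([lemmatise(w) for w in ["ran", "running", "clergymen"]])
# ['run', 'run', 'clergyman']
```

Real lemmatisers for highly inflected languages rely on morphological analysis rather than exhaustive tables, but the input/output contract is the same.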

Lexical resource

A lexical resource (LR) is a database consisting of one or several dictionaries.

Depending on the types of languages that are addressed, the LR may be qualified as monolingual, bilingual, or multilingual. For bilingual and multilingual LRs, the words may or may not be connected from one language to another. When connected, the equivalence from one language to another is performed through a bilingual link (for bilingual LRs) or through multilingual notations (for multilingual LRs).

It is also possible to build and manage a lexical resource consisting of different lexicons of the same language, for instance one dictionary for general words and one or several dictionaries for different specialized domains.
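A hypothetical sketch of such a bilingual link (the entry identifiers and words below are invented for illustration) pairs an entry in one monolingual lexicon with its equivalent in another:

```python
# Two monolingual lexicons, keyed by entry id.
english = {"E1": "clergyman"}
french = {"F1": "ecclésiastique"}

# Each bilingual link pairs an entry id in the source lexicon
# with an equivalent entry id in the target lexicon.
bilingual_links = [("E1", "F1")]

def translate(entry_id, links, target):
    """Follow a bilingual link from a source entry id to the target word."""
    for src, tgt in links:
        if src == entry_id:
            return target[tgt]
    return None  # no link recorded for this entry

print(translate("E1", bilingual_links, french))  # ecclésiastique
```

A multilingual notation generalizes this to links spanning more than two lexicons.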

Lexicology

Lexicology is the part of linguistics that studies words. This may include their nature and function as symbols, their meaning, the relationship of their meaning to epistemology in general, and the rules of their composition from smaller elements (morphemes such as the English -ed marker for past tense or un- for negation, and phonemes as basic sound units).

Lexicology also involves relations between words, which may involve semantics (for example, love vs. affection), derivation (for example, fathom vs. unfathomably), use and sociolinguistic distinctions (for example, flesh vs. meat), and any other issues involved in analyzing the whole lexicon of a language.

The term first appeared in the 1970s, though there were lexicologists in essence before the term was coined. Computational lexicology is a related field (in the same way that computational linguistics is related to linguistics) that deals with the computational study of dictionaries and their contents.

An allied science to lexicology is lexicography, which also studies words, but primarily in relation with dictionaries – it is concerned with the inclusion of words in dictionaries and from that perspective with the whole lexicon. Sometimes lexicography is considered to be a part or a branch of lexicology, but properly speaking, only lexicologists who actually write dictionaries are lexicographers. Some consider this a distinction of theory vs. practice.

Lexicon

A lexicon, word-hoard, wordbook, or word-stock is the vocabulary of a person, language, or branch of knowledge (such as nautical or medical). In linguistics, a lexicon is a language's inventory of lexemes. The word "lexicon" derives from the Greek λεξικόν (lexicon), neuter of λεξικός (lexikos) meaning "of or for words." Linguistic theories generally regard human languages as consisting of two parts: a lexicon, essentially a catalogue of a language's words (its wordstock); and a grammar, a system of rules which allow for the combination of those words into meaningful sentences. The lexicon is also thought to include bound morphemes, which cannot stand alone as words (such as most affixes). In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered to be part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included.

Machine-readable dictionary

Machine-readable dictionary (MRD) is a dictionary stored as machine (computer) data instead of being printed on paper. It is an electronic dictionary and lexical database.

A machine-readable dictionary is a dictionary in an electronic form that can be loaded in a database and queried via application software. It may be a single-language explanatory dictionary, a multi-language dictionary to support translations between two or more languages, or a combination of both. Translation software between multiple languages usually applies bidirectional dictionaries. An MRD may be a dictionary with a proprietary structure that is queried by dedicated software (for example online via the internet), or it can be a dictionary that has an open structure and is available for loading in computer databases and thus can be used via various software applications. Conventional dictionaries contain a lemma with various descriptions. A machine-readable dictionary may have additional capabilities and is therefore sometimes called a smart dictionary. An example of a smart dictionary is the Open Source Gellish English dictionary.

The term dictionary is also used to refer to an electronic vocabulary or lexicon as used for example in spelling checkers. If dictionaries are arranged in a subtype-supertype hierarchy of concepts (or terms) then it is called a taxonomy. If it also contains other relations between the concepts, then it is called an ontology. Search engines may use either a vocabulary, a taxonomy or an ontology to optimise the search results. Specialised electronic dictionaries are morphological dictionaries or syntactic dictionaries.

The term MRD is often contrasted with NLP dictionary, in the sense that an MRD is the electronic form of a dictionary that was previously printed on paper. Although both are used by programs, the term NLP dictionary is preferred when the dictionary was built from scratch with NLP in mind. An ISO standard able to represent both MRD and NLP structures is the Lexical Markup Framework.

Morphological pattern

A morphological pattern is a set of associations and/or operations that build the various forms of a lexeme, possibly by inflection, agglutination, compounding or derivation.

Multilingual notation

A multilingual notation is a representation in a lexical resource that allows the translation between two or more words.

UBY

UBY is a large-scale lexical-semantic resource for natural language processing (NLP) developed at the Ubiquitous Knowledge Processing Lab (UKP) in the department of Computer Science of the Technische Universität Darmstadt.

UBY is based on the ISO standard Lexical Markup Framework (LMF) and combines information from several expert-constructed and collaboratively constructed resources for English and German.

UBY applies a word sense alignment approach (subfield of word sense disambiguation) for combining information about nouns and verbs.

Currently, UBY contains 12 integrated resources in English and German.

UBY-LMF

UBY-LMF is a format for standardizing lexical resources for Natural Language Processing (NLP). UBY-LMF conforms to the ISO standard for lexicons, LMF, designed within ISO-TC37, and constitutes a so-called serialization of this abstract standard. In accordance with LMF, all attributes and other linguistic terms introduced in UBY-LMF refer to standardized descriptions of their meaning in ISOCat.

UBY-LMF has been implemented in Java and is actively developed as an Open Source project on Google Code.

Based on this Java implementation, the large-scale electronic lexicon UBY has been automatically created; it is the result of using UBY-LMF to standardize a range of diverse lexical resources frequently used for NLP applications.

As of 2013, UBY contains ten lexicons which are pairwise interlinked at the sense level:

  • English: WordNet, Wiktionary, Wikipedia, FrameNet, VerbNet, OmegaWiki
  • German: Wiktionary, Wikipedia, GermaNet, IMSLex-Subcat
  • multilingual: OmegaWiki

A subset of the lexicons integrated in UBY has been converted to a Semantic Web format according to the lemon lexicon model. This conversion is based on a mapping of UBY-LMF to the lemon lexicon model.

WordNet

WordNet is a lexical database for the English language. It groups English words into sets of synonyms called synsets, provides short definitions and usage examples, and records a number of relations among these synonym sets or their members. WordNet can thus be seen as a combination of dictionary and thesaurus. While it is accessible to human users via a web browser, its primary use is in automatic text analysis and artificial intelligence applications. The database and software tools have been released under a BSD style license and are freely available for download from the WordNet website. Both the lexicographic data (lexicographer files) and the compiler (called grind) for producing the distributed database are available.


This page is based on a Wikipedia article written by its authors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.