Generative grammar

Generative grammar is a linguistic theory that regards grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. Noam Chomsky first used the term in relation to the theoretical linguistics of grammar that he developed in the late 1950s.[1] Linguists who follow the generative approach have been called generativists. The generative school has focused on the study of syntax, but has also addressed other aspects of a language's structure, including morphology and phonology.

Early versions of Chomsky's theory were called transformational grammar, a term still used to include his subsequent theories,[2] the most recent of which is the minimalist program. Chomsky and other generativists have argued that many of the properties of a generative grammar arise from a universal grammar that is innate to the human brain, rather than being learned from the environment (see the poverty of the stimulus argument).

There are a number of versions of generative grammar currently practiced within linguistics.

A contrasting approach is that of constraint-based grammars. Where a generative grammar attempts to list all the rules that result in all well-formed sentences, constraint-based grammars allow anything that is not otherwise constrained. Constraint-based grammars that have been proposed include certain versions of dependency grammar, head-driven phrase structure grammar, lexical functional grammar, categorial grammar, relational grammar, link grammar, and tree-adjoining grammar. In stochastic grammar, grammatical correctness is taken as a probabilistic variable rather than a discrete (yes or no) property.
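
To make the stochastic idea concrete, the sketch below attaches probabilities to rewrite rules so that parses are ranked by likelihood rather than judged simply grammatical or ungrammatical. It uses the Python NLTK toolkit purely as an illustration; the grammar, the words, and the probabilities are invented for the example.

    import nltk

    # A probabilistic context-free grammar: the probabilities of all
    # alternatives for a given nonterminal must sum to 1.
    pcfg = nltk.PCFG.fromstring("""
    S -> NP VP [1.0]
    NP -> D N [1.0]
    VP -> V NP [0.7] | V [0.3]
    D -> 'the' [1.0]
    N -> 'dog' [0.5] | 'bone' [0.5]
    V -> 'ate' [1.0]
    """)

    # The Viterbi parser returns the highest-probability parse, so
    # well-formedness becomes a matter of degree.
    parser = nltk.ViterbiParser(pcfg)
    for tree in parser.parse("the dog ate the bone".split()):
        print(tree.prob())  # 0.175 = 0.7 * 0.5 * 0.5 (the non-unit rule choices)
        print(tree)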

Frameworks

There are a number of different approaches to generative grammar. Common to all is the effort to come up with a set of rules or principles that formally defines each and every one of the members of the set of well-formed expressions of a natural language. The term generative grammar has been associated with a number of schools of linguistics, whose historical development is traced below.

Historical development of models of transformational grammar

Although Leonard Bloomfield, whose work Chomsky rejects, saw the ancient Indian grammarian Pāṇini as an antecedent of structuralism,[3][4] Chomsky, in an award acceptance speech delivered in India in 2001, claimed "The first generative grammar in the modern sense was Panini's grammar".

Generative grammar has been under development since the late 1950s, and has undergone many changes in the types of rules and representations that are used to predict grammaticality. In tracing the historical development of ideas within generative grammar, it is useful to refer to various stages in the development of the theory.

Standard theory (1957–1965)

The so-called standard theory corresponds to the original model of generative grammar laid out by Chomsky in 1965.

A core aspect of standard theory is the distinction between two different representations of a sentence, called deep structure and surface structure. The two representations are linked to each other by transformational grammar.

Extended standard theory (1965–1973)

The so-called extended standard theory was formulated in the late 1960s and early 1970s. Its features include:

  • syntactic constraints
  • generalized phrase structures (X-bar theory)

Revised extended standard theory (1973–1976)

The so-called revised extended standard theory was formulated between 1973 and 1976. It imposes further restrictions on the earlier model, constraining X-bar theory and generalizing the transformational component toward the single movement rule move α.

Relational grammar (ca. 1975–1990)

An alternative model of syntax based on the idea that notions like subject, direct object, and indirect object play a primary role in grammar.

Government and binding/Principles and parameters theory (1981–1990)

This framework grew out of Chomsky's Lectures on Government and Binding (1981) and Barriers (1986).

Minimalist program (1990–present)

The minimalist program seeks to derive the properties of the earlier models from the simplest possible computational principles, such as economy of derivation and representation.

Context-free grammars

Generative grammars can be described and compared with the aid of the Chomsky hierarchy (proposed by Chomsky in the 1950s). This sets out a series of types of formal grammars with increasing expressive power. Among the simplest types are the regular grammars (type 3); Chomsky argues that these are not adequate as models for human language, because all natural human languages allow the center-embedding of strings within strings, which regular grammars cannot capture.
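
The inadequacy claim can be made concrete under the usual abstraction that center-embedded sentences pattern like the formal language aⁿbⁿ (equal numbers of a's followed by b's). The Python sketch below is illustrative only; the abstraction, not the code, is the standard argument.

    import re

    def matches_anbn(s: str) -> bool:
        """Recognize a^n b^n (n >= 1), the abstract shape of center-embedding
        such as 'the dog [the cat [the rat bit] chased] barked'. Checking it
        requires unbounded counting, which no fixed finite-state device can do."""
        n = len(s) // 2
        return n > 0 and s == "a" * n + "b" * n

    print(matches_anbn("aaabbb"))  # True: the counts match
    print(matches_anbn("aab"))     # False: the counts differ

    # The closest regular-language approximation overgenerates:
    print(bool(re.fullmatch(r"a+b+", "aab")))  # True, wrongly accepted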

At a higher level of complexity are the context-free grammars (type 2). The derivation of a sentence by such a grammar can be depicted as a derivation tree. Linguists working within generative grammar often view such trees as a primary object of study. According to this view, a sentence is not merely a string of words. Instead, adjacent words are combined into constituents, which can then be further combined with other words or constituents to create a hierarchical tree-structure.

The derivation of a simple tree-structure for the sentence "the dog ate the bone" proceeds as follows. The determiner the and noun dog combine to create the noun phrase the dog. A second noun phrase the bone is created with determiner the and noun bone. The verb ate combines with the second noun phrase, the bone, to create the verb phrase ate the bone. Finally, the first noun phrase, the dog, combines with the verb phrase, ate the bone, to complete the sentence: the dog ate the bone. The following tree diagram illustrates this derivation and the resulting structure:

Basic English syntax tree

Such a tree diagram is also called a phrase marker. Phrase markers can be represented more compactly in text form, though the result is less easy to read; in this format the above sentence would be rendered as:
[S [NP [D The ] [N dog ] ] [VP [V ate ] [NP [D the ] [N bone ] ] ] ]
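
As a sketch of how such a derivation can be computed, the following uses the Python NLTK toolkit (an illustrative choice; the article ties the grammar to no particular software) with exactly the rules described above:

    import nltk

    # The rules used in the derivation of "the dog ate the bone".
    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> D N
    VP -> V NP
    D -> 'the'
    N -> 'dog' | 'bone'
    V -> 'ate'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the dog ate the bone".split()):
        print(tree)
    # (S (NP (D the) (N dog)) (VP (V ate) (NP (D the) (N bone))))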

Chomsky has argued that phrase structure grammars are also inadequate for describing natural languages, and formulated the more complex system of transformational grammar.[5]

Music

Generative grammar has been used to a limited extent in music theory and analysis since the 1980s.[6][7] The best-known approaches were developed by Mark Steedman[8] as well as Fred Lerdahl and Ray Jackendoff,[9] who formalized and extended ideas from Schenkerian analysis.[10] More recently, such early generative approaches to music were further developed and extended by various scholars.[11][12][13][14] The theory of generative grammar has also been taken up by the Sun Ra Revival Post-Krautrock Archestra in the development of their post-structuralist lyrics, particularly emphasised in their song "Sun Ra Meets Terry Lee". The French composer Philippe Manoury applied the principles of generative grammar to the field of contemporary classical music.

References

  1. ^ "Tool Module: Chomsky's Universal Grammar". thebrain.mcgill.ca. Retrieved 2017-08-28.
  2. ^ "Mod 4 Lesson 4.2.3 Generative-Transformational Grammar Theory". www2.leeward.hawaii.edu. Retrieved 2017-02-02.
  3. ^ Bloomfield, Leonard, 1929, 274; cited in Rogers, David, 1987, 88
  4. ^ Hockett, Charles, 1987, 41
  5. ^ Chomsky, Noam (1956). "Three models for the description of language" (PDF). IRE Transactions on Information Theory. 2 (3): 113–124. doi:10.1109/TIT.1956.1056813. Archived from the original (PDF) on 2010-09-19.
  6. ^ Baroni, M., Maguire, S., and Drabkin, W. (1983). The Concept of Musical Grammar. Music Analysis, 2:175–208.
  7. ^ Baroni, M. and Callegari, L. (1982) Eds., Musical grammars and computer analysis. Leo S. Olschki Editore: Firenze, 201–218.
  8. ^ Steedman, M.J. (1984). "A Generative Grammar for Jazz Chord Sequences". Music Perception. 2 (1): 52–77. doi:10.2307/40285282. JSTOR 40285282.
  9. ^ Lerdahl, Fred; Ray Jackendoff (1996). A Generative Theory of Tonal Music. Cambridge: MIT Press. ISBN 978-0-262-62107-6.
  10. ^ Heinrich Schenker, Free Composition (Der freie Satz), translated and edited by Ernst Oster. New York: Longman, 1979.
  11. ^ Tojo, O. Y. & Nishida, M. (2006). Analysis of chord progression by HPSG. In Proceedings of the 24th IASTED international conference on Artificial intelligence and applications, 305–310.
  12. ^ Rohrmeier, Martin (2007). A generative grammar approach to diatonic harmonic structure. In Spyridis, Georgaki, Kouroupetroglou, Anagnostopoulou (Eds.), Proceedings of the 4th Sound and Music Computing Conference, 97–100. http://smc07.uoa.gr/SMC07%20Proceedings/SMC07%20Paper%2015.pdf
  13. ^ Giblin, Iain (2008). Music and the generative enterprise. Doctoral dissertation. University of New South Wales.
  14. ^ Katz, Jonah; David Pesetsky (2009) "The Identity Thesis for Language and Music". http://ling.auf.net/lingBuzz/000959

Further reading

  • Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, Massachusetts: MIT Press.
  • Hurford, J. (1990) Nativist and functional explanations in language acquisition. In I. M. Roca (ed.), Logical Issues in Language Acquisition, 85–136. Foris, Dordrecht.
  • Isac, Daniela; Charles Reiss (2013). I-language: An Introduction to Linguistics as Cognitive Science, 2nd edition. Oxford University Press. ISBN 978-0-19-953420-3.
  1. ^ "Mod 4 Lesson 4.2.3 Generative-Transformational Grammar Theory". www2.leeward.hawaii.edu. Retrieved 2017-02-02.
  2. ^ Kamalani Hurley, Pat. "Mod 4 Lesson 4.2.3 Generative-Transformational Grammar Theory". www2.leeward.hawaii.edu. Retrieved 2017-02-02.
Anaphora (linguistics)

In linguistics, anaphora is the use of an expression whose interpretation depends upon another expression in context (its antecedent or postcedent). In a narrower sense, anaphora is the use of an expression that depends specifically upon an antecedent expression and thus is contrasted with cataphora, which is the use of an expression that depends upon a postcedent expression. The anaphoric (referring) term is called an anaphor. For example, in the sentence Sally arrived, but nobody saw her, the pronoun her is an anaphor, referring back to the antecedent Sally. In the sentence Before her arrival, nobody saw Sally, the pronoun her refers forward to the postcedent Sally, so her is now a cataphor (and an anaphor in the broader, but not the narrower, sense). Usually, an anaphoric expression is a proform or some other kind of deictic (contextually-dependent) expression. Both anaphora and cataphora are species of endophora, referring to something mentioned elsewhere in a dialog or text.

Anaphora is an important concept for different reasons and on different levels: first, anaphora indicates how discourse is constructed and maintained; second, anaphora binds different syntactical elements together at the level of the sentence; third, anaphora presents a challenge to natural language processing in computational linguistics, since the identification of the reference can be difficult; and fourth, anaphora reveals something about how language is understood and processed, which is relevant to branches of linguistics interested in cognitive psychology.
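
The processing difficulty mentioned above can be illustrated with a deliberately naive resolver that links a pronoun to the most recent preceding name of matching gender. The word lists and the recency heuristic are assumptions of this sketch, not an established algorithm:

    # Toy gender lexicons, invented for the example.
    PRONOUN_GENDER = {"her": "f", "she": "f", "him": "m", "he": "m"}
    NAME_GENDER = {"Sally": "f", "Tom": "m"}

    def resolve(tokens):
        """Map each pronoun's index to the index of its guessed antecedent."""
        links = {}
        for i, token in enumerate(tokens):
            gender = PRONOUN_GENDER.get(token.lower())
            if gender is None:
                continue
            for j in range(i - 1, -1, -1):  # scan leftward: most recent name first
                if NAME_GENDER.get(tokens[j]) == gender:
                    links[i] = j
                    break
        return links

    tokens = "Sally arrived but nobody saw her".split()
    print(resolve(tokens))  # {5: 0}: "her" is linked back to "Sally"

Note that this simple heuristic fails on cataphora such as "Before her arrival, nobody saw Sally", which is part of why the problem is hard.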

Aspects of the Theory of Syntax

Aspects of the Theory of Syntax (known in linguistic circles simply as Aspects) is a book on linguistics written by American linguist Noam Chomsky, first published in 1965. In Aspects, Chomsky presented a deeper, more extensive reformulation of transformational generative grammar (TGG), a new kind of syntactic theory that he had introduced in the 1950s with the publication of his first book, Syntactic Structures. Aspects is widely considered to be the foundational document and a proper book-length articulation of the Chomskyan theoretical framework of linguistics. It presented Chomsky's epistemological assumptions with a view to establishing linguistic theory-making as a formal (i.e. based on the manipulation of symbols and rules) discipline comparable to the physical sciences, i.e. a domain of inquiry well-defined in its nature and scope. From a philosophical perspective, it directed mainstream linguistic research away from behaviorism, constructivism, empiricism and structuralism and towards mentalism, nativism, rationalism and generativism, respectively, taking as its main object of study the abstract, inner workings of the human mind related to language acquisition and production.

Chord rewrite rules

In music, a rewrite rule is a recursive rule of a generative grammar that derives one chord progression from another.

Steedman (1984) has proposed a set of recursive "rewrite rules" which generate all well-formed jazz transformations of the basic I–IV–I–V–I twelve-bar blues chord sequence and, slightly modified, of non-twelve-bar-blues I–IV–V sequences ("rhythm changes").

The typical 12-bar blues progression can be notated

  1    2    3    4     5    6    7    8     9    10   11   12
  I  / I  / I  / I  // IV / IV / I  / I  // V  / IV / I  / I

where the top line numbers each bar, one slash indicates a bar line, two slashes indicate both a bar line and a phrase ending, and a Roman numeral indicates the chord function.

Important transformations include:

  • replacement or substitution of a chord by its dominant or subdominant (applied mechanically in the sketch below), e.g.

    1    2    3    4     5    6      7      8      9     10   11   12
    I  / IV / I  / I7 // IV / VII7 / III7 / VI7 // II7 / V7 / I  / I //

  • use of chromatic passing chords, e.g. in bars 7–9:

    ... III7 / ♭III7 / II7 ...

  • chord alterations such as minor chords, diminished sevenths, etc.

Sequences by fourth, rather than fifth, include Jimi Hendrix's version of "Hey Joe" and Deep Purple's "Hush":

  1           2          3   4    5           6          7   8    9           10         11  12
  ♭VI, ♭III / ♭VII, IV / I / I // ♭VI, ♭III / ♭VII, IV / I / I // ♭VI, ♭III / ♭VII, IV / I / I //

These often result in Aeolian harmony and lack perfect cadences (V–I). Middleton (1990) suggests that both modal and fourth-oriented structures, rather than being "distortions or surface transformations of Schenker's favoured V–I kernel", are more likely branches of a deeper principle, that of tonic/not-tonic differentiation.
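
As an illustration of how such rules can be applied mechanically, the Python sketch below implements one toy rewrite, substituting for a chord the dominant seventh of the chord that follows it, and applies it to the blues skeleton. It is a simplified stand-in for, not a reproduction of, Steedman's rule set:

    # Dominant seventh (a fifth above) of selected scale degrees; a
    # deliberately small, invented table for this illustration.
    DOM = {"I": "V7", "IV": "I7", "V": "II7",
           "V7": "II7", "II7": "VI7", "VI7": "III7", "III7": "VII7"}

    def dominant_substitutions(bars):
        """Yield every progression obtained by rewriting one bar as the
        dominant seventh of the chord in the following bar."""
        for i in range(len(bars) - 1):
            dom = DOM.get(bars[i + 1])
            if dom and bars[i] != dom:
                yield bars[:i] + [dom] + bars[i + 1:]

    blues = ["I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "I"]
    for variant in dominant_substitutions(blues):
        print(" / ".join(variant))

Rewriting bar 4 this way already yields the I7 of the substituted progression above, and repeated application, working leftward from the final I, derives its VII7–III7–VI7–II7–V7 chain in bars 6–10.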

For the ♭ notation, see Borrowed chord.

Complementizer

In linguistics (especially generative grammar), complementizer or complementiser (glossing abbreviation: comp) is a lexical category (part of speech) that includes those words that can be used to turn a clause into the subject or object of a sentence. For example, the word that may be called a complementizer in English sentences like Mary believes that it is raining. The concept of complementizers is specific to certain modern grammatical theories; in traditional grammar, such words are normally considered conjunctions.

The standard abbreviation for complementizer is C. The complementizer is often held to be the syntactic head of a full clause, which is therefore often represented by the abbreviation CP (for complementizer phrase). Evidence that the complementizer functions as the head of its clause includes the fact that it is commonly the last element in a clause in head-final languages like Korean or Japanese, in which other heads also follow their complements, whereas it appears at the start of a clause in head-initial languages such as English, where heads normally precede their complements.
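
A minimal sketch of this analysis in rule form, using the Python NLTK toolkit illustratively, treats the embedded clause as a CP whose head C ("that") precedes its complement, as expected in a head-initial language; the tiny lexicon is invented:

    import nltk

    # Head-initial English: C ("that") comes first inside CP.
    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    VP -> V CP | V
    CP -> C S
    C -> 'that'
    NP -> 'Mary' | 'it'
    V -> 'believes' | 'rains'
    """)

    for tree in nltk.ChartParser(grammar).parse("Mary believes that it rains".split()):
        print(tree)
    # (S (NP Mary) (VP (V believes) (CP (C that) (S (NP it) (VP (V rains))))))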

Constraint-based grammar

Constraint-based grammars can perhaps be best understood in contrast to generative grammars. Whereas a generative grammar lists all the transformations, merges, movements, and deletions that can result in all well-formed sentences, constraint-based grammars take the opposite approach: allowing anything that is not otherwise constrained.

"The grammar is nothing but a set of constraints that structures are required to satisfy in order to be considered well-formed." "A constraint-based grammar is more like a database or a knowledge representation system than it is like a collection of algorithms."Examples of such grammars include

the non-procedural variant of Transformational grammar (TG) of George Lakoff, that formulates constraints on potential tree sequences

Johnson and Postal’s formalization of Relational grammar (RG) (1980), Generalized phrase structure grammar (GPSG) in the variants developed by Gazdar et al. (1988), Blackburn et al. (1993) and Rogers (1997)

Lexical functional grammar (LFG) in the formalization of Ronald Kaplan (1995)

Head-driven phrase structure grammar (HPSG) in the formalization of King (1999)

Constraint Handling Rules (CHR) grammars
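
The contrast can be sketched in a few lines of Python: rather than deriving word orders by rule, the toy grammar below generates candidates freely and keeps whatever no constraint rules out. The three-word lexicon and the two constraints are invented for the illustration:

    from itertools import permutations

    # Toy lexicon: word -> category.
    LEXICON = {"the": "D", "dog": "N", "barked": "V"}

    # Each constraint is an independent yes/no condition on a category sequence.
    CONSTRAINTS = [
        lambda cats: cats.index("D") + 1 == cats.index("N"),  # D immediately precedes N
        lambda cats: cats.index("N") < cats.index("V"),       # subject precedes verb
    ]

    def well_formed(words):
        cats = [LEXICON[w] for w in words]
        return all(check(cats) for check in CONSTRAINTS)

    # Anything not otherwise constrained is allowed.
    for candidate in permutations(["the", "dog", "barked"]):
        if well_formed(candidate):
            print(" ".join(candidate))  # only "the dog barked" survives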

Deep structure and surface structure

Deep structure and surface structure (also D-structure and S-structure, although these abbreviated forms are sometimes used with distinct meanings) are concepts used in linguistics, specifically in the study of syntax in the Chomskyan tradition of transformational generative grammar.

The deep structure of a linguistic expression is a theoretical construct that seeks to unify several related structures. For example, the sentences "Pat loves Chris" and "Chris is loved by Pat" mean roughly the same thing and use similar words. Some linguists, Chomsky in particular, have tried to account for this similarity by positing that these two sentences are distinct surface forms that derive from a common (or very similar) deep structure.

Frederick Newmeyer

Frederick J. (Fritz) Newmeyer (born January 30, 1944) is Professor Emeritus of Linguistics at the University of Washington and adjunct professor in the University of British Columbia Department of Linguistics and the Simon Fraser University Department of Linguistics. He has published widely in theoretical and English syntax and is best known for his work on the history of generative syntax and for his arguments that linguistic formalism (i.e. generative grammar) and linguistic functionalism are not incompatible, but rather complementary. In the early 1990s he was one of the linguists who helped to renew interest in the evolutionary origin of language. More recently, Newmeyer argued that facts about linguistic typology are better explained by parsing constraints than by the principles and parameters model of grammar. Nevertheless, he has continued to defend the basic principles of generative grammar, arguing that Ferdinand de Saussure's langue/parole distinction as well as Noam Chomsky's distinction between linguistic competence and linguistic performance are essentially correct.

Generative Linguistics in the Old World

Generative Linguistics in the Old World (known by its acronym GLOW) is an international organization, founded in 1977 and based in the Netherlands. Its goal is to further the study of Generative Grammar by organizing an annual Spring linguistics conference and periodical summer schools, and by publishing a newsletter that discusses current intellectual (and organizational) issues in the study of Generative Grammar.

It was founded in an attempt to provide an annual meeting for European researchers in Generative Grammar who felt themselves largely excluded from other organizations in the late 1970s. Its founding document, the so-called GLOW Manifesto authored by Jan Koster, Henk van Riemsdijk and Jean-Roger Vergnaud, declared that "generative linguistics acquired a new momentum in Europe after Chomsky's [1973 paper] 'Conditions on transformations'" and sought to reflect that momentum with a new organization.

By the beginning of the 21st century, GLOW had emerged as one of the leading organizations in linguistics internationally, as well as in Europe. "Sister conferences" to the annual GLOW meeting in Europe have been organized under the rubric "GLOW Asia" in Japan, Korea and India, and the European GLOW conference itself has travelled as far south as Morocco (and as far north as Tromsø). The 2015 Colloquium was held in Paris, the 2016 meeting took place in Göttingen, in 2017 Leiden was the host, and in 2018 it is Budapest's turn.

Linguistic competence

Linguistic competence is the system of linguistic knowledge possessed by native speakers of a language. It is distinguished from linguistic performance, which is the way a language system is used in communication. Noam Chomsky introduced this concept in his elaboration of generative grammar, where it has been widely adopted, and where competence is the only level of language that is studied.

According to Chomsky, competence is the ideal language system that enables speakers to produce and understand an infinite number of sentences in their language, and to distinguish grammatical sentences from ungrammatical sentences. This is unaffected by "grammatically irrelevant conditions" such as speech errors. In Chomsky's view, competence can be studied independently of language use, which falls under "performance", for example through introspection and grammaticality judgments by native speakers.

Many other linguists – functionalists, cognitive linguists, psycholinguists, sociolinguists and others – have rejected this distinction, critiquing it as a concept that considers empirical work irrelevant and leaves out many important aspects of language use. It has also been argued that the distinction is often used to exclude real data that is, in the words of William Labov, "inconvenient to handle" within generativist theory.

Markedness

In linguistics and social sciences, markedness is the state of standing out as unusual or divergent in comparison to a more common or regular form. In a marked–unmarked relation, one term of an opposition is the broader, dominant one. The dominant default or minimum-effort form is known as unmarked; the other, secondary one is marked. In other words, markedness involves the characterization of a "normal" linguistic unit against one or more of its possible "irregular" forms.

In linguistics, markedness can apply to, among others, phonological, grammatical, and semantic oppositions, defining them in terms of marked and unmarked oppositions, such as honest (unmarked) vs. dishonest (marked). Marking may be purely semantic, or may be realized as extra morphology. The term derives from the marking of a grammatical role with a suffix or another element, and has been extended to situations where there is no morphological distinction.

In social sciences more broadly, markedness is, among other things, used to distinguish two meanings of the same term, where one is common usage (unmarked sense) and the other is specialized to a certain cultural context (marked sense).

In statistics and psychology, the social science concept of markedness is quantified as a measure of how much one variable is marked as a predictor or possible cause of another, and is also known as Δp (deltaP) in simple two-choice cases. See confusion matrix for more details.
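
In the simple two-choice case the quantity reduces to a difference of two conditional proportions, computable directly from the four cells of the contingency table; the counts in the Python example below are invented:

    def delta_p(tp: int, fp: int, fn: int, tn: int) -> float:
        """Markedness / deltaP: P(outcome | predictor) - P(outcome | no predictor).

        tp, fp: outcome present/absent when the predictor is present
        fn, tn: outcome present/absent when the predictor is absent
        """
        return tp / (tp + fp) - fn / (fn + tn)

    # Predictor present in 60 cases (outcome in 45 of them); absent in 40
    # cases (outcome in 10 of them): deltaP = 0.75 - 0.25 = 0.5.
    print(delta_p(tp=45, fp=15, fn=10, tn=30))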

Nicolas Ruwet

Nicolas Ruwet (December 31, 1932 – November 15, 2001) was a linguist, literary critic and musical analyst. He was involved with the development of generative grammar.

Operator (linguistics)

In generative grammar, the technical term operator denotes a type of expression that enters into an A-bar movement dependency. One often says that the operator "binds a variable".

Operators are often determiners, such as interrogatives ('which', 'who', 'when', etc.), or quantifiers ('every', 'some', 'most', 'no'), but adverbs such as sentential negation ('not') have also been treated as operators. It is also common within generative grammar to hypothesise phonetically empty operators whenever a clause type or construction exhibits symptoms of the presence of an A-bar movement dependency, such as sensitivity to extraction islands.

Parse tree

A parse tree or parsing tree or derivation tree or concrete syntax tree is an ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar. The term parse tree itself is used primarily in computational linguistics; in theoretical syntax, the term syntax tree is more common.

Parse trees concretely reflect the syntax of the input language, making them distinct from the abstract syntax trees used in computer programming. Unlike Reed-Kellogg sentence diagrams used for teaching grammar, parse trees do not use distinct symbol shapes for different types of constituents.

Parse trees are usually constructed based on either the constituency relation of constituency grammars (phrase structure grammars) or the dependency relation of dependency grammars. Parse trees may be generated for sentences in natural languages (see natural language processing), as well as during processing of computer languages, such as programming languages.

A related concept is that of phrase marker or P-marker, as used in transformational generative grammar. A phrase marker is a linguistic expression marked as to its phrase structure. This may be presented in the form of a tree, or as a bracketed expression. Phrase markers are generated by applying phrase structure rules, and themselves are subject to further transformational rules. A set of possible parse trees for a syntactically ambiguous sentence is called a "parse forest."
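
The interchangeability of the tree and bracketed presentations of a phrase marker can be seen with NLTK's tree reader (an illustrative choice of tool):

    import nltk

    # Read the bracketed phrase marker from the generative-grammar article above.
    pm = nltk.Tree.fromstring(
        "(S (NP (D the) (N dog)) (VP (V ate) (NP (D the) (N bone))))"
    )
    pm.pretty_print()   # renders the same structure with text-drawn branches
    print(pm.leaves())  # ['the', 'dog', 'ate', 'the', 'bone']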

Ray Jackendoff

Ray Jackendoff (born January 23, 1945) is an American linguist. He is professor of philosophy, Seth Merrin Chair in the Humanities and, with Daniel Dennett, co-director of the Center for Cognitive Studies at Tufts University. He has always straddled the boundary between generative linguistics and cognitive linguistics, committed to both the existence of an innate universal grammar (an important thesis of generative linguistics) and to giving an account of language that is consistent with the current understanding of the human mind and cognition (the main purpose of cognitive linguistics).

Jackendoff's research deals with the semantics of natural language, its bearing on the formal structure of cognition, and its lexical and syntactic expression. He has conducted extensive research on the relationship between conscious awareness and the computational theory of mind, on syntactic theory, and, with Fred Lerdahl, on musical cognition, culminating in their generative theory of tonal music. His theory of conceptual semantics developed into a comprehensive theory on the foundations of language, which indeed is the title of a recent monograph (2002): Foundations of Language. Brain, Meaning, Grammar, Evolution. In his 1983 Semantics and Cognition, he was one of the first linguists to integrate the visual faculty into his account of meaning and human language.

Jackendoff studied under linguists Noam Chomsky and Morris Halle at the Massachusetts Institute of Technology, where he received his PhD in linguistics in 1969. Before moving to Tufts in 2005, Jackendoff was professor of linguistics and chair of the linguistics program at Brandeis University from 1971 to 2005. During the 2009 spring semester, he was an external professor at the Santa Fe Institute. Jackendoff was awarded the Jean Nicod Prize in 2003. He received the 2014 David E. Rumelhart Prize, the premier award in the field of cognitive science. He has also been granted honorary degrees by the Université du Québec à Montréal (2010), the National Music University of Bucharest (2011), the Music Academy of Cluj-Napoca (2011), the Ohio State University (2012), and Tel Aviv University (2013).

Recursive categorical syntax

Recursive categorical syntax, also sometimes called algebraic syntax, is an algebraic theory of syntax developed by Michael Brame as an alternative to transformational-generative grammar.

Syntax

In linguistics, syntax is the set of rules, principles, and processes that govern the structure of sentences (sentence structure) in a given language, usually including word order. The term syntax is also used to refer to the study of such principles and processes. The goal of many syntacticians is to discover the syntactic rules common to all languages.

In mathematics, syntax refers to the rules governing the notation of mathematical systems, such as formal languages used in logic.

Transformational grammar

In linguistics, transformational grammar (TG) or transformational-generative grammar (TGG) is part of the theory of generative grammar, especially of natural languages. It considers grammar to be a system of rules that generate exactly those combinations of words that form grammatical sentences in a given language and involves the use of defined operations (called transformations) to produce new sentences from existing ones.
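
A deliberately tiny sketch of a "transformation" as a defined operation mapping one sentence form to another; this string-level passive rule is far cruder than any real transformational analysis and is invented for illustration:

    def passivize(subject: str, participle: str, obj: str) -> str:
        """Map an active 'SUBJECT VERB OBJECT' pattern to its passive form."""
        return f"{obj} was {participle} by {subject}"

    # "the dog ate the bone" -> "the bone was eaten by the dog"
    print(passivize("the dog", "eaten", "the bone"))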

Underlying representation

In some models of phonology as well as morphophonology in the field of linguistics, the underlying representation (UR) or underlying form (UF) of a word or morpheme is the abstract form that a word or morpheme is postulated to have before any phonological rules have applied to it. By contrast, a surface representation is the phonetic representation of the word or sound. The concept of an underlying representation is central to generative grammar.

If several phonological rules apply to the same underlying form, they can apply wholly independently of each other or in a feeding or counterbleeding order. The underlying representation of a morpheme is considered to be invariable across related forms (except in cases of suppletion), despite alternations among various allophones on the surface.
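
A small sketch of ordered rules applying to an underlying form, with a deletion rule feeding a devoicing rule; the forms and rules are simplified, textbook-style illustrations rather than an analysis of any particular language:

    # Rule 1: delete a word-final schwa (written here as 'e').
    def schwa_deletion(form: str) -> str:
        return form[:-1] if form.endswith("e") else form

    # Rule 2: devoice a word-final voiced obstruent.
    DEVOICE = str.maketrans("bdgz", "ptks")
    def final_devoicing(form: str) -> str:
        return form[:-1] + form[-1].translate(DEVOICE)

    def surface(underlying: str) -> str:
        # The order matters: deletion feeds devoicing by exposing the final /d/.
        return final_devoicing(schwa_deletion(underlying))

    print(surface("bunde"))  # 'bunt': /d/ becomes final, then devoices
    print(surface("bund"))   # 'bunt': devoicing applies directly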


This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.