Phonology

Phonology is a branch of linguistics concerned with the systematic organization of sounds in spoken languages and signs in sign languages. It was once the study only of the systems of phonemes in spoken languages (and was therefore also called phonemics, or phonematics), but it may now also cover any linguistic analysis either at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound or signs are structured to convey linguistic meaning.[1]

Sign languages have a phonological system equivalent to the system of sounds in spoken languages. The building blocks of signs are specifications for movement, location and handshape.[1]

Terminology

The word 'phonology' (as in the phonology of English) can also refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary.

Phonology is often distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech,[2][3] phonology describes the way sounds function within a given language or across languages to encode meaning. For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. This distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology cross over with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.

Derivation and definitions

The word phonology comes from Ancient Greek φωνή, phōnḗ, "voice, sound," and the suffix -logy (which is from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Saussure's distinction between langue and parole).[4] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items."[2] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use.[5]

History

Early evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics.

The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, who (together with his students Mikołaj Kruszewski and Lev Shcherba) shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the meeting of the Société de Linguistique de Paris on 24 May of that year,[6] Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut.[7] Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology), and may have had an influence on the work of Saussure, according to E. F. K. Koerner.[8]

Nikolai Trubetzkoy, 1920s

An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[4] published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, who was one of the most prominent linguists of the 20th century.

In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
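The core mechanism of SPE-style generative phonology, ordered rewrite rules mapping an underlying representation to a surface form, can be sketched in a few lines. This is a deliberately simplified illustration using string rewriting rather than SPE's feature matrices, and the two rules below (initial aspiration and intervocalic flapping) are chosen for familiarity, not taken from SPE itself.

```python
import re

# Ordered rewrite rules in the spirit of SPE: each maps part of an
# underlying representation toward the surface form. Rule order matters.
RULES = [
    # Aspiration: a voiceless stop is aspirated word-initially before a vowel.
    (r"^([ptk])(?=[aeiou])", r"\1ʰ"),
    # Flapping (American English): /t/ between vowels becomes a flap [ɾ].
    (r"(?<=[aeiou])t(?=[aeiou])", "ɾ"),
]

def derive(underlying: str) -> str:
    """Apply each rule once, in order, to an underlying form."""
    form = underlying
    for pattern, replacement in RULES:
        form = re.sub(pattern, replacement, form)
    return form

print(derive("pat"))   # aspiration applies word-initially -> pʰat
print(derive("atom"))  # flapping applies between vowels -> aɾom
```

Because the rules are ordered, the output of an earlier rule is the input to a later one, which is exactly the property that makes rule interactions such as feeding and bleeding possible.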

Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second most prominent natural phonologist is Patricia Donegan (Stampe's wife); there are many natural phonologists in Europe, and a few in the U.S., such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.

In 1976, John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for theories of the organization of phonology as different as lexical phonology and optimality theory.

Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.

In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory—an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance; a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various approaches has been criticized by proponents of 'substance-free phonology', especially by Mark Hale and Charles Reiss.[9][10]
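The evaluation procedure of optimality theory can be made concrete with a toy tableau: each constraint counts violations, constraints are ranked, and the winner is the candidate whose violation profile is lexicographically best down the ranking. The constraints below (*CODA and MAX-IO) are standard textbook examples, but the data and the minimal candidate set are invented for illustration.

```python
def no_coda(candidate: str, underlying: str) -> int:
    """*CODA: one violation per syllable-final consonant ('.'-separated syllables)."""
    return sum(1 for syll in candidate.split(".") if syll and syll[-1] not in "aeiou")

def max_io(candidate: str, underlying: str) -> int:
    """MAX-IO: one violation per underlying segment deleted in the candidate."""
    return max(0, len(underlying) - len(candidate.replace(".", "")))

def evaluate(underlying, candidates, ranking):
    """Return the candidate whose violation profile is lexicographically best."""
    def profile(cand):
        return tuple(constraint(cand, underlying) for constraint in ranking)
    return min(candidates, key=profile)

# Underlying /pat/; candidates: faithful [pat] vs. coda-less [pa].
ranking = [no_coda, max_io]  # *CODA >> MAX-IO
print(evaluate("pat", ["pat", "pa"], ranking))  # 'pa': deletion beats a coda
```

Reversing the ranking (MAX-IO >> *CODA) selects the faithful candidate "pat" instead, which is how the theory models cross-linguistic variation with a single constraint set.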

An integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns has been developed in recent years under the name Evolutionary Phonology.[11]

Analysis of phonemes

An important part of traditional, pre-generative schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is, of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words can have different meanings but with the only difference in pronunciation being that one has an aspirated sound where the other has an unaspirated one).

The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonemic point of view. Note the intersection of the two circles—the distinction between short a, i and u is made by both speakers, but Arabic lacks the mid articulation of short vowels, while Hebrew lacks the distinction of vowel length.

The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonetic point of view. Note that the two circles are totally separate—none of the vowel-sounds made by speakers of one language is made by speakers of the other.

Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.
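The minimal-pair criterion described above is mechanical enough to sketch as a small search over phonetic transcriptions: two equal-length transcriptions that differ in exactly one segment form a minimal pair, and the differing segments are candidates for separate phonemes. The Hindi-style data below (pal 'moment', phal 'fruit', bal 'strength') is a standard textbook example; the segmentation into tuples is this sketch's own assumption.

```python
from itertools import combinations

def minimal_pairs(words):
    """Return pairs of equal-length transcriptions differing in exactly one segment.

    Each word maps to a tuple of segments, e.g. ('pʰ', 'ə', 'l').
    """
    pairs = []
    for (w1, t1), (w2, t2) in combinations(words.items(), 2):
        if len(t1) == len(t2) and sum(a != b for a, b in zip(t1, t2)) == 1:
            pairs.append((w1, w2))
    return pairs

data = {
    "pal":  ("p", "ə", "l"),   # 'moment'
    "phal": ("pʰ", "ə", "l"),  # 'fruit'
    "bal":  ("b", "ə", "l"),   # 'strength'
}
print(minimal_pairs(data))  # every pair differs only in the initial stop
```

Here the pairs show that plain /p/, aspirated /pʰ/, and voiced /b/ all contrast in the same position, so all three must be assigned to separate phonemes, exactly the situation the text describes for Hindi.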

The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.

The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.

Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.

Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.

Other topics in phonology

In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, accent, and intonation.

Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order that can be feeding or bleeding[12]), as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
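The effect of rule ordering can be demonstrated with the classic pair of Canadian-English-style rules, raising and flapping, in heavily simplified form (plain string rewriting, invented transcriptions). When flapping applies first it bleeds raising by destroying its voiceless-consonant environment; in the opposite order both rules apply.

```python
import re

VOWELS = "aeiouəɪ"

def raising(form: str) -> str:
    # aɪ -> ʌɪ before a voiceless consonant (here just p, t, k, s)
    return re.sub(r"aɪ(?=[ptks])", "ʌɪ", form)

def flapping(form: str) -> str:
    # t -> ɾ between vowel symbols (the flap [ɾ] is voiced)
    return re.sub(rf"(?<=[{VOWELS}])t(?=[{VOWELS}])", "ɾ", form)

def derive(form: str, rules) -> str:
    for rule in rules:
        form = rule(form)
    return form

underlying = "raɪtər"  # 'writer', simplified transcription
print(derive(underlying, [raising, flapping]))  # raising then flapping: rʌɪɾər
print(derive(underlying, [flapping, raising]))  # flapping bleeds raising: raɪɾər
```

The two orders produce different surface forms from the same underlying form, which is why a grammar with ordered rules must specify the order as part of the language's phonology.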

The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sub-lexical units are not instantiated as speech sounds.

Notes

  1. ^ Stokoe, William C. (1960, 1978). Sign language structure: An outline of the visual communication systems of the American deaf. Studies in linguistics, Occasional papers, No. 8, Dept. of Anthropology and Linguistics, University at Buffalo. 2d ed., Silver Spring: Md: Linstok Press.
  2. ^ a b Lass, Roger (1998). Phonology: An Introduction to Basic Concepts. Cambridge: Cambridge University Press. p. 1. ISBN 0-521-23728-9 (hbk); ISBN 0-521-28183-0 (pbk). Retrieved 8 January 2011.
  3. ^ Carr, Philip (2003). English Phonetics and Phonology: An Introduction. Oxford: Blackwell Publishing. ISBN 0-631-19775-3 (hbk); ISBN 0-631-19776-1 (pbk). Retrieved 8 January 2011.
  4. ^ a b Trubetzkoy N., Grundzüge der Phonologie (published 1939), translated by C. Baltaxe as Principles of Phonology, University of California Press, 1969
  5. ^ Clark, John; Yallop, Colin; Fletcher, Janet (2007). An Introduction to Phonetics and Phonology (3rd ed.). Oxford: Blackwell Publishing. ISBN 978-1-4051-3083-7. Retrieved 8 January 2011.
  6. ^ Anon (probably Louis Havet). (1873) "Sur la nature des consonnes nasales". Revue critique d'histoire et de littérature 13, No. 23, p. 368.
  7. ^ Roman Jakobson, Selected Writings: Word and Language, Volume 2, Walter de Gruyter, 1971, p. 396.
  8. ^ E. F. K. Koerner, Ferdinand de Saussure: Origin and Development of His Linguistic Thought in Western Studies of Language. A contribution to the history and theory of linguistics, Braunschweig: Friedrich Vieweg & Sohn [Oxford & Elmsford, N.Y.: Pergamon Press], 1973.
  9. ^ Hale, Mark; Reiss, Charles (2008). The Phonological Enterprise. Oxford, UK: Oxford University Press. ISBN 0-19-953397-0.
  10. ^ Hale, Mark; Reiss, Charles (2000). "Substance abuse and dysfunctionalism: Current trends in phonology". Linguistic Inquiry. 31: 157–169.
  11. ^ Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
  12. ^ Goldsmith 1995:1.

Bibliography

  • Anderson, John M.; and Ewen, Colin J. (1987). Principles of dependency phonology. Cambridge: Cambridge University Press.
  • Bloch, Bernard (1941). "Phonemic overlapping". American Speech. 16 (4): 278–284. doi:10.2307/486567. JSTOR 486567.
  • Bloomfield, Leonard. (1933). Language. New York: H. Holt and Company. (Revised version of Bloomfield's 1914 An introduction to the study of language).
  • Brentari, Diane (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
  • Chomsky, Noam. (1964). Current issues in linguistic theory. In J. A. Fodor and J. J. Katz (Eds.), The structure of language: Readings in the philosophy of language (pp. 91–112). Englewood Cliffs, NJ: Prentice-Hall.
  • Chomsky, Noam; and Halle, Morris. (1968). The sound pattern of English. New York: Harper & Row.
  • Clements, George N. (1985). "The geometry of phonological features". Phonology Yearbook. 2: 225–252. doi:10.1017/S0952675700000440.
  • Clements, George N.; and Samuel J. Keyser. (1983). CV phonology: A generative theory of the syllable. Linguistic inquiry monographs (No. 9). Cambridge, MA: MIT Press. ISBN 0-262-53047-3 (pbk); ISBN 0-262-03098-5 (hbk).
  • de Lacy, Paul, ed. (2007). The Cambridge Handbook of Phonology. Cambridge University Press. ISBN 0-521-84879-2. Retrieved 8 January 2011
  • Donegan, Patricia. (1985). On the Natural Phonology of Vowels. New York: Garland. ISBN 0-8240-5424-5.
  • Firth, J. R. (1948). "Sounds and prosodies". Transactions of the Philological Society. 47 (1): 127–152. doi:10.1111/j.1467-968X.1948.tb00556.x.
  • Gilbers, Dicky; de Hoop, Helen (1998). "Conflicting constraints: An introduction to optimality theory". Lingua. 104: 1–12. doi:10.1016/S0024-3841(97)00021-1.
  • Goldsmith, John A. (1979). The aims of autosegmental phonology. In D. A. Dinnsen (Ed.), Current approaches to phonological theory (pp. 202–222). Bloomington: Indiana University Press.
  • Goldsmith, John A. (1989). Autosegmental and metrical phonology: A new synthesis. Oxford: Basil Blackwell.
  • Goldsmith, John A. (1995). "Phonological Theory". In John A. Goldsmith (ed.). The Handbook of Phonological Theory. Blackwell Handbooks in Linguistics. Blackwell Publishers. ISBN 1-4051-5768-2.
  • Gussenhoven, Carlos; Jacobs, Haike. (1998). Understanding Phonology. London: Hodder & Arnold. (2nd edition 2005.)
  • Hale, Mark; Reiss, Charles (2008). The Phonological Enterprise. Oxford, UK: Oxford University Press. ISBN 0-19-953397-0.
  • Halle, Morris (1954). "The strategy of phonemics". Word. 10: 197–209.
  • Halle, Morris. (1959). The sound pattern of Russian. The Hague: Mouton.
  • Harris, Zellig. (1951). Methods in structural linguistics. Chicago: Chicago University Press.
  • Hockett, Charles F. (1955). A manual of phonology. Indiana University publications in anthropology and linguistics, memoirs II. Baltimore: Waverley Press.
  • Hooper, Joan B. (1976). An introduction to natural generative phonology. New York: Academic Press.
  • Jakobson, Roman (1949). "On the identification of phonemic entities". Travaux du Cercle Linguistique de Copenhague. 5: 205–213. doi:10.1080/01050206.1949.10416304.
  • Jakobson, Roman; Fant, Gunnar; and Halle, Morris. (1952). Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
  • Kaisse, Ellen M.; and Shaw, Patricia A. (1985). On the theory of lexical phonology. In C. Ewen and J. Anderson (Eds.), Phonology Yearbook 2 (pp. 1–30).
  • Kenstowicz, Michael. (1994). Phonology in generative grammar. Oxford: Basil Blackwell.
  • Ladefoged, Peter. (1982). A course in phonetics (2nd ed.). London: Harcourt Brace Jovanovich.
  • Martinet, André (1949). Phonology as functional phonetics. Oxford: Blackwell.
  • Martinet, André (1955). Économie des changements phonétiques: Traité de phonologie diachronique. Berne: A. Francke S.A.
  • Napoli, Donna Jo (1996). Linguistics: An Introduction. New York: Oxford University Press.
  • Pike, Kenneth Lee (1947). Phonemics: A technique for reducing languages to writing. Ann Arbor: University of Michigan Press.
  • Sandler, Wendy and Lillo-Martin, Diane. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press
  • Sapir, Edward (1925). "Sound patterns in language". Language. 1 (2): 37–51. doi:10.2307/409004. JSTOR 409004.
  • Sapir, Edward (1933). "La réalité psychologique des phonémes". Journal de Psychologie Normale et Pathologique. 30: 247–265.
  • de Saussure, Ferdinand. (1916). Cours de linguistique générale. Paris: Payot.
  • Stampe, David. (1979). A dissertation on natural phonology. New York: Garland.
  • Swadesh, Morris (1934). "The phonemic principle". Language. 10 (2): 117–129. doi:10.2307/409603. JSTOR 409603.
  • Trager, George L.; Bloch, Bernard (1941). "The syllabic phonemes of English". Language. 17 (3): 223–246. doi:10.2307/409203. JSTOR 409203.
  • Trubetzkoy, Nikolai. (1939). Grundzüge der Phonologie. Travaux du Cercle Linguistique de Prague 7.
  • Twaddell, William F. (1935). On defining the phoneme. Language monograph no. 16. Language.

Azerbaijani language

Azerbaijani, or Azeri, sometimes also Azeri Turkic or Azeri Turkish, is a term referring to two Turkic lects (Caucasian Azerbaijani and Iranian Azerbaijani) that are spoken primarily by the Azerbaijanis, who live mainly in Transcaucasia and Iran. Caucasian Azerbaijani and Iranian Azerbaijani have significant differences in phonology, lexicon, morphology, syntax, and loanwords. ISO 639-3 groups the two lects as a "macrolanguage". Azerbaijani has official status in the Republic of Azerbaijan and Dagestan (a federal subject of Russia), but it does not have official status in Iran, where the majority of Azerbaijanis live. It is also spoken to varying degrees in Azerbaijani communities of Georgia and Turkey and by diaspora communities, primarily in Europe and North America.

Both Azerbaijani lects are members of the Oghuz branch of the Turkic languages. The standardized form of Caucasian Azerbaijani (spoken in the Republic of Azerbaijan and Russia) is based on the Shirvani dialect, while Iranian Azerbaijani uses the Tabrizi dialect as its prestige variety. Azerbaijani is closely related to Turkish, Qashqai, Gagauz, Turkmen and Crimean Tatar, sharing varying degrees of mutual intelligibility with each of those languages.

Bilabial nasal

The bilabial nasal is a type of consonantal sound used in almost all spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨m⟩, and the equivalent X-SAMPA symbol is m. The bilabial nasal occurs in English, and it is the sound represented by "m" in map and rum.

It occurs nearly universally, and few languages (e.g. Mohawk) are known to lack this sound.

Close back rounded vowel

The close back rounded vowel, or high back rounded vowel, is a type of vowel sound used in many spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨u⟩, and the equivalent X-SAMPA symbol is u.

In most languages, this rounded vowel is pronounced with protruded lips ('endolabial'). However, in a few cases the lips are compressed ('exolabial').

The close back rounded vowel is almost identical featurally to the labio-velar approximant [w]. [u] alternates with [w] in certain languages, such as French, and in the diphthongs of some languages, [u̯] with the non-syllabic diacritic and [w] are used in different transcription systems to represent the same sound.

Dental, alveolar and postalveolar lateral approximants

The alveolar lateral approximant is a type of consonantal sound used in many spoken languages. The symbol in the International Phonetic Alphabet that represents dental, alveolar, and postalveolar lateral approximants is ⟨l⟩, and the equivalent X-SAMPA symbol is l.

As sonorants, lateral approximants are nearly always voiced. Voiceless lateral approximants, /l̥/, are common in Sino-Tibetan languages but uncommon elsewhere. In such cases, voicing typically starts about halfway through the hold of the consonant. No language is known to contrast such a sound with a voiceless alveolar lateral fricative [ɬ].

In a number of languages, including most varieties of English, the phoneme /l/ becomes velarized ("dark l") in certain contexts. By contrast, the non-velarized form is the "clear l" (also known as: "light l"), which occurs before and between vowels in certain English standards. Some languages have only clear l. Others may not have a clear l at all, or only before front vowels (especially [i]).

Dental, alveolar and postalveolar trills

The alveolar trill is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents dental, alveolar, and postalveolar trills is ⟨r⟩, and the equivalent X-SAMPA symbol is r. It is commonly called the rolled R, rolling R, or trilled R. Quite often, ⟨r⟩ is used in phonemic transcriptions (especially those found in dictionaries) of languages like English and German that have rhotic consonants that are not an alveolar trill. That is partly for ease of typesetting and partly because ⟨r⟩ is the letter used in the orthographies of such languages.

In most Indo-European languages, the sound is at least occasionally allophonic with an alveolar tap [ɾ], particularly in unstressed positions. Exceptions include Albanian, Spanish, Cypriot Greek, and a number of Armenian and Portuguese dialects, which treat them as distinct phonemes. In a few languages, such as Batsbi, the tap is the vastly preferred allophone, to such a degree that it is treated as a phoneme whilst the trill is not.

People with ankyloglossia may find it exceptionally difficult to articulate the sound because of the limited mobility of their tongues.

English phonology

Like many other languages, English has wide variation in pronunciation, both historically and from dialect to dialect. In general, however, the regional dialects of English share a largely similar (but not identical) phonological system. Among other things, most dialects have vowel reduction in unstressed syllables and a complex set of phonological features that distinguish fortis and lenis consonants (stops, affricates, and fricatives). Most dialects of English preserve the consonant /w/ (spelled ⟨w⟩) and many preserve /θ, ð/ (spelled ⟨th⟩), while most other Germanic languages have shifted them to /v/ and /t, d/: compare English will (listen) and then (listen) with German will [vɪl] (listen) ('want') and denn [dɛn] (listen) ('because').

Phonological analysis of English often concentrates on, or uses as a reference point, one or more of the prestige or standard accents, such as Received Pronunciation for England, General American for the United States, and General Australian for Australia. Nevertheless, many other dialects of English are spoken, which have developed independently of these standardized accents, particularly regional dialects. Information about these standardized accents therefore functions only as a limited guide to English phonology as a whole, one that can be expanded upon as one becomes familiar with the many other dialects of English.

Georgian language

Georgian (ქართული ენა, translit.: kartuli ena, pronounced [kʰɑrtʰuli ɛnɑ]) is a Kartvelian language spoken by Georgians. It is the official language of Georgia. Georgian is written in its own writing system, the Georgian script. Georgian is the literary language for all regional subgroups of Georgians, including those who speak other Kartvelian languages: Svans, Mingrelians and the Laz.

Lao language

Lao, sometimes referred to as Laotian (ລາວ Lao or ພາສາລາວ Lao language), is a Kra–Dai language and the language of the ethnic Lao people. It is spoken in Laos, where it is the official language, as well as northeast Thailand, where it is usually referred to as Isan. Lao serves as a lingua franca among all citizens of Laos, who speak approximately 90 other languages, many of which are unrelated to Lao. Modern Lao is heavily influenced by the Thai language: a vast number of technical terms, as well as expressions in common usage, are adopted directly from Thai.

Like other Tai languages, Lao is a tonal language and has a complex system of relational markers. Spoken Lao is mutually intelligible with Thai and Isan, fellow Southwestern Tai languages, to such a degree that their speakers are able to effectively communicate with one another speaking their respective languages. These languages are written with slightly different scripts but are linguistically similar and effectively form a dialect continuum. Although there is no official standard, the Vientiane dialect became the de facto standard language in the second half of the 20th century.

Near-open front unrounded vowel

The near-open front unrounded vowel, or near-low front unrounded vowel, is a type of vowel sound used in some spoken languages. Acoustically it is simply an open or low front unrounded vowel. The symbol in the International Phonetic Alphabet that represents this sound is ⟨æ⟩, the lowercase version of the ⟨Æ⟩ ligature. Both the symbol and the sound are commonly referred to as "ash".

The rounded counterpart of [æ], the near-open front rounded vowel (for which the IPA provides no separate symbol) has been reported to occur allophonically in Danish; see open front rounded vowel for more information.

In practice, ⟨æ⟩ is sometimes used to represent the open front unrounded vowel; see the introduction to that page for more information.

Open-mid front unrounded vowel

The open-mid front unrounded vowel, or low-mid front unrounded vowel, is a type of vowel sound used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is a Latinized variant of the Greek lowercase epsilon, ⟨ɛ⟩.

Sinhala language

Sinhala (සිංහල; siṁhala, [ˈsiŋɦələ]), also known as Sinhalese, is the native language of the Sinhalese people, who make up the largest ethnic group in Sri Lanka, numbering about 16 million. Sinhala is also spoken as a second language by other ethnic groups in Sri Lanka, totalling about four million. It belongs to the Indo-Aryan branch of the Indo-European languages. Sinhala is written using the Sinhala script, one of the Brahmic scripts and a descendant of the ancient Indian Brahmi script, which is closely related to the Kadamba script. Sinhala is one of the official and national languages of Sri Lanka and, along with Pali, played a major role in the development of Theravada Buddhist literature. The oldest Sinhalese Prakrit inscriptions date from the third to second century BCE, following the arrival of Buddhism in Sri Lanka; the oldest extant literary works date from the ninth century. The closest relative of Sinhala is the Maldivian language.

Sinhala has two main varieties – written and spoken. It is a good example of the linguistic phenomenon known as diglossia.

Stress (linguistics)

In linguistics, and particularly phonology, stress or accent is relative emphasis or prominence given to a certain syllable in a word, or to a certain word in a phrase or sentence. This emphasis is typically caused by such properties as increased loudness and vowel length, full articulation of the vowel, and changes in pitch. The terms stress and accent are often used synonymously in this context, but they are sometimes distinguished. For example, when emphasis is produced through pitch alone, it is called pitch accent, and when produced through length alone, it is called quantitative accent. When caused by a combination of various intensified properties, it is called stress accent or dynamic accent; English uses what is called variable stress accent.

Since stress can be realised through a wide range of phonetic properties, such as loudness, vowel length, and pitch, which are also used for other linguistic functions, it is difficult to define stress solely phonetically.

The stress placed on syllables within words is called word stress or lexical stress. Some languages have fixed stress, meaning that the stress on virtually any multisyllabic word falls on a particular syllable, such as the penultimate (e.g. Polish) or the first. Other languages, like English and Russian, have variable stress, where the position of stress in a word is not predictable in that way. Sometimes more than one level of stress, such as primary stress and secondary stress, may be identified. Some languages, such as French and Mandarin, are sometimes analyzed as lacking lexical stress entirely.

The stress placed on words within sentences is called sentence stress or prosodic stress. This is one of the three components of prosody, along with rhythm and intonation. It includes phrasal stress (the default emphasis of certain words within phrases or clauses) and contrastive stress (used to highlight an item, a word or occasionally just part of a word, that is given particular focus).

Voiced bilabial stop

The voiced bilabial stop is a type of consonantal sound, used in many spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨b⟩, and the equivalent X-SAMPA symbol is b. The voiced bilabial stop occurs in English, and it is the sound denoted by the letter ⟨b⟩ in obey. Many Indian languages, such as Hindustani, distinguish between breathy voiced /bʱ/ and plain /b/.

Voiced dental and alveolar stops

The voiced alveolar stop is a type of consonantal sound, used in many spoken languages. The symbol in the International Phonetic Alphabet that represents voiced dental, alveolar, and postalveolar stops is ⟨d⟩ (although the symbol ⟨d̪⟩ can be used to distinguish the dental stop, and ⟨d̠⟩ the postalveolar), and the equivalent X-SAMPA symbol is d.

Voiced velar stop

The voiced velar stop is a type of consonantal sound, used in many spoken languages.

Some languages have the voiced pre-velar stop, which is articulated slightly more front compared with the place of articulation of the prototypical voiced velar stop, though not as front as the prototypical voiced palatal stop.

Conversely, some languages have the voiced post-velar stop, which is articulated slightly behind the place of articulation of the prototypical voiced velar stop, though not as back as the prototypical voiced uvular stop.

Voiceless dental and alveolar stops

The voiceless alveolar stop is a type of consonantal sound used in almost all spoken languages. The symbol in the International Phonetic Alphabet that represents voiceless dental, alveolar, and postalveolar stops is ⟨t⟩, and the equivalent X-SAMPA symbol is t. The dental stop can be distinguished with the underbridge diacritic, ⟨t̪⟩, and the postalveolar with a retraction line, ⟨t̠⟩; the Extensions to the IPA provide a double underline diacritic that can be used to explicitly specify an alveolar pronunciation, ⟨t͇⟩.
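In digital text, these diacritic-bearing symbols are not single characters but a base letter followed by a Unicode combining mark. A minimal Python sketch, using only the standard unicodedata module and the standard Unicode code points for these marks, shows how the three variants above are composed:

```python
import unicodedata

# Each diacritic is a combining mark that attaches to the base letter "t".
BASE = "t"
marks = [
    ("\u032A", "dental, underbridge"),          # COMBINING BRIDGE BELOW -> t̪
    ("\u0320", "postalveolar, retraction line"),  # COMBINING MINUS SIGN BELOW -> t̠
    ("\u0347", "alveolar (ExtIPA), double underline"),  # COMBINING EQUALS SIGN BELOW -> t͇
]

for mark, label in marks:
    symbol = BASE + mark  # base letter + combining mark renders as one glyph
    print(f"{symbol}\tt + U+{ord(mark):04X} {unicodedata.name(mark)}\t({label})")
```

Because the diacritic is a separate code point, string operations see two characters where a reader sees one glyph: len("t̪") is 2.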

The [t] sound is very common cross-linguistically; the most common consonant phonemes of the world's languages are [t], [k] and [p]. Most languages have at least a plain [t], and some distinguish more than one variety. Languages lacking a [t] include Hawaiian (except for Niʻihau; Hawaiian uses a voiceless velar stop [k] for loanwords with [t]), colloquial Samoan (which also lacks an [n]), Abau, and Nǁng of South Africa.

Voiceless labiodental fricative

The voiceless labiodental fricative is a type of consonantal sound, used in a number of spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨f⟩.

Voiceless postalveolar fricative

Voiceless fricatives produced in the postalveolar region include the voiceless palato-alveolar fricative [ʃ], the voiceless postalveolar non-sibilant fricative [ɹ̠̊˔], the voiceless retroflex fricative [ʂ], and the voiceless alveolo-palatal fricative [ɕ]. This article discusses the first two.

Voiceless velar stop

The voiceless velar stop or voiceless velar plosive is a type of consonantal sound used in almost all spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨k⟩, and the equivalent X-SAMPA symbol is k.

The [k] sound is very common cross-linguistically. Most languages have at least a plain [k], and some distinguish more than one variety. Most Indo-Aryan languages, such as Hindi and Bengali, have a two-way contrast between aspirated and plain [k]. Only a few languages lack a voiceless velar stop, e.g. Tahitian.

Some languages have the voiceless pre-velar stop, which is articulated slightly more front compared with the place of articulation of the prototypical voiceless velar stop, though not as front as the prototypical voiceless palatal stop.

Conversely, some languages have the voiceless post-velar stop, which is articulated slightly behind the place of articulation of the prototypical voiceless velar stop, though not as back as the prototypical voiceless uvular stop.

Phonologies of the world's languages

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.