Sign languages (also known as signed languages) are languages that use the visual-manual modality to convey meaning. Language is expressed via the manual signstream in combination with non-manual elements. Sign languages are full-fledged natural languages with their own grammar and lexicon. This means that sign languages are not universal and they are not mutually intelligible, although there are also striking similarities among sign languages.
Linguists consider both spoken and signed communication to be types of natural language, meaning that both emerged through a gradual, protracted developmental process and evolved over time without meticulous planning. Sign language should not be confused with body language, a type of nonverbal communication.
Wherever communities of deaf people exist, sign languages have developed as useful means of communication, and they form the core of local deaf cultures. Although signing is used primarily by the deaf and hard of hearing, it is also used by hearing individuals, such as those physically unable to speak, those who have trouble with spoken language due to a disability or condition (augmentative and alternative communication), or those with deaf family members, such as children of deaf adults.
It is unclear how many sign languages currently exist worldwide. Each country generally has its own, native sign language, and some have more than one. The 2013 edition of Ethnologue lists 137 sign languages. Some sign languages have obtained some form of legal recognition, while others have no status at all.
Linguists distinguish natural sign languages from other systems that are precursors to them or derived from them, such as invented manual codes for spoken languages, home sign, "baby sign", and signs learned by non-human primates.
Groups of deaf people have used sign languages throughout history. One of the earliest written records of a sign language is from the fifth century BC, in Plato's Cratylus, where Socrates says: "If we hadn't a voice or a tongue, and wanted to express things to one another, wouldn't we try to make signs by moving our hands, head, and the rest of our body, just as dumb people do at present?"
Until the 19th century, most of what is known about historical sign languages is limited to the manual alphabets (fingerspelling systems) that were invented to facilitate transfer of words from a spoken language to a sign language, rather than documentation of the language itself. Pedro Ponce de León (1520–1584) is said to have developed the first manual alphabet.
In 1620, Juan Pablo Bonet published Reducción de las letras y arte para enseñar a hablar a los mudos (‘Reduction of letters and art for teaching mute people to speak’) in Madrid. It is considered the first modern treatise of sign language phonetics, setting out a method of oral education for deaf people and a manual alphabet.
In Britain, manual alphabets were also in use for a number of purposes, such as secret communication, public speaking, or communication by deaf people. In 1648, John Bulwer described "Master Babington", a deaf man proficient in the use of a manual alphabet, "contryved on the joynts of his fingers", whose wife could converse with him easily, even in the dark through the use of tactile signing.
In 1680, George Dalgarno published Didascalocophus, or, The deaf and dumb mans tutor, in which he presented his own method of deaf education, including an "arthrological" alphabet, where letters are indicated by pointing to different joints of the fingers and palm of the left hand. Arthrological systems had been in use by hearing people for some time; some have speculated that they can be traced to early Ogham manual alphabets.
The vowels of this alphabet have survived in the contemporary alphabets used in British Sign Language, Auslan and New Zealand Sign Language. The earliest known printed pictures of consonants of the modern two-handed alphabet appeared in 1698 with Digiti Lingua (Latin for Language [or Tongue] of the Finger), a pamphlet by an anonymous author who was himself unable to speak. He suggested that the manual alphabet could also be used by mutes, for silence and secrecy, or purely for entertainment. Nine of its letters can be traced to earlier alphabets, and 17 letters of the modern two-handed alphabet can be found among the two sets of 26 handshapes depicted.
Charles de La Fin published a book in 1692 describing an alphabetic system where pointing to a body part represented the first letter of the part (e.g. Brow=B), and vowels were located on the fingertips as with the other British systems. He described such codes for both English and Latin.
By 1720, the British manual alphabet had found more or less its present form. Descendants of this alphabet have been used by deaf communities (or at least in classrooms) in the former British colonies of India, Australia, New Zealand, Uganda and South Africa, as well as in the republics and provinces of the former Yugoslavia, Grand Cayman Island in the Caribbean, Indonesia, Norway, Germany and the United States.
Frenchman Charles-Michel de l'Épée published his manual alphabet in the 18th century, which has survived basically unchanged in France and North America until the present time. In 1755, Abbé de l'Épée founded the first school for deaf children in Paris; Laurent Clerc was arguably its most famous graduate. Clerc went to the United States with Thomas Hopkins Gallaudet to found the American School for the Deaf in Hartford, Connecticut, in 1817. Gallaudet's son, Edward Miner Gallaudet, founded a school for the deaf in 1857 in Washington, D.C., which in 1864 became the National Deaf-Mute College. Now called Gallaudet University, it is still the only liberal arts university for deaf people in the world.
Sign languages generally do not have any linguistic relation to the spoken languages of the lands in which they arise. The correlation between sign and spoken languages is complex and varies more with the country than with the spoken language. For example, the US, Canada, UK, Australia and New Zealand all have English as their dominant language, but American Sign Language (ASL), used in the US and English-speaking Canada, is derived from French Sign Language, whereas the other three countries sign dialects of British, Australian and New Zealand Sign Language. Similarly, the sign languages of Spain and Mexico are very different, despite Spanish being the national language in each country, and the sign language used in Bolivia is based on ASL rather than any sign language that is used in a Spanish-speaking country. Variations also arise within a 'national' sign language which do not necessarily correspond to dialect differences in the national spoken language; rather, they can usually be correlated to the geographic location of residential schools for the deaf.
International Sign, formerly known as Gestuno, is used mainly at international deaf events such as the Deaflympics and meetings of the World Federation of the Deaf. While recent studies claim that International Sign is a kind of pidgin, they conclude that it is more complex than a typical pidgin and indeed is more like a full sign language. Although International Sign is the more commonly used term, it is sometimes also referred to as Gestuno, International Sign Pidgin, or International Gesture (IG). International Sign is the term used by the World Federation of the Deaf and other international organisations.
In linguistic terms, sign languages are as rich and complex as any spoken language, despite the common misconception that they are not "real languages". Professional linguists have studied many sign languages and found that they exhibit the fundamental properties that exist in all languages.
Sign languages are not mime—in other words, signs are conventional, often arbitrary and do not necessarily have a visual relationship to their referent, much as most spoken language is not onomatopoeic. While iconicity is more systematic and widespread in sign languages than in spoken ones, the difference is not categorical. The visual modality allows the human preference for close connections between form and meaning, present but suppressed in spoken languages, to be more fully expressed. This does not mean that sign languages are a visual rendition of a spoken language. They have complex grammars of their own and can be used to discuss any topic, from the simple and concrete to the lofty and abstract.
Sign languages, like spoken languages, organize elementary, meaningless units called phonemes into meaningful semantic units. (These were once called cheremes (from the Greek word for "hand") in the case of sign languages, by analogy to the phonemes (from Greek for "voice") of spoken languages, but are now also called phonemes, since the function is the same.) This is often called duality of patterning. As in spoken languages, these meaningless units are represented as (combinations of) features, although coarser distinctions are often also made in terms of handshape (or handform), orientation, location (or place of articulation), movement, and non-manual expression. More generally, both sign and spoken languages share the characteristics that linguists have found in all natural human languages, such as transitoriness, semanticity, arbitrariness, productivity, and cultural transmission.
Common linguistic features of many sign languages are the occurrence of classifiers, a high degree of inflection by means of changes of movement, and a topic-comment syntax. More than spoken languages, sign languages can convey meaning by simultaneous means, e.g. by the use of space, two manual articulators, and the signer's face and body. Though there is still much discussion on the topic of iconicity in sign languages, classifiers are generally considered to be highly iconic, as these complex constructions "function as predicates that may express any or all of the following: motion, position, stative-descriptive, or handling information". The term classifier is not, however, used by everyone working on these constructions; across the field of sign language linguistics the same constructions are also referred to by other terms.
Today, linguists study sign languages as true languages, part of the field of linguistics. However, the category "sign languages" was not added to the Linguistic Bibliography / Bibliographie Linguistique until the 1988 volume, when it appeared with 39 entries.
There is a common misconception that sign languages are somehow dependent on spoken languages: that they are spoken language expressed in signs, or that they were invented by hearing people. Similarities in language processing in the brain between signed and spoken languages have further perpetuated this misconception. Hearing teachers in deaf schools, such as Charles-Michel de l'Épée or Thomas Hopkins Gallaudet, are often incorrectly referred to as "inventors" of sign language. Instead, sign languages, like all natural languages, are developed by the people who use them, in this case, deaf people, who may have little or no knowledge of any spoken language.
As a sign language develops, it sometimes borrows elements from spoken languages, just as all languages borrow from other languages that they are in contact with. Sign languages vary in how and how much they borrow from spoken languages. In many sign languages, a manual alphabet (fingerspelling) may be used in signed communication to borrow a word from a spoken language, by spelling out the letters. This is most commonly used for proper names of people and places; it is also used in some languages for concepts for which no sign is available at that moment, particularly if the people involved are to some extent bilingual in the spoken language. Fingerspelling can sometimes be a source of new signs, such as initialized signs, in which the handshape represents the first letter of a spoken word with the same meaning.
On the whole, though, sign languages are independent of spoken languages and follow their own paths of development. For example, British Sign Language (BSL) and American Sign Language (ASL) are quite different and mutually unintelligible, even though the hearing people of the United Kingdom and the United States share the same spoken language. The grammars of sign languages do not usually resemble those of spoken languages used in the same geographical area; in fact, in terms of syntax, ASL shares more with spoken Japanese than it does with English.
Similarly, countries which use a single spoken language throughout may have two or more sign languages, or an area that contains more than one spoken language might use only one sign language. South Africa, which has 11 official spoken languages and a similar number of other widely used spoken languages, is a good example of this. It has only one sign language with two variants due to its history of having two major educational institutions for the deaf which have served different geographic areas of the country.
Sign languages exploit the unique features of the visual medium (sight), but may also exploit tactile features (tactile sign languages). Spoken language is by and large linear; only one sound can be made or received at a time. Sign language, on the other hand, is visual and, hence, can use a simultaneous expression, although this is limited articulatorily and linguistically. Visual perception allows processing of simultaneous information.
One way in which many sign languages take advantage of the spatial nature of the language is through the use of classifiers. Classifiers allow a signer to spatially show a referent's type, size, shape, movement, or extent.
The large focus on the possibility of simultaneity in sign languages in contrast to spoken languages is sometimes exaggerated, though. The use of two manual articulators is subject to motor constraints, resulting in a large extent of symmetry or signing with one articulator only. Further, sign languages, just like spoken languages, depend on linear sequencing of signs to form sentences; the greater use of simultaneity is mostly seen in the morphology (internal structure of individual signs).
Sign languages convey much of their prosody through non-manual elements. Postures or movements of the body, head, eyebrows, eyes, cheeks, and mouth are used in various combinations to show several categories of information, including lexical distinction, grammatical structure, adjectival or adverbial content, and discourse functions.
At the lexical level, signs can be lexically specified for non-manual elements in addition to the manual articulation. For instance, facial expressions may accompany verbs of emotion, as in the sign for angry in Czech Sign Language. Non-manual elements may also be lexically contrastive. For example, in ASL (American Sign Language), facial components distinguish some signs from other signs. An example is the sign translated as not yet, which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features the sign would be interpreted as late. Mouthings, which are (parts of) spoken words accompanying lexical signs, can also be contrastive, as in the manually identical signs for doctor and battery in Sign Language of the Netherlands.
While the content of a signed sentence is produced manually, many grammatical functions are produced non-manually (i.e., with the face and the torso). Such functions include questions, negation, relative clauses and topicalization. ASL and BSL use similar non-manual marking for yes/no questions, for example. They are shown through raised eyebrows and a forward head tilt.
Some adjectival and adverbial information is conveyed through non-manual elements, but what these elements are varies from language to language. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means 'carelessly', but a similar non-manual in BSL means 'boring' or 'unpleasant'.
Discourse functions such as turn taking are largely regulated through head movement and eye gaze. Since the addressee in a signed conversation must be watching the signer, a signer can avoid letting the other person have a turn by not looking at them, or can indicate that the other person may have a turn by making eye contact.
The first studies on iconicity in ASL were published in the late 1970s and early 1980s. Many early sign language linguists rejected the notion that iconicity was an important aspect of the language. Though they recognized that certain aspects of the language seemed iconic, they considered this to be merely extralinguistic, a property which did not influence the language. However, mimetic aspects of sign language (signs that imitate, mimic, or represent) are found in abundance across a wide variety of sign languages. For example, when deaf children learning sign language try to express something but do not know the associated sign, they will often invent an iconic sign that displays mimetic properties. Though it never disappears from a particular sign language, iconicity is gradually weakened as forms of sign languages become more customary and are subsequently grammaticized. As a form becomes more conventional, it is disseminated in a methodical way phonologically to the rest of the sign language community. Frishberg (1975) wrote a very influential paper addressing the relationship between arbitrariness and iconicity in ASL. She concluded that though originally present in many signs, iconicity is degraded over time through the application of grammatical processes. In other words, over time, the natural processes of regularization in the language obscure any iconically motivated features of the sign.
Some researchers have suggested that the properties of ASL give it a clear advantage in terms of learning and memory. Psychologist Roger Brown was one of the first to document this benefit. In his study, Brown found that when children were taught signs that had high levels of iconic mapping they were significantly more likely to recall the signs in a later memory task than when they were taught signs that had little or no iconic properties.
A central task for the pioneers of sign language linguistics was trying to prove that ASL was a real language and not merely a collection of gestures or "English on the hands." One of the prevailing beliefs at this time was that 'real languages' must consist of an arbitrary relationship between form and meaning. Thus, if ASL consisted of signs that had iconic form-meaning relationship, it could not be considered a real language. As a result, iconicity as a whole was largely neglected in research of sign languages.
The cognitive linguistics perspective rejects a more traditional definition of iconicity as a relationship between linguistic form and a concrete, real-world referent. Rather, it is a set of selected correspondences between the form and meaning of a sign. In this view, iconicity is grounded in a language user's mental representation ("construal" in cognitive grammar). It is defined as a fully grammatical and central aspect of a sign language rather than a peripheral phenomenon.
The cognitive linguistics perspective allows for some signs to be fully iconic or partially iconic given the number of correspondences between the possible parameters of form and meaning. In this way, the Israeli Sign Language (ISL) sign for "ask" has parts of its form that are iconic ("movement away from the mouth" means "something coming from the mouth"), and parts that are arbitrary (the handshape, and the orientation).
Many signs have metaphoric mappings as well as iconic or metonymic ones. For these signs there are three-way correspondences between a form, a concrete source and an abstract target meaning. The ASL sign LEARN has this three-way correspondence. The abstract target meaning is "learning"; the concrete source is putting objects into the head from books; and the form is a grasping hand moving from an open palm to the forehead. The iconic correspondence is between form and concrete source, and the metaphorical correspondence is between concrete source and abstract target meaning. Because the concrete source is connected to two correspondences, linguists refer to metaphorical signs as "double mapped".
Although sign languages have emerged naturally in deaf communities alongside or among spoken languages, they are unrelated to spoken languages and have different grammatical structures at their core.
Sign languages may be classified by how they arise.
In non-signing communities, home sign is not a full language, but closer to a pidgin. It arises where a deaf child has no contact with other deaf children and is not educated in sign, and it is amorphous and generally idiosyncratic to the particular family. Such systems are not generally passed on from one generation to the next. Where they are passed on, creolization would be expected to occur, resulting in a full language. However, home sign may also be closer to full language in communities where the hearing population has a gestural mode of language; examples include various Australian Aboriginal sign languages and gestural systems across West Africa, such as Mofu-Gudur in Cameroon.
A village sign language is a local indigenous language that typically arises over several generations in a relatively insular community with a high incidence of deafness, and is used both by the deaf and by a significant portion of the hearing community, who have deaf family and friends. The most famous of these is probably the extinct Martha's Vineyard Sign Language of the US, but there are also numerous village languages scattered throughout Africa, Asia, and America.
Deaf-community sign languages, on the other hand, arise where deaf people come together to form their own communities. These include school sign languages, such as Nicaraguan Sign Language, which develop in the student bodies of deaf schools which do not use sign as a language of instruction, as well as community languages such as Bamako Sign Language, which arise where generally uneducated deaf people congregate in urban centers for employment. At first, deaf-community sign languages are not generally known by the hearing population, in many cases not even by close family members. However, they may grow, in some cases becoming a language of instruction and receiving official recognition, as in the case of ASL.
Both contrast with speech-taboo languages such as the various Aboriginal Australian sign languages, which are developed by the hearing community and only used secondarily by the deaf. It is doubtful whether most of these are languages in their own right, rather than manual codes of spoken languages, though a few such as Yolngu Sign Language are independent of any particular spoken language. Hearing people may also develop sign to communicate with speakers of other languages, as in Plains Indian Sign Language; this was a contact signing system or pidgin that was evidently not used by deaf people in the Plains nations, though it presumably influenced home sign.
Language contact and creolization is common in the development of sign languages, making clear family classifications difficult – it is often unclear whether lexical similarity is due to borrowing or a common parent language, or whether there was one or several parent languages, such as several village languages merging into a Deaf-community language. Contact occurs between sign languages, between sign and spoken languages (contact sign, a kind of pidgin), and between sign languages and gestural systems used by the broader community. One author has speculated that Adamorobe Sign Language, a village sign language of Ghana, may be related to the "gestural trade jargon used in the markets throughout West Africa", in vocabulary and areal features including prosody and phonetics.
The only comprehensive classification along these lines going beyond a simple listing of languages dates back to 1991. The classification is based on the 69 sign languages from the 1988 edition of Ethnologue that were known at the time of the 1989 conference on sign languages in Montreal and 11 more languages the author added after the conference.
In his classification, the author distinguishes between primary and auxiliary sign languages as well as between single languages and names that are thought to refer to more than one language. The prototype-A class of languages includes all those sign languages that seemingly cannot be derived from any other language. Prototype-R languages are languages that are remotely modelled on a prototype-A language (in many cases thought to have been French Sign Language) by a process Kroeber (1940) called "stimulus diffusion". The families of BSL, DGS, JSL, LSF (and possibly LSG) were the products of creolization and relexification of prototype languages. Creolization is seen as enriching overt morphology in sign languages, as compared to reducing overt morphology in spoken languages.
Linguistic typology (going back to Edward Sapir) is based on word structure and distinguishes morphological classes such as agglutinating/concatenating, inflectional, polysynthetic, incorporating, and isolating ones.
Sign languages vary in word-order typology. For example, Austrian Sign Language, Japanese Sign Language and Indo-Pakistani Sign Language are subject-object-verb (SOV), while ASL is subject-verb-object (SVO). Influence from the surrounding spoken languages is not improbable.
Sign languages tend to be incorporating classifier languages, where a classifier handshape representing the object is incorporated into those transitive verbs which allow such modification. For a similar group of intransitive verbs (especially motion verbs), it is the subject which is incorporated. Only in a very few sign languages (for instance Japanese Sign Language) are agents ever incorporated. In this way, since subjects of intransitives are treated similarly to objects of transitives, incorporation in sign languages can be said to follow an ergative pattern.
Brentari classifies sign languages as a whole group, determined by the medium of communication (visual instead of auditory), as monosyllabic and polymorphemic. This means that one syllable (i.e. one word, one sign) can express several morphemes; for example, the subject and object of a verb determine the direction of the verb's movement (inflection).
Children who are exposed to a sign language from birth will acquire it, just as hearing children acquire their native spoken language.
The critical period hypothesis suggests that language, spoken or signed, is more easily acquired as a child at a young age than as an adult because of the plasticity of the child's brain. In a study done at McGill University, researchers found that American Sign Language users who acquired the language natively (from birth) performed better when asked to copy videos of ASL sentences than ASL users who acquired the language later in life. They also found that there are differences in the grammatical morphology of ASL sentences between the two groups, all suggesting that there is a very important critical period in learning signed languages.
The acquisition of non-manual features follows an interesting pattern: when a word that always has a particular non-manual feature associated with it (such as a wh-question word) is learned, the non-manual aspects are attached to the word but do not have the flexibility associated with adult use. At a certain point, the non-manual features are dropped and the word is produced with no facial expression. After a few months, the non-manuals reappear, this time being used the way adult signers would use them.
Sign languages do not have a traditional or formal written form. Many deaf people do not see a need to write their own language.
Several ways to represent sign languages in written form have been developed.
So far, there is no consensus regarding the written form of sign language. Except for SignWriting, none are widely used. Maria Galea writes that SignWriting "is becoming widespread, uncontainable and untraceable. In the same way that works written in and about a well developed writing system such as the Latin script, the time has arrived where SW is so widespread, that it is impossible in the same way to list all works that have been produced using this writing system and that have been written about this writing system." In 2015, the Federal University of Santa Catarina accepted a dissertation written in Brazilian Sign Language using Sutton SignWriting for a master's degree in linguistics. The dissertation "The Writing of Grammatical Non-Manual Expressions in Sentences in LIBRAS Using the SignWriting System" by João Paulo Ampessan states that "the data indicate the need for [non-manual expressions] usage in writing sign language".
For a native signer, sign perception influences how the mind makes sense of their visual language experience. For example, a handshape may vary based on the other signs made before or after it, but these variations are arranged into perceptual categories during its development. The mind detects handshape contrasts but groups similar handshapes together in one category; different handshapes are stored in separate categories. The mind ignores some of the similarities between different perceptual categories, while preserving the visual information within each perceptual category of handshape variation.
When Deaf people constitute a relatively small proportion of the general population, Deaf communities often develop that are distinct from the surrounding hearing community. These Deaf communities are very widespread in the world, associated especially with sign languages used in urban areas and throughout a nation, and the cultures they have developed are very rich.
One example of sign language variation in the Deaf community is Black ASL. This sign language developed in the Black Deaf community as a variant during the American era of segregation and racism, when young Black Deaf students were forced to attend separate schools from their white Deaf peers.
On occasion, where the prevalence of deaf people is high enough, a deaf sign language has been taken up by an entire local community, forming what is sometimes called a "village sign language" or "shared signing community". Typically this happens in small, tightly integrated communities with a closed gene pool. Famous examples include Martha's Vineyard Sign Language in the United States and Al-Sayyid Bedouin Sign Language in Israel.
In such communities deaf people are generally well integrated in the general community and not socially disadvantaged, so much so that it is difficult to speak of a separate "Deaf" community.
Many Australian Aboriginal sign languages arose in a context of extensive speech taboos, such as during mourning and initiation rites. They are or were especially highly developed among the Warlpiri, Warumungu, Dieri, Kaytetye, Arrernte, and Warlmanpa, and are based on their respective spoken languages.
A pidgin sign language arose among tribes of American Indians in the Great Plains region of North America (see Plains Indian Sign Language). It was used by hearing people to communicate among tribes with different spoken languages, as well as by deaf people. Today there are users especially among the Crow, Cheyenne, and Arapaho. Unlike Australian Aboriginal sign languages, it shares the spatial grammar of deaf sign languages. In the 1500s, the Spanish explorer Cabeza de Vaca observed natives in the western part of modern-day Florida using sign language, and in the mid-16th century Coronado mentioned that communication with the Tonkawa using signs was possible without a translator. Whether or not these gesture systems reached the stage at which they could properly be called languages is still up for debate. There are estimates indicating that as many as 2% of Native Americans are seriously or completely deaf, a rate more than twice the national average.
Signs may also be used by hearing people for manual communication in secret situations, such as hunting, in noisy environments, underwater, through windows or at a distance.
Some sign languages have obtained some form of legal recognition, while others have no status at all. Sarah Batterbury has argued that sign languages should be recognized and supported not merely as an accommodation for the disabled, but as the communication medium of language communities.
One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair – two deaf users were able to freely communicate with each other between the fair and another city. However, video communication did not become widely available until sufficient bandwidth for the high volume of video data became available in the early 2000s.
The Internet now allows deaf people to talk via a video link, either with a special-purpose videophone designed for use with sign language or with "off-the-shelf" video services designed for use with broadband and an ordinary computer webcam. The special videophones that are designed for sign language communication may provide better quality than 'off-the-shelf' services and may use data compression methods specifically designed to maximize the intelligibility of sign languages. Some advanced equipment enables a person to remotely control the other person's video camera, in order to zoom in and out or to point the camera better to understand the signing.
In order to facilitate communication between deaf and hearing people, sign language interpreters are often used. Such activities involve considerable effort on the part of the interpreter, since sign languages are distinct natural languages with their own syntax, different from any spoken language.
The interpretation flow is normally between a sign language and a spoken language that are customarily used in the same country, such as French Sign Language (LSF) and spoken French in France, Spanish Sign Language (LSE) and spoken Spanish in Spain, British Sign Language (BSL) and spoken English in the U.K., and American Sign Language (ASL) and spoken English in the US and most of anglophone Canada (since BSL and ASL are distinct sign languages both used in English-speaking countries). Sign language interpreters who can translate between signed and spoken languages that are not normally paired (such as between LSE and English) are also available, albeit less frequently.
With recent developments in artificial intelligence, deep-learning-based machine translation algorithms have been developed that automatically translate short videos of sign language sentences (often simple sentences consisting of a single clause) directly into written language.
Interpreters may be physically present with both parties to the conversation but, since the technological advancements in the early 2000s, provision of interpreters in remote locations has become available. In video remote interpreting (VRI), the two clients (a sign language user and a hearing person who wish to communicate with each other) are in one location, and the interpreter is in another. The interpreter communicates with the sign language user via a video telecommunications link, and with the hearing person by an audio link. VRI can be used for situations in which no on-site interpreters are available.
However, VRI cannot be used for situations in which all parties are speaking via telephone alone. With video relay service (VRS), the sign language user, the interpreter, and the hearing person are in three separate locations, thus allowing the two clients to talk to each other on the phone through the interpreter.
Sign language is sometimes provided for television programmes. The signer usually appears in the bottom corner of the screen, with the programme being broadcast full size or slightly shrunk away from that corner. Typically, for press conferences such as those given by the Mayor of New York City, the signer appears to stage left or right of the public official, allowing both the speaker and the signer to be in frame at the same time.
In traditional analogue broadcasting, many programmes are repeated, often in the early hours of the morning, with a signer present, rather than having the signer appear during the main broadcast, because of the distraction the signer can cause for viewers who do not wish to see them. On the BBC, many programmes broadcast late at night or early in the morning are signed. Some emerging television technologies allow the viewer to turn the signer on and off in a similar manner to subtitles and closed captioning.
Legal requirements covering sign language on television vary from country to country. In the United Kingdom, the Broadcasting Act 1996 addressed the requirements for blind and deaf viewers, but has since been replaced by the Communications Act 2003.
As with any spoken language, sign languages are also vulnerable to becoming endangered. For example, a sign language used by a small community may be endangered and even abandoned as users shift to a sign language used by a larger community, as has happened with Hawai'i Sign Language, which is almost extinct except for a few elderly signers. Even national sign languages can be endangered; for example, New Zealand Sign Language is losing users. Methods are being developed to assess the language vitality of sign languages.
There are a number of communication systems that are similar in some respects to sign languages, while not having all the characteristics of a full sign language, particularly its grammatical structure. Many of these are either precursors to natural sign languages or are derived from them.
When Deaf and Hearing people interact, signing systems may be developed that use signs drawn from a natural sign language but used according to the grammar of the spoken language. In particular, when people devise one-for-one sign-for-word correspondences between spoken words (or even morphemes) and signs that represent them, the system that results is a manual code for a spoken language, rather than a natural sign language. Such systems may be invented in an attempt to help teach Deaf children the spoken language, and generally are not used outside an educational context.
It has become popular for hearing parents to teach signs (from ASL or some other sign language) to young hearing children. Because the muscles in babies' hands develop more quickly than those involved in speech, signing can offer an earlier means of communication: babies can usually produce signs before they can speak. This reduces parents' confusion when trying to work out what their child wants. When the child begins to speak, signing is usually abandoned, so the child does not go on to acquire the grammar of the sign language.
This is in contrast to hearing children who grow up with Deaf parents, who generally acquire the full sign language natively, the same as Deaf children of Deaf parents.
Informal, rudimentary sign systems are sometimes developed within a single family. For instance, when hearing parents with no sign language skills have a deaf child, the child may develop a system of signs naturally, unless repressed by the parents. The term for these mini-languages is home sign (sometimes "homesign" or "kitchen sign").
Home sign arises due to the absence of any other way to communicate. Within the span of a single lifetime and without the support or feedback of a community, the child naturally invents signs to help meet his or her communication needs, and may even develop a few grammatical rules for combining short sequences of signs. Still, this kind of system is inadequate for the intellectual development of a child and it comes nowhere near meeting the standards linguists use to describe a complete language. No type of home sign is recognized as a full language.
There have been several notable examples of scientists teaching signs to non-human primates, such as common chimpanzees, gorillas and orangutans, in order to communicate with them. However, linguists generally point out that this does not constitute knowledge of a human language as a complete system, rather than simply signs/words.
One theory of the evolution of human language states that it developed first as a gestural system, which later shifted to speech. An important question for this gestural theory is what caused the shift to vocalization.
American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.
ASL originated in the early 19th century in the American School for the Deaf (ASD) in West Hartford, Connecticut, from a situation of language contact. Since then, ASL use has propagated widely via schools for the deaf and Deaf community organizations. Despite its wide use, no accurate count of ASL users has been taken, though reliable estimates for American ASL users range from 250,000 to 500,000 persons, including a number of children of deaf adults. ASL users face stigma due to beliefs in the superiority of oral language to sign language, compounded by the fact that ASL is often glossed in English due to the lack of a standard writing system.
ASL signs have a number of phonemic components, including movement of the face and torso as well as the hands. ASL is not a form of pantomime, but iconicity does play a larger role in ASL than in spoken languages. English loan words are often borrowed through fingerspelling, although ASL grammar is unrelated to that of English. ASL has verbal agreement and aspectual marking and has a productive system of forming agglutinative classifiers. Many linguists believe ASL to be a subject–verb–object (SVO) language, but there are several alternative proposals to account for ASL word order.

Black American Sign Language
Black American Sign Language (BASL) or Black Sign Variation (BSV) is a dialect of American Sign Language (ASL) used most commonly by deaf African Americans in the United States. The divergence from ASL was influenced largely by the segregation of schools in the American South. Like other schools at the time, schools for the deaf were segregated based upon race, creating two language communities among deaf signers: White deaf signers at White schools and Black deaf signers at Black schools. Today, BASL is still used by signers in the South despite public schools having been legally desegregated since 1954.
Linguistically, BASL differs from other varieties of ASL in its phonology, syntax, and vocabulary. BASL tends to have a larger signing space, meaning that some signs are produced further away from the body than in other dialects. Signers of BASL also tend to prefer two-handed variants of signs, while signers of ASL tend to prefer one-handed variants. Some signs are different in BASL as well, with some borrowings from African American English.

British Sign Language
British Sign Language (BSL) is a sign language used in the United Kingdom (UK), and is the first or preferred language of some deaf people in the UK. There are 125,000 deaf adults in the UK who use BSL, plus an estimated 20,000 children. In 2011, 15,000 people living in England and Wales reported using BSL as their main language. The language makes use of space and involves movement of the hands, body, face, and head. Many thousands of people who are not deaf also use BSL, as hearing relatives of deaf people, as sign language interpreters, or as a result of other contact with the British deaf community.

Chinese Sign Language
Modern Chinese Sign Language (or CSL or ZGS; simplified Chinese: 中国手语; traditional Chinese: 中國手語; pinyin: Zhōngguó Shǒuyǔ) is the deaf sign language of the People's Republic of China. It is unrelated to Taiwanese Sign Language.
The first deaf school using Chinese Sign Language was founded by Nellie Thompson Mills, the wife of the American missionary C.R. Mills, in 1887. However, American Sign Language (ASL) did not have much influence on Chinese Sign Language (CSL). Schools, workshops and farms for the Deaf in different areas are the main ways CSL has spread in China. Deaf people who are not connected to these gathering places tend to use sets of gestures developed in their own homes, known as home sign.
The Chinese National Association of the Deaf (ROC) was created mostly by Deaf people from the United States. The main impetus for organizing the Deaf in China was to raise their quality of living, which lagged behind the standard of living provided for people with other disabilities. The members of the ROC worked together to improve the welfare of the Deaf, to encourage the education of Deaf people and Chinese Sign Language, and to promote the Deaf community in China.

Filipino Sign Language
Filipino Sign Language (FSL), or Philippine Sign Language, is a sign language originating in the Philippines. Like other sign languages, FSL is a unique language with its own grammar, syntax and morphology; it is neither based on nor resembles Filipino or English. Some researchers consider the indigenous signs of FSL to be at risk of being lost due to the increasing influence of foreign sign languages such as ASL. Republic Act 11106, the Filipino Sign Language Act, effective November 27, 2018, declared FSL the national sign language of the Filipino Deaf.

French Sign Language family
The French Sign Language (LSF) or Francosign family is a language family of sign languages which includes French Sign Language and American Sign Language.
The FSL family descends from Old French Sign Language, which developed among the deaf community in Paris. The earliest mention of Old French Sign Language is by the abbé Charles-Michel de l'Épée in the late 18th century, but it could have existed for centuries prior. Several European sign languages, such as Russian Sign Language, derive from it, as does American Sign Language, established when the French educator Laurent Clerc taught his language at the American School for the Deaf. Others, such as Spanish Sign Language, are thought to be related to French Sign Language even if they are not directly descended from it.

German Sign Language family
The German Sign Language family is a small language family of sign languages, including German Sign Language, Polish Sign Language and probably Israeli Sign Language. The latter also had influence from Austrian Sign Language, which is unrelated, and the parentage is not entirely clear.

Hawai'i Sign Language
Hawaiʻi Sign Language (HSL), also known as Old Hawaiʻi Sign Language and Pidgin Sign Language (PSL), is an indigenous sign language used in Hawaiʻi. Although historical records document its presence on the islands as early as the 1820s, it was not formally recognized until 2013, by linguists at the University of Hawai'i. It is the first new language to be discovered within the United States since the 1930s, and linguistic experts believe HSL may be the last undiscovered language in the country.

Although previously believed to be related to American Sign Language (ASL), the two languages are in fact unrelated. The initial research team interviewed 19 Deaf people and two children of Deaf parents on four islands, and found that eighty percent of HSL vocabulary differs from that of ASL, indicating that HSL is an independent language. There is also an HSL–ASL creole, Creole Hawai'i Sign Language (CHSL), used by approximately 40 individuals in the generations between those who signed HSL exclusively and those who sign ASL exclusively. However, since the 1940s ASL has almost fully replaced HSL on the islands of Hawai'i, and CHSL is likely to be lost within the next 50 years.

Prior to its recognition as a distinct language in 2013, HSL was undocumented. It is at risk of extinction due to its low number of signers and the adoption of ASL: with fewer than 30 signers remaining worldwide, HSL is considered critically endangered. Without documentation and revitalization efforts, such as those initiated by Dr. James Woodward, Dr. Barbara Earth, and Linda Lambrecht, the language may become dormant or extinct.

Indo-Pakistani Sign Language
Indo-Pakistani Sign Language (IPSL) is the predominant sign language in South Asia, used by at least several hundred thousand deaf signers (2003). As with many sign languages, it is difficult to estimate numbers with any certainty, as the Census of India does not list sign languages and most studies have focused on the north and on urban areas.

The Indian deaf population of 1.1 million is 98% illiterate. In line with oralist philosophy, deaf schools attempt early intervention with hearing aids and the like, but these are largely ineffective in an impoverished society. As of 1986, only 2% of deaf children attended school. Pakistan has a deaf population of 0.24 million, approximately 7.4% of the overall disabled population in the country.

Indonesian sign languages
Indonesian Sign Language, or Bahasa Isyarat Indonesia (BISINDO), is any of several related deaf sign languages of Indonesia, at least on the island of Java. It is based on American Sign Language (perhaps via Malaysian Sign Language), with local admixture in different cities. Although presented as a coherent language when advocating for recognition by the Indonesian government and use in education, the varieties used in different cities may not be mutually intelligible.
Specifically, the only study to have investigated this, Isma (2012), found that the sign languages of Jakarta and Yogyakarta are related but distinct languages, that they remain 65% lexically cognate but are grammatically distinct and apparently diverging. They are different enough that Isma's consultants in Hong Kong resorted to Hong Kong Sign Language to communicate with each other. Word order in Yogyakarta tends to be verb-final (SOV), whereas in Jakarta it tends to be verb-medial (SVO) when either noun phrase could be subject or object, and free otherwise. The varieties in other cities were not investigated.
Rather than sign language, education currently uses a form of manually coded Malay known as Sistem Isyarat Bahasa Indonesia (SIBI).

International Sign
International Sign (IS) is a pidgin sign language which is used in a variety of different contexts, particularly at international meetings such as the World Federation of the Deaf (WFD) congress, events such as the Deaflympics and the Miss & Mister Deaf World, in video clips produced by Deaf people and watched by other Deaf people from around the world, and informally when travelling and socialising.
Many people consider International Sign to be a universal sign language, but this is very hard to establish, considering that Ethnologue lists 142 different sign languages.

Kata Kolok
Kata Kolok (literally "deaf talk"), also known as Benkala Sign Language and Balinese Sign Language, is a village sign language indigenous to two neighbouring villages in northern Bali, Indonesia. The main village, Bengkala, has had a high incidence of deafness for over seven generations. Notwithstanding the biological time depth of the recessive mutation that causes deafness, the first substantial cohort of deaf signers did not occur until five generations ago, and this event marks the emergence of Kata Kolok (de Vos 2012).
Kata Kolok is unrelated to spoken Balinese and lacks certain contact sign phenomena that often arise when a sign language and an oral language are in close contact, such as fingerspelling and mouthing. It is also unrelated to other sign languages. It differs from other known sign languages in a number of respects: Signers make extensive use of cardinal directions and real-world locations to organize the signing space, and they do not use a metaphorical “time line” for time reference. Kata Kolok is the only known sign language which predominantly deploys the absolute Frame of Reference.
Deaf people in the village express themselves using special cultural forms such as deaf dance and martial arts and occupy special ritual and social roles, including digging graves and maintaining water pipes. Deaf and hearing villagers alike share a belief in a deaf god.
The sign language has been acquired by at least five generations of deaf, native signers and features in all aspects of village life, including political, professional, educational, and religious settings. The Max Planck Institute for Psycholinguistics (MPI) and the International Institute for Sign Languages and Deaf Studies have archived over 100 hours of Kata Kolok video data. The metadata of this corpus are accessible online (see www.mpi.nl).

Language interpretation
Interpreting is a translational activity in which one produces a first and final translation on the basis of a one-time exposure to an expression in a source language.
The most common two modes of interpreting are simultaneous interpreting, which is done at the time of the exposure to the source language, and consecutive interpreting, which is done at breaks to this exposure.
Interpreting is an ancient human activity which predates the invention of writing. However, the origins of the profession of interpreting date back less than a century.

Languages of the United States
The most commonly used language in the United States is English (specifically, American English), which is the de facto national language. Nonetheless, many other languages are also spoken, or historically have been spoken, in the United States. These include indigenous languages and languages brought to the country by colonists, enslaved people and immigrants from Europe, Africa and Asia. There are also several languages, including creoles and sign languages, that developed in the United States. Approximately 430 languages are spoken or signed by the population, of which 176 are indigenous to the area. Fifty-two languages formerly spoken in the country's territory are now extinct.

Based on annual data from the American Community Survey (ACS), the U.S. Census Bureau regularly publishes information on the most common languages spoken at home. It also reports the English-speaking ability of people who speak a language other than English at home. In 2015, the U.S. Census Bureau published information on the number of speakers of over 350 languages as surveyed by the ACS from 2009 to 2013, but it does not regularly tabulate and report data for that many languages.
According to the ACS in 2016, the most common languages spoken at home by people aged five years or older are as follows (the most recent data can be found via the U.S. Census Bureau's American FactFinder):
English only – 229.7 million
Spanish – 40.5 million
Chinese (including Mandarin and Cantonese) – 3.4 million
Tagalog (including Filipino) – 1.7 million
Vietnamese – 1.5 million
Arabic – 1.2 million
French – 1.2 million
Korean – 1.1 million
Russian – 0.91 million
German – 0.91 million
Haitian Creole – 0.86 million
Hindi – 0.81 million
Portuguese – 0.77 million
Italian – 0.58 million
Polish – 0.54 million
Urdu – 0.47 million
Japanese – 0.46 million
Persian (including Farsi and Dari) – 0.44 million
Gujarati – 0.41 million
Telugu – 0.37 million
Bengali – 0.32 million
Tai–Kadai (including Thai and Lao) – 0.31 million
Greek – 0.29 million
Punjabi – 0.29 million
Tamil – 0.27 million
Armenian – 0.24 million
Serbo-Croatian (including Bosnian, Croatian, Montenegrin, and Serbian) – 0.24 million
Hebrew – 0.23 million
Hmong – 0.22 million
Bantu (including Swahili) – 0.22 million
Khmer – 0.20 million
Navajo – 0.16 million

The ACS is not a full census but an annual sample-based survey conducted by the U.S. Census Bureau. The language statistics are based on responses to a three-part question asked about all members of a target U.S. household who are at least five years old. The first part asks if they "speak a language other than English at home." If so, the head of household or main respondent is asked to report which language each member speaks in the home, and how well each individual speaks English. It does not ask how well individuals speak any other language of the household, so some respondents may have only limited speaking ability in that language. In addition, it is difficult to make historical comparisons of the numbers of speakers because the language questions used by the U.S. Census changed numerous times before 1980.

The ACS does not tabulate the number of people who report the use of American Sign Language at home, so such data must come from other sources. While modern estimates indicate that American Sign Language was signed by as many as 500,000 Americans in 1972 (the last official survey of sign language), estimates as recently as 2011 were closer to 100,000. Various cultural factors, such as the passage of the Americans with Disabilities Act, have resulted in far greater educational opportunities for hearing-impaired children, which could double or triple the number of current users of American Sign Language.

List of sign languages
There are perhaps three hundred sign languages in use around the world today. The number is not known with any confidence; new sign languages emerge frequently through creolization and de novo (and occasionally through language planning). In some countries, such as Sri Lanka and Tanzania, each school for the deaf may have a separate language, known only to its students and sometimes denied by the school; on the other hand, countries may share sign languages, although sometimes under different names (Croatian and Serbian, Indian and Pakistani). Deaf sign languages also arise outside educational institutions, especially in village communities with high levels of congenital deafness, but there are significant sign languages developed for the hearing as well, such as the speech-taboo languages used in aboriginal Australia. Scholars are doing field surveys to identify the world's sign languages.

The following list is grouped into three sections:
Deaf sign languages, which are the preferred languages of Deaf communities around the world; these include village sign languages, shared with the hearing community, and Deaf-community sign languages
Auxiliary sign languages, which are not native languages but sign systems of varying complexity, used alongside spoken languages. Simple gestures are not included, as they do not constitute language.
Signed modes of spoken languages, also known as manually coded languages, which are bridges between signed and spoken languages

The list of deaf sign languages is sorted regionally and alphabetically, and such groupings should not be taken to imply any genetic relationships between these languages (see List of language families).

Papua New Guinean Sign Language
"Sign language" was made the fourth official language of Papua New Guinea in 2015. In practice, this means the local form of sign language then being developed and standardized (Papua New Guinean Sign Language, or "PNGSL"). The language has also been called "Melanesian Sign Language"; however, this name does not reflect what the community calls the language, and it is misleading because the language is not used elsewhere in Melanesia.

Plains Indian Sign Language
Plains Indian Sign Language (PISL), also known as Plains Sign Talk, Plains Sign Language and First Nation Sign Language, is a trade language (or international auxiliary language), formerly a trade pidgin, that was once the lingua franca across central Canada, the central and western United States and northern Mexico, used among the various Plains Nations. It was also used for story-telling, oratory, various ceremonies, and by deaf people for ordinary daily use. It is sometimes mistakenly described as a manually coded language or languages; however, there is no substantive evidence establishing a connection between any spoken language and Plains Sign Talk.
The name 'Plains Sign Talk' is preferred in Canada, with 'Indian' being considered pejorative by many. Hence, publications and reports on the language vary in naming conventions according to origin.

Profanity in American Sign Language
American Sign Language (ASL), the sign language used by the deaf community throughout most of North America, has a rich vocabulary of terms, including profanity. Within deaf culture, there is a distinction drawn between signs used to curse and signs used to describe sexual acts. In usage, signs describing detailed sexual behavior are highly taboo due to their graphic nature. As for the signs themselves, some signs do overlap, but they may also vary according to usage. For example, the sign for "shit" when used to curse is different from the sign for "shit" when used to describe the bodily function or fecal matter.

Varieties of American Sign Language
American Sign Language (ASL) developed in the United States and Canada, but has spread around the world. Local varieties have developed in many countries, but there is little research on which should be considered dialects of ASL (such as Bolivian Sign Language) and which have diverged to the point of being distinct languages (such as Malaysian Sign Language).
The following are sign language varieties of ASL in countries other than the US and Canada, languages based on ASL with substratum influence from local sign languages, and mixed languages in which ASL is a component. Distinctions follow political boundaries, which may not correspond to linguistic boundaries.
^a Sign-language names reflect the region of origin. Natural sign languages are not related to the spoken language used in the same region. For example, French Sign Language originated in France, but is not related to French.
^b Denotes the number (if known) of languages within the family. No further information is given on these languages.