Character encoding

Character encoding is the representation of a repertoire of characters by some kind of encoding system that assigns each character a number.[1] Depending on the abstraction level and context, corresponding code points and the resulting code space may be regarded as bit patterns, octets, natural numbers, electrical pulses, etc. A character encoding is used in computation, data storage, and transmission of textual data. "Character set", "character map", "codeset" and "code page" are related, but not identical, terms.

Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper-case letters, numerals and some punctuation. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.

History

The history of character codes illustrates the evolving need for machine-mediated transmission of character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and ciphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though most commercial use of Morse code was via machinery, it was also used as a manual code, generated by hand on a telegraph key and deciphered by ear, and it persists in amateur radio use. Most codes are of fixed per-character length or are variable-length sequences of fixed-length codes (e.g. Unicode).[2]

Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known.

The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name "Baudot" has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often "improved" by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII (American Standard Code for Information Interchange) code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert); it addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard, whose character assignments persist today as the first 128 code points of Unicode.[3]

Somewhat historically isolated, IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1959 in its 1401 and 1620 computers and in its 700/7000 series (for example, the 704, 7040, 709 and 7090 computers), as well as in associated peripherals. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. It was the precursor to EBCDIC. For the most part, IBM's codes were used primarily with IBM equipment, which was more or less a closed ecosystem, and did not see much adoption outside of IBM "circles". IBM's Extended Binary Coded Decimal Interchange Code (usually abbreviated as EBCDIC) is an eight-bit encoding scheme developed in 1963.

The limitations of such sets soon became apparent, and a number of ad hoc methods were developed to extend them. The need to support more writing systems for different languages, including the CJK family of East Asian scripts, required support for a far larger number of characters and demanded a systematic approach to character encoding rather than the previous ad hoc approaches.

In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users).

The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points too large for a single code unit, such as those above 255 for 8-bit units, the solution was to implement variable-width encodings, in which an escape sequence would signal that subsequent bits should be parsed as a higher code point.
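
The effect of this design is easy to observe in UTF-8, where the lead byte of each sequence signals how many 8-bit code units follow. A minimal sketch in Python (3.8 or later, for bytes.hex with a separator; the characters chosen are illustrative):

    # Each character needs a different number of 8-bit code units in UTF-8.
    for ch in ["A", "é", "€", "𐐀"]:
        print(ch, ch.encode("utf-8").hex(" "))
    # A  41            1 code unit (ASCII range)
    # é  c3 a9         2 code units
    # €  e2 82 ac      3 code units
    # 𐐀  f0 90 90 80   4 code units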

Terminology

Terminology related to character encoding:
  • A character is a minimal unit of text that has semantic value.
  • A character set is a collection of characters that might be used by multiple languages. Example: The Latin character set is used by English and most European languages, while the Greek character set is used only by the Greek language.
  • A coded character set is a character set in which each character corresponds to a unique number.
  • A code point of a coded character set is any allowed value in the character set's code space.
  • A code unit is a bit sequence used to encode each character of a repertoire within a given encoding form.
Character repertoire (the abstract set of characters)

The character repertoire is an abstract set of more than one million characters found in a wide variety of scripts including Latin, Cyrillic, Chinese, Korean, Japanese, Hebrew, and Aramaic.

Other symbols such as musical notation are also included in the character repertoire. Both the Unicode and GB18030 standards have a character repertoire. As new characters are added to one standard, the other standard also adds those characters, to maintain parity.

The code unit size varies with the particular encoding:

  • A code unit in US-ASCII consists of 7 bits;
  • A code unit in UTF-8, EBCDIC and GB18030 consists of 8 bits;
  • A code unit in UTF-16 consists of 16 bits;
  • A code unit in UTF-32 consists of 32 bits.

Example of a code unit: Consider a string of the letters "abc" followed by U+10400 𐐀 DESERET CAPITAL LETTER LONG I (represented with 1 char32_t, 2 char16_t or 4 char8_t). That string contains:

  • four characters;
  • four code points;
  • either:
    four code units in UTF-32 (00000061, 00000062, 00000063, 00010400)
    five code units in UTF-16 (0061, 0062, 0063, d801, dc00), or
    seven code units in UTF-8 (61, 62, 63, f0, 90, 90, 80).
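
These counts can be verified directly in Python, whose str type is a sequence of code points (a sketch; the encode calls below are standard-library codecs):

    s = "abc\U00010400"              # "abc" + U+10400 DESERET CAPITAL LETTER LONG I
    len(s)                            # 4 code points
    len(s.encode("utf-32-be")) // 4   # 4 code units in UTF-32
    len(s.encode("utf-16-be")) // 2   # 5 code units in UTF-16
    len(s.encode("utf-8"))            # 7 code units in UTF-8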

To express a character in Unicode, the hexadecimal value is prefixed with the string 'U+'. The range of valid code points for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
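
Because each plane holds exactly 0x10000 code points, the plane of a character is simply its code point divided by 0x10000. A one-line sketch in Python:

    plane = lambda ch: ord(ch) // 0x10000
    plane("A")           # 0: the Basic Multilingual Plane
    plane("\U00010400")  # 1: a supplementary character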

The following table shows examples of code point values:

Character                   Unicode code point   Glyph
Latin A                     U+0041               A
Latin sharp S               U+00DF               ß
Han for East                U+6771               東
Ampersand                   U+0026               &
Inverted exclamation mark   U+00A1               ¡
Section sign                U+00A7               §

A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:

  • UTF-8: code points map to a sequence of one, two, three or four code units.
  • UTF-16: code units are 16 bits long, twice the size of 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit, while code points with a value of U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs" (their arithmetic is sketched after this list).
  • UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
  • GB18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.[4]
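
The surrogate-pair arithmetic is fixed by the UTF-16 specification and can be reproduced in a few lines; a minimal sketch (the function name is illustrative):

    def to_surrogates(cp):
        # Valid for code points U+10000 through U+10FFFF.
        cp -= 0x10000                 # leaves a 20-bit value
        return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

    to_surrogates(0x10400)  # (0xD801, 0xDC00), matching the UTF-16 example above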

Unicode encoding model

Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a modern, unified character encoding. Rather than mapping characters directly to octets (bytes), they separately define what characters are available, corresponding natural numbers (code points), how those numbers are encoded as a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets. The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways.[5] To describe this model correctly requires more precise terms than "character set" and "character encoding." The terms used in the modern model follow:[5]

A character repertoire is the full set of abstract characters that a system supports. The repertoire may be closed, i.e. no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series), or it may be open, allowing additions (as is the case with Unicode and to a limited extent the Windows code pages). The characters in a given repertoire reflect decisions that have been made about how to divide writing systems into basic information units. The basic variants of the Latin, Greek and Cyrillic alphabets can be broken down into letters, digits, punctuation, and a few special characters such as the space, which can all be arranged in simple linear sequences that are displayed in the same order they are read. But even with these alphabets, diacritics pose a complication: they can be regarded either as part of a single character containing a letter and diacritic (known as a precomposed character), or as separate characters. The former allows a far simpler text handling system but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Other writing systems, such as Arabic and Hebrew, are represented with more complex character repertoires due to the need to accommodate things like bidirectional text and glyphs that are joined together in different ways for different situations.
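
The precomposed-versus-decomposed distinction survives in Unicode, which offers normalization forms to convert between the two representations; a brief sketch using Python's standard unicodedata module:

    import unicodedata

    precomposed = "\u00e9"   # é as a single precomposed character
    decomposed = "e\u0301"   # e followed by U+0301 COMBINING ACUTE ACCENT

    precomposed == decomposed                                 # False: different code point sequences
    unicodedata.normalize("NFC", decomposed) == precomposed   # True: NFC composes them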

A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map the characters to different code points.
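
Python's built-in codecs make this easy to observe; EBCDIC code page 037 ships as "cp037" (a sketch):

    ord("A")             # 65 (0x41): the code point of "A" in ASCII and ISO/IEC 8859-1
    "A".encode("cp037")  # b'\xc1': the same character is 193 (0xC1) in IBM code page 037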

A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.

Next, a character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE or UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using byte order marks or escape sequences; compressing schemes try to minimise the number of bytes used per code unit (such as SCSU, BOCU, and Punycode).

Although UTF-32BE is a simpler CES, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-width ASCII and maps Unicode code points to variable-width sequences of octets, or UTF-16BE, which is backward compatible with fixed-width UCS-2BE and maps Unicode code points to variable-width sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.
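
The difference between simple and compound schemes is visible with Python's codecs: the endian-specific codecs emit bare code units, while the plain "utf-16" codec prefixes a byte order mark so a decoder can detect the byte order (a sketch; output shown for a little-endian machine):

    "€".encode("utf-16-be")  # b'\x20\xac'          simple scheme, big-endian
    "€".encode("utf-16-le")  # b'\xac\x20'          simple scheme, little-endian
    "€".encode("utf-16")     # b'\xff\xfe\xac\x20'  compound scheme: BOM, then code units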

Finally, there may be a higher level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.

The Unicode model uses the term character map for historical systems which directly assign a sequence of characters to a sequence of bytes, covering all of CCS, CEF and CES layers.[5]

Character sets, character maps and code pages

Historically, the terms "character encoding", "character map", "character set" and "code page" were synonymous in computer science, as the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units – usually with a single character per code unit. But now the terms have related but distinct meanings,[6] due to efforts by standards bodies to use precise terminology when writing about and unifying many different encoding systems.[5] Regardless, the terms are still used interchangeably, with character set being nearly ubiquitous.

A "code page" usually means a byte-oriented encoding, but with regard to some suite of encodings (covering different scripts), where many characters share the same codes in most or all those code pages. Well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437), see Windows code page for details. Most, but not all, encodings referred to as code pages are single-byte encodings (but see octet on byte size.)

IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".[5]

The term "code page" does not occur in Unix or Linux where "charmap" is preferred, usually in the larger context of locales.

In contrast to a CCS as described above, a "character encoding" is a map from abstract characters to code words. A "character set" in HTTP (and MIME) parlance is the same as a character encoding (but not the same as a CCS).
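
For example, an HTTP response advertises its character encoding through the charset parameter of the Content-Type header:

    Content-Type: text/html; charset=utf-8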

"Legacy encoding" is a term sometimes used to characterize old character encodings, but with an ambiguity of sense. Most of its use is in the context of Unicodification, where it refers to encodings that fail to cover all Unicode code points, or, more generally, using a somewhat different character repertoire: several code points representing one Unicode character,[7] or versa (see e.g. code page 437). Some sources refer to an encoding as legacy only because it preceded Unicode.[8] All Windows code pages are usually referred to as legacy, both because they antedate Unicode and because they are unable to represent all 221 possible Unicode code points.

Character encoding translation

As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between encoding schemes as a form of data transcoding. Some of these are cited below.

Cross-platform:

  • Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu.
  • iconv – program and standardized API to convert encodings (see the example after this list)
  • luit – program that converts encoding of input and output to programs running interactively
  • convert_encoding.py – Python based utility to convert text files between arbitrary encodings and line endings.[9]
  • decodeh.py – algorithm and module to heuristically guess the encoding of a string.[10]
  • International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C.
  • chardet – This is a translation of the Mozilla automatic-encoding-detection code into the Python computer language.
  • The newer versions of the Unix file command attempt to do a basic detection of character encoding (also available on Cygwin).
  • charset – C++ template library with a simple interface for converting between C++/user-defined streams. charset defines many character sets and allows the use of Unicode formats with endianness support.
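
As a concrete illustration, iconv takes -f (from) and -t (to) encoding names; the file names here are placeholders:

    iconv -f ISO-8859-1 -t UTF-8 input.txt > output.txt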

Unix-like:

  • cmv – simple tool for transcoding filenames.[11]
  • convmv – convert a filename from one encoding to another.[12]
  • cstocs – convert file contents from one encoding to another for the Czech and Slovak languages.
  • enca – analyzes encodings for given text files.[13]
  • recode – convert file contents from one encoding to another.[14]
  • utrac – convert file contents from one encoding to another.[15]

Windows:

  • Encoding.Convert – .NET API[16]
  • MultiByteToWideChar/WideCharToMultiByte – Convert from ANSI to Unicode & Unicode to ANSI[17]
  • cscvt – character set conversion tool[18]
  • enca – analyzes encodings for given text files.[19]

See also

Common character encodings

References

  1. ^ Definition from The Tech Terms Dictionary
  2. ^ Tom Henderson (17 April 2014). "Ancient Computer Character Code Tables – and Why They're Still Relevant". Smartbear. Retrieved 29 April 2014.
  3. ^ Tom Jennings (1 March 2010). "An annotated history of some character codes". Retrieved 1 November 2018.
  4. ^ "The Java Tutorials - Terminology". Oracle. Retrieved 25 March 2018.
  5. ^ a b c d e "Unicode Technical Report #17: Unicode Character Encoding Model". 11 November 2008. Retrieved 8 August 2009.
  6. ^ Shawn Steele (15 March 2005). "What's the difference between an Encoding, Code Page, Character Set and Unicode?". MSDN.
  7. ^ "Processing database information using Unicode, a case study" Archived 17 June 2006 at the Wayback Machine
  8. ^ Constable, Peter (13 June 2001). "Character set encoding basics". Implementing Writing Systems: An introduction. SIL International. Retrieved 19 March 2010.
  9. ^ convert_encoding.py
  10. ^ Decodeh – heuristically decode a string or text file Archived 8 January 2008 at the Wayback Machine
  11. ^ CharsetMove – Simple Tool for Transcoding Filenames
  12. ^ Convmv – converts filenames from one encoding to another
  13. ^ Extremely Naive Charset Analyser
  14. ^ Recode – GNU project – Free Software Foundation (FSF)
  15. ^ Utrac Homepage
  16. ^ Microsoft .NET Framework Class Library – Encoding.Convert Method
  17. ^ MultiByteToWideChar/WideCharToMultiByte – Convert from ANSI to Unicode & Unicode to ANSI
  18. ^ Kalytta's Character Set Converter
  19. ^ Extremely Naive Charset Analyser


ASCII

ASCII (ASS-kee), abbreviated from American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Most modern character-encoding schemes are based on ASCII, although they support many additional characters.

ASCII is the traditional name for the encoding system; the Internet Assigned Numbers Authority (IANA) prefers the updated name US-ASCII, which clarifies that this system was developed in the US and based on the typographical symbols predominantly in use there. ASCII is one of the IEEE milestones.

BCD (character encoding)

BCD ("Binary-Coded Decimal"), also called alphanumeric BCD, alphameric BCD, BCD Interchange Code, or BCDIC, is a family of representations of numerals, uppercase Latin letters, and some special and control characters as six-bit character codes.

Unlike later encodings such as ASCII, BCD codes were not standardized. Different computer manufacturers, and even different product lines from the same manufacturer, often had their own variants, and sometimes included unique characters. Other six-bit encodings with completely different mappings, such as some FIELDATA variants or Transcode, are sometimes incorrectly termed BCD.

Many variants of BCD encode the characters '0' through '9' as the corresponding binary values.

Character (computing)

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to symbols in a particular natural language, but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.

Character encodings in HTML

HTML (Hypertext Markup Language) has been in use since 1991, but HTML 4.0 (December 1997) was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display.

Code point

In character encoding terminology, a code point or code position is any of the numerical values that make up the code space. Many code points represent single characters but they can also have other meanings, such as for formatting. For example, the character encoding scheme ASCII comprises 128 code points in the range 0x00 to 0x7F, Extended ASCII comprises 256 code points in the range 0x00 to 0xFF, and Unicode comprises 1,114,112 code points in the range 0x0000 to 0x10FFFF. The Unicode code space is divided into seventeen planes (the basic multilingual plane, and 16 supplementary planes), each with 65,536 (= 2¹⁶) code points. Thus the total size of the Unicode code space is 17 × 65,536 = 1,114,112.

Cork encoding

The Cork (also known as T1 or EC) encoding is a character encoding used for encoding glyphs in fonts. It is named after the city of Cork in Ireland, where during a TeX Users Group (TUG) conference in 1990 a new encoding was introduced for LaTeX. It contains 256 characters supporting most western and eastern European languages that use the Latin alphabet.

DEC Radix-50

RADIX-50, commonly called Rad-50, RAD50 or DEC Squoze, is an uppercase-only character encoding created by Digital Equipment Corporation for use on their DECsystem, PDP, and VAX computers. RADIX-50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20); three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15); or three characters into one 16-bit word (PDP-11, VAX).

The actual encoding differed between the 36-bit and 16-bit systems.
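
A minimal sketch of the three-characters-per-16-bit-word packing in Python; the 40-character table below follows the common PDP-11 convention (with "%" standing in for the unused slot), and actual tables varied between systems:

    # The position of a character in this string is its RAD50 value (PDP-11 style).
    RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

    def pack_rad50(s):
        # Pack up to three characters into one 16-bit word:
        # 40**3 = 64000, which fits in 16 bits.
        a, b, c = (RAD50.index(ch) for ch in s.upper().ljust(3))
        return (a * 40 + b) * 40 + c

    pack_rad50("ABC")  # (1*40 + 2)*40 + 3 = 1683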

GBK (character encoding)

GBK is an extension of the GB2312 character set for simplified Chinese characters, used in the People's Republic of China. It includes all unified CJK characters found in GB13000.1-93, i.e. ISO/IEC 10646:1993, or Unicode 1.1. Since its initial release in 1993, GBK has been extended by Microsoft in Code page 936/1386, which was then extended into GBK 1.0. GBK is also the IANA-registered internet name for the Microsoft mapping, which differs from other implementations primarily by the single-byte euro sign at 0x80.

GB abbreviates Guojia Biaozhun, which means national standard in Chinese, while K stands for Extension (扩展 kuòzhǎn). GBK not only extended the old standard GB2312 with Traditional Chinese characters, but also with Chinese characters that were simplified after the establishment of GB2312 in 1981. With the arrival of GBK, certain names with characters formerly unrepresentable, like the 镕 (róng) character in former Chinese Premier Zhu Rongji's name, are now representable. 0.1% of all web pages used GBK in December 2018.

HZ (character encoding)

The HZ character encoding is an encoding of GB2312 that was formerly commonly used in email and USENET postings. It was designed in 1989 by Fung Fung Lee (Chinese: 李楓峰) of Stanford University, and subsequently codified in 1995 into RFC 1843.

The HZ, short for Hanzi (simplified Chinese: 汉字; traditional Chinese: 漢字; literally: 'Chinese Characters'), encoding was invented to facilitate the use of Chinese characters through e-mail, which at that time only allowed 7-bit characters. Therefore, in lieu of standard ISO 2022 escape sequences (as in the case of ISO-2022-JP) or 8-bit characters (as in the case of EUC), the HZ code uses only printable, 7-bit characters to represent Chinese characters.

It was also popular in USENET networks, which in the late 1980s and early 1990s, generally did not allow transmission of 8-bit characters or escape characters.
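
Python ships an "hz" codec, which makes the design easy to see: every byte of the output is printable ASCII, with ~{ and ~} shifting between ASCII and GB2312 modes (a sketch):

    "你好".encode("hz")       # b'~{Dc:C~}' (GB2312 bytes with the high bit cleared)
    b"~{Dc:C~}".decode("hz")  # '你好'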

ISO 5428

ISO 5428:1984, Greek alphabet coded character set for bibliographic information interchange, is an ISO standard for an 8-bit character encoding for the modern Greek language. It contains a set of 73 graphic characters and is available through UNIMARC. In practice it is now superseded by Unicode.

ISO 6438

ISO 6438:1983, Documentation — African coded character set for bibliographic information interchange, is an ISO standard for an 8-bit character encoding for African languages. It has had little use (such as being available through UNIMARC). In practice it is now superseded by Unicode.

Iran System encoding

Iran System encoding was an 8-bit character encoding scheme and was created by Iran System corporation for Persian language support. This encoding was in use in Iran in DOS-based programs and after the introduction of Microsoft code page 1256 this encoding became obsolete. However, some Windows and DOS programs using this encoding are still in use and some Windows fonts with this encoding exist. Now most programs use code page 1256, code page 1259, or Unicode.

List of hexagrams of the I Ching

This is a list of the 64 hexagrams of the I Ching, or Book of Changes, and their Unicode character codes.

This list is in King Wen order. (Cf. other hexagram sequences.)

Mac OS Ogham

Mac OS Ogham is a character encoding for representing Ogham text on Apple Macintosh computers. It is a superset of the Irish Standard I.S. 434:1999 character encoding for Ogham, adding some punctuation characters from Mac OS Roman. It is not an official Mac OS Codepage.

RPL character set

The RPL character set is an 8-bit character set and encoding used by most RPL calculators manufactured by Hewlett-Packard as well as by the HP 82240B thermo printer. It is sometimes referred to simply as "ECMA-94" in documentation, although it is for the most part a superset of ISO 8859-1 / ECMA-94 in terms of printable characters, and it differs from ISO-8859-1 by using displayable characters rather than control characters in the 0x80 to 0x9F range of code points.

Shift JIS

Shift JIS (Shift Japanese Industrial Standards, also SJIS, MIME name Shift_JIS) is a character encoding for the Japanese language, originally developed by a Japanese company called ASCII Corporation in conjunction with Microsoft and standardized as JIS X 0208 Appendix 1. 0.4% of all web pages used Shift JIS in September 2018, a decline from 1.3% in July 2014.

Slate and stylus

The slate and stylus are tools used by blind persons to write text that they can read without assistance. Invented by Charles Barbier as the tool for writing night writing, the slate and stylus allow for a quick, easy, convenient and constant method of making embossed printing for Braille character encoding. Prior methods of making raised printing for the blind required a movable type printing press.

Stanford/ITS character set

Stanford/ITS character set is an extended ASCII character set based on SEASCII with modifications allowing compatibility with 1968 ASCII.

Tamil All Character Encoding

Tamil All Character Encoding (TACE16) is a 16-bit Unicode-based character encoding scheme for Tamil language.


This page is based on a Wikipedia article written by its authors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.