JIS X 0212

JIS X 0212 is a Japanese Industrial Standard defining a coded character set for encoding supplementary characters for use in Japanese. This standard is intended to supplement JIS X 0208 (Code page 952). It is numbered 953 or 5049 as an IBM code page (see below).

It is one of the source standards for Unicode's CJK Unified Ideographs.

JIS X 0212
Language(s): intended to be used alongside JIS X 0208 for Japanese support; does not substantially support any language on its own
Standard: JIS X 0212:1990
Classification: supplementary charset, ISO 2022, DBCS, CJK encoding
Extends: JIS X 0208 (when used together)
Encoding formats: EUC-JP, ISO-2022-JP-1
Succeeded by: JIS X 0213

History

In 1990 the Japanese Standards Association (JSA) released a supplementary character set standard: JIS X 0212-1990 Code of the Supplementary Japanese Graphic Character Set for Information Interchange (情報交換用漢字符号-補助漢字 Jōhō Kōkan'yō Kanji Fugō - Hojo Kanji). This standard was intended to build upon the range of characters available in the main JIS X 0208 character set, and to address shortcomings in the coverage of that set.

Features

Euler diagram comparing repertoires of JIS X 0208, JIS X 0212, JIS X 0213, Windows-31J, the Microsoft standard repertoire and Unicode.

The standard specified 6,067 characters, comprising:

  • 21 diacritical marks and miscellaneous symbols
  • 21 Greek characters with diacritics
  • 26 Eastern European characters with diacritics (mostly Cyrillic)
  • 198 alphabetic characters with diacritics
  • 5,801 kanji

Encodings

The following encodings or encapsulations are used to enable JIS X 0212 characters to be used in files, etc.

  • in EUC-JP, characters are represented by three bytes: 0x8F, followed by two bytes in the range 0xA1–0xFE.
  • in ISO 2022, the escape sequence "ESC $ ( D" designates the JIS X 0212 character set.

No encapsulation of JIS X 0212 characters in the popular Shift JIS encoding is possible, as Shift JIS does not have sufficient unallocated code space for the characters.
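The three-byte EUC-JP form can be checked directly, assuming Python's euc_jp codec (which implements the JIS X 0212 extension) and using 丂 (U+4E02), a JIS X 0212 kanji absent from JIS X 0208:

```python
# 丂 (U+4E02) is a JIS X 0212 kanji not present in JIS X 0208
encoded = '丂'.encode('euc_jp')
assert len(encoded) == 3
assert encoded[0] == 0x8F                            # SS3 introducer byte
assert all(0xA1 <= b <= 0xFE for b in encoded[1:])   # trailing pair in range
assert encoded.decode('euc_jp') == '丂'              # round-trips
```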

Implementations

JIS X 0212 is called Code page 953 by IBM, which includes vendor extensions.[1][2] The alternative CCSID 5049 excludes these extensions.[3]

As JIS X 0212 characters cannot be encoded in Shift JIS, the coding system which has traditionally dominated Japanese information processing, few practical implementations of the character set exist. As mentioned above, it can be encoded in EUC-JP, which is commonly used on Unix/Linux systems, and it is there that most implementations have occurred:

  • in the early 1990s basic "BDF" fonts were compiled for use in the Unix X Window System;
  • an IME conversion file was compiled for the WNN system;
  • the kterm console window application was extended to support it;
  • the Emacs and jstevie editors were extended to support it.

Many web browsers, such as the Netscape/Mozilla/Firefox family and Opera, and related applications such as Mozilla Thunderbird, support the display of JIS X 0212 characters in the EUC-JP encoding; Internet Explorer, however, has no support for JIS X 0212 characters. Modern terminal emulators, such as GNOME Terminal, also support JIS X 0212 characters.

Applications which support JIS X 0212 in the EUC coding include:

  • the xjdic dictionary program for Unix/Linux;
  • the WWWJDIC Japanese dictionary server (however as Internet Explorer does not support the JIS X 0212 extensions in EUC, this server sends bit-mapped graphics for these characters when set in EUC-JP mode.)

JIS X 0212 and Unicode

The kanji in JIS X 0212 were taken as one of the sources for the Han unification which led to the unified set of CJK characters in the initial ISO 10646/Unicode standard. All the 5,801 kanji were incorporated.

The future

Apart from the applications mentioned above, the JIS X 0212 standard is effectively dead. 2,743 kanji from it were included in the later JIS X 0213 standard. In the longer term, its contribution will probably be seen to be the 5,801 kanji which were incorporated in Unicode.

References

  • JIS X 0212-1990 Code of the Supplementary Japanese Graphic Character Set for Information Interchange (情報交換用漢字符号―補助漢字), Japanese Standards Association, Tokyo (established 1 October 1990). (The Japanese standards document.)
  • Understanding Japanese Information Processing, Ken Lunde, O'Reilly & Assoc. 1993
  • CJKV Information Processing, Ken Lunde, O'Reilly & Assoc. 1999, 2008.
  1. ^ "Code page (CPGID) 953". IBM Globalization. IBM.
  2. ^ "CCSID 953". IBM Globalization. IBM.
  3. ^ "CCSID 5049". IBM Globalization. IBM.

Charset detection

Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is used only when specific metadata, such as an HTTP Content-Type: header, is either not available or is assumed to be untrustworthy.

The algorithm usually involves statistical analysis of byte patterns, such as the frequency distribution of trigrams in various languages as encoded in each code page to be detected; such statistical analysis can also be used to perform language detection. The process is not foolproof because it depends on statistical data.
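The statistical approach can be sketched as follows (Python; `trigram_profile` and the overlap score are illustrative toys, not a production detector):

```python
from collections import Counter

def trigram_profile(data: bytes) -> Counter:
    # Frequency distribution of byte trigrams: the kind of statistic a
    # detector precomputes for each (language, code page) pair.
    return Counter(data[i:i + 3] for i in range(len(data) - 2))

def overlap(sample: Counter, reference: Counter) -> int:
    # Naive similarity score: shared trigram mass between the sample
    # and a precomputed reference model.
    return sum(min(sample[t], reference[t]) for t in sample if t in reference)

profile = trigram_profile(b'the cat and the hat')
assert profile[b'the'] == 2   # 'the' occurs twice as a byte trigram
```

A real detector would compare the sample's profile against trained reference models for each candidate (encoding, language) pair and pick the best-scoring one.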

In general, incorrect charset detection leads to mojibake.

One of the few cases where charset detection works reliably is detecting UTF-8. This is due to the large percentage of invalid byte sequences in UTF-8, so that text in any other encoding that uses bytes with the high bit set is extremely unlikely to pass a UTF-8 validity test. However, badly written charset detection routines do not run the reliable UTF-8 test first, and may decide that UTF-8 is some other encoding. For example, it was common for web sites in UTF-8 containing the name of the German city München to be shown as MÃ¼nchen.
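A minimal validity test (Python) illustrates both points: Latin-1 bytes with the high bit set fail UTF-8 decoding, while skipping that test and decoding UTF-8 bytes as Latin-1 yields exactly the MÃ¼nchen mojibake:

```python
def looks_like_utf8(data: bytes) -> bool:
    try:
        data.decode('utf-8')
        return True
    except UnicodeDecodeError:
        return False

utf8 = 'München'.encode('utf-8')       # b'M\xc3\xbcnchen'
latin1 = 'München'.encode('latin-1')   # b'M\xfcnchen'
assert looks_like_utf8(utf8)
assert not looks_like_utf8(latin1)     # 0xFC cannot start a UTF-8 sequence
# Skipping the UTF-8 test and guessing Latin-1 produces the classic mojibake:
assert utf8.decode('latin-1') == 'MÃ¼nchen'
```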

UTF-16 is fairly reliable to detect due to the high number of newlines (U+000A) and spaces (U+0020) that should be found when dividing the data into 16-bit words, and the fact that few encodings use 16-bit words. This process is not foolproof; for example, some versions of the Windows operating system would mis-detect the phrase "Bush hid the facts" (without a newline) in ASCII as Chinese UTF-16LE.
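The mis-detection can be reproduced directly (Python): the 18 ASCII bytes pair up into nine 16-bit values that all happen to land in the CJK ideograph range.

```python
data = b'Bush hid the facts'          # 18 bytes of plain ASCII, no newline
decoded = data.decode('utf-16-le')    # pair the bytes into 16-bit words
assert len(decoded) == 9
# Every byte pair forms a code point in the CJK Unified Ideographs block,
# which is why a naive detector can score this as Chinese UTF-16LE.
assert all('\u4e00' <= ch <= '\u9fff' for ch in decoded)
```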

Charset detection is particularly unreliable in Europe, in an environment of mixed ISO-8859 encodings. These are closely related eight-bit encodings that share an overlap in their lower half with ASCII. There is no technical way to tell these encodings apart and recognising them relies on identifying language features, such as letter frequencies or spellings.

Due to the unreliability of heuristic detection, it is better to properly label datasets with the correct encoding. HTML documents served across the web by HTTP should have their encoding stated out-of-band using the Content-Type: header.

Content-Type: text/html;charset=UTF-8

An isolated HTML document, such as one being edited as a file on disk, may imply such a header by a meta tag within the file:

<meta http-equiv="Content-Type" content="text/html;charset=UTF-8">

or with the new meta charset attribute in HTML5:

<meta charset="UTF-8">

If the document is Unicode, then some UTF encodings explicitly label the document with an embedded initial byte order mark (BOM).
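A BOM sniffer is straightforward to sketch (Python, standard codecs module; the signature table here is illustrative, not exhaustive):

```python
import codecs

# Illustrative BOM table; note UTF-32LE's BOM (FF FE 00 00) shares a prefix
# with UTF-16LE's (FF FE), so a fuller sniffer must test longer BOMs first.
SIGNATURES = [
    (codecs.BOM_UTF8, 'utf-8-sig'),
    (codecs.BOM_UTF16_LE, 'utf-16-le'),
    (codecs.BOM_UTF16_BE, 'utf-16-be'),
]

def sniff_bom(data: bytes):
    for bom, name in SIGNATURES:
        if data.startswith(bom):
            return name
    return None   # no BOM: fall back to heuristics or out-of-band metadata

assert sniff_bom('abc'.encode('utf-8-sig')) == 'utf-8-sig'
assert sniff_bom(b'\xff\xfeB\x00u\x00') == 'utf-16-le'
assert sniff_bom(b'plain ASCII') is None
```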

Code page 1287

Code page 1287, also known as CP1287, DEC Greek (8-bit) and EL8DEC, is one of the code pages implemented for the VT220 terminals. It supports the Greek language.

Extended Unix Code

Extended Unix Code (EUC) is a multibyte character encoding system used primarily for Japanese, Korean, and simplified Chinese.

The structure of EUC is based on the ISO 2022 standard, which specifies a way to represent character sets containing a maximum of 94 characters, 8,836 (94²) characters, or 830,584 (94³) characters as sequences of 7-bit codes. Only ISO 2022-compliant character sets can have EUC forms. Up to four coded character sets (referred to as G0, G1, G2, and G3, or as code sets 0, 1, 2, and 3) can be represented with the EUC scheme.

G0 is almost always an ISO-646 compliant coded character set such as US-ASCII, ISO 646:KR (KS X 1003) or ISO 646:JP (the lower half of JIS X 0201) that is invoked on GL (i.e. with the most significant bit cleared). An exception from US-ASCII is that 0x5C (backslash in US-ASCII) is often used to represent a Yen sign in EUC-JP (see below) and a Won sign in EUC-KR.

To get the EUC form of an ISO-2022 character, the most significant bit of each 7-bit byte of the original ISO 2022 codes is set (by adding 128 to each of these original 7-bit codes); this allows software to easily distinguish whether a particular byte in a character string belongs to the ISO-646 code or the ISO-2022 (EUC) code.
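The rule can be sketched in one function (Python; 亜, JIS X 0208 kuten 16-01, serves as the worked example):

```python
def iso2022_to_euc(code: bytes) -> bytes:
    # Set the most significant bit of each 7-bit byte (equivalently, add 0x80)
    return bytes(b | 0x80 for b in code)

# 亜 has ISO 2022 bytes 0x30 0x21; setting the high bits gives its
# two-byte EUC-JP form 0xB0 0xA1, matching Python's euc_jp codec.
assert iso2022_to_euc(b'\x30\x21') == b'\xb0\xa1'
assert iso2022_to_euc(b'\x30\x21') == '亜'.encode('euc_jp')
```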

The most commonly used EUC codes are variable-width encodings, with a character belonging to G0 (an ISO 646-compliant coded character set) taking one byte and a character belonging to G1 (a 94×94 coded character set) represented in two bytes. The EUC-CN form of GB 2312 and EUC-KR are examples of such two-byte EUC codes. EUC-JP includes characters represented by up to three bytes, whereas a single character in EUC-TW can take up to four bytes.
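Assuming Python's euc_jp codec, the variable widths can be observed directly, with one representative character per code set:

```python
# EUC-JP width per code set (characters chosen as representatives):
assert len('A'.encode('euc_jp')) == 1    # G0: ASCII, one byte
assert len('ｱ'.encode('euc_jp')) == 2    # G2: half-width katakana, SS2 (0x8E) + 1 byte
assert len('亜'.encode('euc_jp')) == 2   # G1: JIS X 0208, two bytes
assert len('丂'.encode('euc_jp')) == 3   # G3: JIS X 0212, SS3 (0x8F) + 2 bytes
```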

Modern applications are more likely to use UTF-8, which supports all of the glyphs of the EUC codes, and more, and is generally more portable with fewer vendor deviations and errors.

Extended shinjitai

Extended shinjitai (拡張新字体, kakuchō shinjitai, lit. "extended new character form") is the extension of the shinjitai (officially simplified kanji). They are the simplified versions of some of the hyōgaiji (表外字, kanji not included in the jōyō kanji list). They are unofficial characters; the official forms of these hyōgaiji are still kyūjitai (traditional characters).

ISO/IEC 2022

ISO/IEC 2022 Information technology — Character code structure and extension techniques is an ISO standard (equivalent to the ECMA standard ECMA-35) specifying:

a technique for including multiple character sets in a single character encoding system, and

a technique for representing these character sets in both 7-bit and 8-bit systems using the same encoding.

Many of the character sets included as ISO/IEC 2022 encodings are "double byte" encodings in which two bytes correspond to a single character, making ISO 2022 a variable-width encoding. A specific implementation does not have to implement the entire standard; the conformance level and the supported character sets are defined by the implementation.
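A short sketch (assuming Python's iso2022_jp codec) shows the switching technique: escape sequences designate the active character set, so the same byte values mean different characters depending on state:

```python
data = 'A亜B'.encode('iso2022_jp')
# ESC $ B designates JIS X 0208; ESC ( B switches back to ASCII.
assert data == b'A\x1b$B0!\x1b(BB'
# The bytes 0x30 0x21 ('0!') encode the kanji 亜 only while JIS X 0208
# is designated; outside the escape sequences they would be ASCII '0!'.
assert data.decode('iso2022_jp') == 'A亜B'
```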

ISO/IEC 6937

ISO/IEC 6937:2001, Information technology — Coded graphic character set for text communication — Latin alphabet, is a multibyte extension of ASCII, or rather of ISO/IEC 646-IRV. It was developed jointly with ITU-T (then CCITT) for telematic services under the name T.51, and first became an ISO standard in 1983. Certain byte codes are used as lead bytes for letters with diacritics (accents). The value of the lead byte indicates which diacritic the letter carries, and the follow byte then has the ASCII value of the letter the diacritic is on. Only certain combinations of lead byte and follow byte are allowed, and there are some exceptions to the lead-byte interpretation for some follow bytes. No combining characters are encoded in ISO/IEC 6937, but some free-standing diacritics can be represented, usually by letting the follow byte carry the code for the ASCII space.
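Since Python ships no ISO/IEC 6937 codec, the lead-byte/follow-byte mechanism can be sketched by hand; the accent table below covers only a few common diacritics and is an assumption drawn from the T.51 accent column, not a full implementation:

```python
import unicodedata

# Partial lead-byte table: ISO/IEC 6937 non-spacing accent codes mapped to
# Unicode combining marks (a sketch, not the standard's complete set).
ACCENTS = {
    0xC1: '\u0300',  # grave
    0xC2: '\u0301',  # acute
    0xC3: '\u0302',  # circumflex
    0xC4: '\u0303',  # tilde
    0xC8: '\u0308',  # diaeresis
}

def decode_6937(data: bytes) -> str:
    out = []
    i = 0
    while i < len(data):
        b = data[i]
        if b in ACCENTS and i + 1 < len(data):
            # Lead byte names the diacritic; follow byte is the base letter.
            base = chr(data[i + 1])
            out.append(unicodedata.normalize('NFC', base + ACCENTS[b]))
            i += 2
        else:
            out.append(chr(b))   # plain ASCII range passes through
            i += 1
    return ''.join(out)

assert decode_6937(b'caf\xc2e') == 'café'   # 0xC2 (acute) + 'e' -> é
```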

ISO/IEC 6937's architects were Hugh McGregor Ross, Peter Fenwick, Bernard Marti and Loek Zeckendorf.

ISO 6937/2 defines 327 characters found in modern European languages that use the Latin alphabet. Non-Latin European scripts, such as Cyrillic and Greek, are not included in the standard. Some diacritics used with the Latin alphabet are also missing, such as the Romanian comma below, for which the cedilla was used instead, as no distinction between cedilla and comma below was made at the time.

IANA has registered the charset names ISO_6937-2-25 and ISO_6937-2-add for two (older) versions of this standard (plus control codes). But in practice this character encoding is unused on the Internet.

The ISO/IEC 2022 escape sequence to specify the right-hand side of the ISO/IEC 6937 character set is ESC - R (hex 1B 2D 52).

ISO/IEC 8859-12

ISO/IEC 8859-12 would have been part 12 of the ISO/IEC 8859 character encoding standard series.

ISO 8859-12 was originally proposed to support the Celtic languages. It was later slated for Latin/Devanagari, but this was abandoned in 1997, during the 12th meeting of ISO/IEC JTC 1/SC 2/WG 3 in Heraklion, Crete, Greece, 4 to 7 July 1997. The Celtic proposal was changed to ISO 8859-14.

ISO/IEC 8859-16

ISO/IEC 8859-16:2001, Information technology — 8-bit single-byte coded graphic character sets — Part 16: Latin alphabet No. 10, is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first edition published in 2001. It is informally referred to as Latin-10 or South-Eastern European. It was designed to cover Albanian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic (new orthography).

ISO-8859-16 is the IANA preferred charset name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429.

Microsoft has assigned code page 28606 a.k.a. Windows-28606 to ISO-8859-16.

ISO/IEC 8859-3

ISO/IEC 8859-3:1999, Information technology — 8-bit single-byte coded graphic character sets — Part 3: Latin alphabet No. 3, is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first edition published in 1988. It is informally referred to as Latin-3 or South European. It was designed to cover Turkish, Maltese and Esperanto, though the introduction of ISO/IEC 8859-9 superseded it for Turkish. The encoding remains popular with users of Esperanto, though use is waning as application support for Unicode becomes more common.

ISO-8859-3 is the IANA preferred charset name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429. Microsoft has assigned code page 28593 a.k.a. Windows-28593 to ISO-8859-3 in Windows. IBM has assigned code page 913 to ISO 8859-3.

ISO/IEC 8859-9

ISO/IEC 8859-9:1999, Information technology — 8-bit single-byte coded graphic character sets — Part 9: Latin alphabet No. 5, is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first edition published in 1989. It is informally referred to as Latin-5 or Turkish. It was designed to cover the Turkish language, as being of more use than the ISO/IEC 8859-3 encoding. It is identical to ISO/IEC 8859-1 except for these six replacements of Icelandic characters with characters unique to the Turkish alphabet:

  • 0xD0: Ð → Ğ
  • 0xDD: Ý → İ
  • 0xDE: Þ → Ş
  • 0xF0: ð → ğ
  • 0xFD: ý → ı
  • 0xFE: þ → ş
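Assuming Python's codecs, the six differing positions can be compared directly against ISO/IEC 8859-1:

```python
positions = bytes([0xD0, 0xDD, 0xDE, 0xF0, 0xFD, 0xFE])
assert positions.decode('latin-1') == 'ÐÝÞðýþ'      # ISO/IEC 8859-1: Icelandic letters
assert positions.decode('iso8859_9') == 'ĞİŞğış'    # ISO/IEC 8859-9: Turkish letters
```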

ISO-8859-9 is the IANA preferred charset name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429. In modern applications Unicode and UTF-8 are preferred; as of February 2016, 0.1% of all web pages used ISO-8859-9. Microsoft has assigned code page 28599 a.k.a. Windows-28599 to ISO-8859-9 in Windows. IBM has assigned code page 920 to ISO-8859-9.

JIS X 0208

JIS X 0208 is a 2-byte character set specified as a Japanese Industrial Standard, containing 6879 graphic characters suitable for writing text, place names, personal names, and so forth in the Japanese language. The official title of the current standard is 7-bit and 8-bit double byte coded KANJI sets for information interchange (7ビット及び8ビットの2バイト情報交換用符号化漢字集合, Nana-Bitto Oyobi Hachi-Bitto no Ni-Baito Jōhō Kōkan'yō Fugōka Kanji Shūgō). It was originally established as JIS C 6226 in 1978, and has been revised in 1983, 1990, and 1997. It is also called Code page 952 by IBM. The 1978 version is also called Code page 955 by IBM.

JIS X 0213

JIS X 0213 is a Japanese Industrial Standard defining coded character sets for encoding the characters used in Japan. This standard extends JIS X 0208. The first version was published in 2000 and revised in 2004 (JIS2004) and 2012. As well as adding a number of special characters, characters with diacritic marks, etc., it included an additional 3,695 kanji. The full name of the standard is 7-bit and 8-bit double byte coded extended KANJI sets for information interchange (7ビット及び8ビットの2バイト情報交換用符号化拡張漢字集合, Nana-Bitto Oyobi Hachi-Bitto no Ni-Baito Jōhō Kōkan'yō Fugōka Kakuchō Kanji Shūgō).

JIS X 0213 has two "planes" (94×94 character tables). Plane 1 is a superset of JIS X 0208, containing kanji sets levels 1 to 3 and non-kanji characters such as hiragana, katakana (including letters used to write the Ainu language), Latin, Greek and Cyrillic alphabets, digits, symbols and so on. Plane 2 contains only the level 4 kanji set. The total number of defined characters is 11,233. Each character can be encoded in two bytes.

This standard largely replaced the rarely used JIS X 0212-1990 "supplementary" standard, which included 5,801 kanji and 266 non-kanji. Of the additional 3,695 kanji in JIS X 0213, all but 952 were already in JIS X 0212.

JIS X 0213 defines several 7-bit and 8-bit encodings including EUC-JIS-2004, ISO-2022-JP-2004 and Shift JIS-2004. Also, it defines the mapping from each of these encodings to ISO/IEC 10646 (Unicode) for each character.
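Python ships codecs for these encodings (euc_jis_2004, shift_jis_2004, iso2022_jp_2004). As a sketch, ヷ (U+30F7, katakana letter VA, present in JIS X 0213 plane 1 but not in JIS X 0208) encodes under Shift JIS-2004 yet fails under the plain JIS X 0208-based Shift JIS codec:

```python
s = 'ヷ'   # in JIS X 0213 plane 1, not in JIS X 0208
encoded = s.encode('shift_jis_2004')
assert encoded.decode('shift_jis_2004') == s    # round-trips under JIS X 0213
try:
    s.encode('shift_jis')                       # JIS X 0208 repertoire only
    in_0208 = True
except UnicodeEncodeError:
    in_0208 = False
assert not in_0208
```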

Unicode version 3.2 incorporated all characters of JIS X 0213 except for the characters that could be represented using combining characters. Because about 300 kanji are in Unicode Plane 2, Unicode implementations supporting only the Basic Multilingual Plane cannot handle all of the JIS X 0213 characters. This is not an issue for most applications, however.

The 2004 edition of JIS X 0213 changed the recommended renderings of 168 kanji.

JIS encoding

In computing, JIS encoding refers to several Japanese Industrial Standards for encoding the Japanese language. Strictly speaking, the term means either:

A set of standard coded character sets for Japanese, notably:

  • JIS X 0201, the Japanese version of ISO 646 (ASCII), containing the base 7-bit ASCII characters (with some modifications) and 64 half-width katakana characters.
  • JIS X 0208, the most common kanji character set, containing 6,879 characters, including 6,355 kanji and 524 other characters (one 94×94 plane).
  • JIS X 0212, a supplement to JIS X 0208 which adds 5,801 kanji, for a total of 12,156 kanji (a second 94×94 plane).
  • JIS X 0213, which extends JIS X 0208 (two planes).

JIS X 0202 (also known as ISO-2022-JP), a set of encoding mechanisms for sending JIS character data over transmission media that support only 7-bit data.

In practice, "JIS encoding" usually refers to JIS X 0208 character data encoded with JIS X 0202. For instance, the IANA uses the JIS_Encoding label to refer to JIS X 0202, and the ISO-2022-JP label to refer to the profile thereof defined by RFC 1468.

Other encoding mechanisms for JIS characters include Shift JIS and EUC-JP. Shift JIS adds the kanji, full-width hiragana and full-width katakana from JIS X 0208 to JIS X 0201 in a backward-compatible way. Shift JIS is perhaps the most widely used encoding in Japan, as compatibility with the single-byte JIS X 0201 character set made it possible for electronic equipment manufacturers (such as cash register manufacturers) to offer an upgrade from older, cheaper equipment that could not display kanji while retaining character-set compatibility.

EUC-JP is used on UNIX systems, where the JIS encodings are incompatible with POSIX standards.

A more recent alternative to JIS coded characters is Unicode (UCS coded characters), particularly in the UTF-8 encoding mechanism.
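As an illustration of how these mechanisms relate (assuming Python's codecs), the same JIS X 0208 kanji 亜 (kuten 16-01) has a different byte sequence under each:

```python
s = '亜'   # JIS X 0208 kuten 16-01, ISO 2022 bytes 0x30 0x21
assert s.encode('iso2022_jp') == b'\x1b$B0!\x1b(B'  # wrapped in escape sequences
assert s.encode('euc_jp') == b'\xb0\xa1'            # high bits set on 0x30 0x21
assert s.encode('shift_jis') == b'\x88\x9f'         # repacked Shift JIS layout
```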

Japanese Industrial Standards

Japanese Industrial Standards (JIS) (日本工業規格, Nihon Kōgyō Kikaku) are the standards used for industrial activities in Japan, coordinated by the Japanese Industrial Standards Committee (JISC) and published by the Japanese Standards Association (JSA). The JISC is composed of many nationwide committees and plays a vital role in standardizing activities across Japan.

Kanji

Kanji (漢字; [kã̠ɴʑi]) are the adopted logographic Chinese characters that are used in the Japanese writing system. They are used alongside the Japanese syllabic scripts hiragana and katakana. The Japanese term kanji for the Chinese characters literally means "Han characters". It is written with the same characters in the Chinese language to refer to that language's character writing system, hanzi (漢字).

Kochi font

Kochi font (東風フォント) was a font development project to build free replacements for proprietary fonts such as MS Gothic and MS Mincho, developed by Yasuyuki Furukawa (古川 泰之). The project consisted of the Kochi Gothic and Kochi Mincho fonts, which were released into the public domain.

PostScript fonts

PostScript fonts are font files encoded in outline font specifications developed by Adobe Systems for professional digital typesetting. This system uses PostScript file format to encode font information.

"PostScript fonts" may also separately be used to refer to a basic set of fonts included as standards in the PostScript system, such as Times, Helvetica and Avant Garde.

Source Han Serif

Source Han Serif, also known as Noto Serif CJK, is a serif Song/Ming typeface created by Adobe and Google.


This page is based on a Wikipedia article written by its authors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.