UTF-16

UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 valid code points of Unicode. The encoding is variable-length, as code points are encoded with one or two 16-bit code units (see Comparison of Unicode encodings for a comparison of UTF-8, UTF-16, and UTF-32).

UTF-16 arose from an earlier fixed-width 16-bit encoding known as UCS-2 (for 2-byte Universal Character Set) once it became clear that more than 2^16 code points were needed.[1]

UTF-16 is used internally by systems such as Windows and Java and by JavaScript, and often for plain text and word-processing data files on Windows. It is rarely used for files on Unix/Linux or macOS. It never gained popularity on the web, where UTF-8 is dominant (and considered "the mandatory encoding for all [text]" by WHATWG[2]); UTF-16 is used by under 0.01% of web pages.[3] WHATWG recommends that for security reasons browser apps should not use UTF-16.[2]

UTF-16
Chart of the Basic Multilingual Plane as UCS-2; rows shown in solid gray (D8–DF) are used as surrogate halves in UTF-16.
Language(s): International
Standard: Unicode Standard
Classification: Unicode Transformation Format, variable-width encoding
Extends: UCS-2
Transforms / Encodes: ISO/IEC 10646 (Unicode)

History

In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to replace the typical 256-character encodings, which required 1 byte per character, with an encoding using 65,536 (2^16) values, which would require 2 bytes per character. Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was usually called "Unicode", but is now called "UCS-2". UCS-2 differs from UTF-16 in being a constant-length encoding[4] capable of encoding only characters of the BMP.

Early in this process it became increasingly clear that 2^16 characters would not suffice,[1] and IEEE introduced a larger 31-bit space and an encoding (UCS-4) that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of disk space and memory, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise to resolve this impasse in version 2.0 of the Unicode standard in July 1996[5] and is fully specified in RFC 2781, published in 2000 by the IETF.[6][7]

In UTF-16, code points greater than or equal to 2^16 are encoded using two 16-bit code units. The standards organizations chose the largest available block of unallocated 16-bit code points to use as these code units. Unlike UTF-8, UTF-16 provides no means to encode these reserved code points themselves.

UTF-16 is specified in the latest versions of both the international standard ISO/IEC 10646 and the Unicode Standard. "UCS-2 should now be considered obsolete. It no longer refers to an encoding form in either 10646 or the Unicode Standard."[8] There are no plans to extend UTF-16 to support a higher number of code points, or to encode the code points replaced by surrogates, as allocating code points for this would violate the Unicode Stability Policy with respect to general category or surrogate code points.[9] One example idea would be to allocate another BMP value to prefix a triple of low, low, high surrogates (the order swapped so that it cannot match a surrogate pair in searches), allowing 2^30 more code points to be encoded; but changing the purpose of a code point is disallowed (and using no prefix is also not allowed, as two such characters next to each other would match a surrogate pair).

Description

U+0000 to U+D7FF and U+E000 to U+FFFF

Both UTF-16 and UCS-2 encode code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the Basic Multilingual Plane (BMP) are the only code points that can be represented in UCS-2. As of Unicode 9.0, some modern non-Latin Asian, Middle Eastern, and African scripts fall outside this range, as do most emoji characters.

U+010000 to U+10FFFF

Code points from the other planes (called Supplementary Planes) are encoded as two 16-bit code units called a surrogate pair, by the following scheme:

UTF-16 decoder: the high surrogate (rows) and the low surrogate (columns) together select a code point.

High \ Low   DC00     DC01     …      DFFF
D800         010000   010001   …      0103FF
D801         010400   010401   …      0107FF
  ⋮            ⋮        ⋮               ⋮
DBFF         10FC00   10FC01   …      10FFFF
  • 0x10000 is subtracted from the code point, leaving a 20-bit number in the range 0x00000–0xFFFFF.
  • The high ten bits (in the range 0x000–0x3FF) are added to 0xD800 to give the first 16-bit code unit or high surrogate, which will be in the range 0xD800–0xDBFF.
  • The low ten bits (also in the range 0x000–0x3FF) are added to 0xDC00 to give the second 16-bit code unit or low surrogate, which will be in the range 0xDC00–0xDFFF.
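A minimal sketch of these steps in Python (the function name encode_surrogate_pair is illustrative, not part of any standard library):

    def encode_surrogate_pair(code_point: int) -> tuple:
        """Encode a supplementary-plane code point (U+10000-U+10FFFF)
        as a UTF-16 surrogate pair, following the steps above."""
        assert 0x10000 <= code_point <= 0x10FFFF
        offset = code_point - 0x10000      # 20-bit number, 0x00000-0xFFFFF
        high = 0xD800 + (offset >> 10)     # high (leading) surrogate
        low = 0xDC00 + (offset & 0x3FF)    # low (trailing) surrogate
        return high, low

    # Example: U+10437 encodes as (0xD801, 0xDC37).
    print([hex(u) for u in encode_surrogate_pair(0x10437)])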

The high surrogate and low surrogate are also known as "leading" and "trailing" surrogates, respectively, analogous to the leading and trailing bytes of UTF-8.[10]

Since the ranges for the high surrogates (0xD800–0xDBFF), low surrogates (0xDC00–0xDFFF), and valid BMP characters (0x0000–0xD7FF, 0xE000–0xFFFF) are disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent code units to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units (i.e. the type of code unit can be determined by the ranges of values in which it falls). UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string (UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte).
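Because the three ranges are disjoint, the role of a 16-bit code unit can be decided from its value alone; a minimal sketch in Python (the function name is illustrative):

    def code_unit_kind(unit: int) -> str:
        """Classify a 16-bit UTF-16 code unit by its numeric range alone."""
        if 0xD800 <= unit <= 0xDBFF:
            return "high surrogate"   # starts a surrogate pair
        if 0xDC00 <= unit <= 0xDFFF:
            return "low surrogate"    # completes a surrogate pair
        return "BMP character"        # a complete code point by itself

    print(code_unit_kind(0x0024))  # BMP character
    print(code_unit_kind(0xD801))  # high surrogate
    print(code_unit_kind(0xDC37))  # low surrogate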

Because the most commonly used characters are all in the BMP, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).

The Supplementary Planes contain emoji, historic scripts, less used symbols, less used Chinese ideographs, etc. Since the encoding of Supplementary Planes contains 20 significant bits (10 of 16 bits in each of the high and low surrogates), 2^20 code points can be encoded, divided into 16 planes of 2^16 code points each. Including the separately-handled Basic Multilingual Plane, there are a total of 17 planes.

U+D800 to U+DFFF

The Unicode standard permanently reserves these code point values for UTF-16 encoding of the high and low surrogates, and they will never be assigned a character, so there should be no reason to encode them. The official Unicode standard says that no UTF forms, including UTF-16, can encode these code points.

However, UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so, even though the standard states that such arrangements should be treated as encoding errors.

It is possible to unambiguously encode an unpaired surrogate (a high surrogate code point not followed by a low one, or a low one not preceded by a high one) in UTF-16 by using a code unit equal to the code point. The majority of UTF-16 encoder and decoder implementations do this when translating between encodings. Windows allows unpaired surrogates in filenames and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard.
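Python's optional "surrogatepass" error handler illustrates this lenient behavior: it encodes an unpaired surrogate as the equivalent lone code unit, where the default strict handler reports an error (a small demonstration, not a normative algorithm):

    # A lone high surrogate, U+D800, becomes the single code unit 0xD800.
    lone = "\ud800"
    print(lone.encode("utf-16-le", "surrogatepass").hex(" "))  # 00 d8

    # The default (strict) handler treats the same input as an error.
    try:
        lone.encode("utf-16-le")
    except UnicodeEncodeError as err:
        print("strict codec rejects it:", err.reason)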

Examples

To encode U+10437 (𐐷) to UTF-16:

  • Subtract 0x10000 from the code point, leaving 0x0437.
  • For the high surrogate, shift right by 10 (divide by 0x400), then add 0xD800, resulting in 0x0001 + 0xD800 = 0xD801.
  • For the low surrogate, take the low 10 bits (remainder of dividing by 0x400), then add 0xDC00, resulting in 0x0037 + 0xDC00 = 0xDC37.

To decode U+10437 (𐐷) from UTF-16:

  • Take the high surrogate (0xD801) and subtract 0xD800, then multiply by 0x400, resulting in 0x0001 × 0x400 = 0x0400.
  • Take the low surrogate (0xDC37) and subtract 0xDC00, resulting in 0x37.
  • Add these two results together (0x0437), and finally add 0x10000 to get the final decoded UTF-32 code point, 0x10437.
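This arithmetic can be cross-checked against Python's built-in UTF-16 codec, which yields the same code units:

    # Encode U+10437 with the standard codec; show the 16-bit code units.
    data = "\U00010437".encode("utf-16-be")
    units = [data[i:i + 2].hex() for i in range(0, len(data), 2)]
    print(units)  # ['d801', 'dc37'] -- the high and low surrogates

    # Decoding the surrogate pair recovers the original code point.
    assert bytes.fromhex("d801dc37").decode("utf-16-be") == "\U00010437"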

The following table summarizes this conversion, as well as others. In the binary UTF-16 column, the leading six bits of each surrogate (1101 10 for the high surrogate, 1101 11 for the low) are added by the encoding process; the remaining bits carry the offset code point.

Character | Binary code point | Binary UTF-16 | UTF-16 hex code units | UTF-16BE hex bytes | UTF-16LE hex bytes
$ U+0024 | 0000 0000 0010 0100 | 0000 0000 0010 0100 | 0024 | 00 24 | 24 00
€ U+20AC | 0010 0000 1010 1100 | 0010 0000 1010 1100 | 20AC | 20 AC | AC 20
𐐷 U+10437 | 0001 0000 0100 0011 0111 | 1101 1000 0000 0001  1101 1100 0011 0111 | D801 DC37 | D8 01 DC 37 | 01 D8 37 DC
𤭢 U+24B62 | 0010 0100 1011 0110 0010 | 1101 1000 0101 0010  1101 1111 0110 0010 | D852 DF62 | D8 52 DF 62 | 52 D8 62 DF

Byte order encoding schemes

UTF-16 and UCS-2 produce a sequence of 16-bit code units. Since most communication and storage protocols are defined for bytes, and each unit thus takes two 8-bit bytes, the order of the bytes may depend on the endianness (byte order) of the computer architecture.

To assist in recognizing the byte order of code units, UTF-16 allows a Byte Order Mark (BOM), a code point with the value U+FEFF, to precede the first actual coded value.[nb 1] (U+FEFF is the invisible zero-width non-breaking space/ZWNBSP character.)[nb 2] If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the non-character value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values.

If the BOM is missing, RFC 2781 says that big-endian encoding should be assumed. In practice, because Windows uses little-endian order by default, many applications similarly assume little-endian encoding by default. It is also reliable to detect endianness by looking for null bytes, on the assumption that characters less than U+0100 are very common: if more of the even-indexed bytes (counting from 0) are null, then the text is big-endian.
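A sketch of this detection logic in Python, combining the BOM check with the null-byte heuristic just described (a heuristic sketch, not a specified algorithm):

    def guess_utf16_byte_order(data: bytes) -> str:
        """Guess the byte order of UTF-16 data: BOM first, then nulls."""
        if data[:2] == b"\xfe\xff":
            return "utf-16-be"
        if data[:2] == b"\xff\xfe":
            return "utf-16-le"
        # No BOM: assume characters below U+0100 dominate, so null bytes
        # fall on even offsets if big-endian, odd offsets if little-endian.
        even_nulls = data[0::2].count(0)
        odd_nulls = data[1::2].count(0)
        return "utf-16-be" if even_nulls >= odd_nulls else "utf-16-le"

    sample = "Hi".encode("utf-16-le")         # bytes 48 00 69 00
    print(guess_utf16_byte_order(sample))     # utf-16-le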

The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. Most applications ignore a BOM in all cases despite this rule.

For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings (the names are case insensitive). The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.

Similar designations, UCS-2BE and UCS-2LE, are used for the byte-order variants of UCS-2.

Usage

UTF-16 is used for text in the OS API of all currently supported versions of Microsoft Windows (including at least all versions since Windows CE/2000/XP/2003/Vista/7[11]), including Windows 10 (which, since insider build 17035 and the April 2018 update, has improved UTF-8 support in addition to UTF-16; see Unicode in Microsoft Windows#UTF-8). Older Windows NT systems (prior to Windows 2000) only support UCS-2.[12] In Windows XP, no code point above U+FFFF is included in any font delivered with Windows for European languages.[13][14] Files and network data tend to be a mix of UTF-16, UTF-8, and legacy byte encodings.

The IBM i operating system designates CCSID (code page) 13488 for UCS-2 encoding and CCSID 1200 for UTF-16 encoding, though the system treats them both as UTF-16.[15]

UTF-16 is used by the Qualcomm BREW operating system, the .NET environments, and the Qt cross-platform graphical widget toolkit.

Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2. iPhone handsets use UTF-16 for Short Message Service instead of UCS-2 as described in the 3GPP TS 23.038 (GSM) and IS-637 (CDMA) standards.[16]

The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to sixty-four Unicode characters per file name).

Since version 2.0, the Python language environment has officially used only UCS-2 internally, although the UTF-8 decoder to "Unicode" produces correct UTF-16. Since Python 2.2, "wide" builds, which use UTF-32 instead, are also supported;[17] these are primarily used on Linux. Python 3.3 no longer uses UTF-16 at all; instead, the encoding that gives the most compact representation for a given string is chosen from ASCII/Latin-1, UCS-2, and UTF-32.[18]
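The effect of this flexible representation can be observed through string object sizes (the exact byte counts are CPython implementation details and include fixed object overhead):

    import sys

    # Per-character storage widens with the largest code point present.
    latin1 = "a" * 100             # stored at 1 byte per character
    ucs2 = "\u20ac" * 100          # stored at 2 bytes per character
    ucs4 = "\U00010437" * 100      # stored at 4 bytes per character

    for s in (latin1, ucs2, ucs4):
        print(len(s), sys.getsizeof(s))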

Java originally used UCS-2, and added UTF-16 supplementary character support in J2SE 5.0.

JavaScript implementations may use UCS-2 or UTF-16.[19] ES2015 added string methods and regular expression flags to the language that permit handling strings from an encoding-agnostic perspective.

In many languages, quoted strings need a new syntax for quoting non-BMP characters, as the C-style "\uXXXX" syntax explicitly limits itself to 4 hex digits. The most common (used by C++, C#, D, and several other languages) is to use an upper-case 'U' with 8 hex digits such as "\U0001D11E".[20] In Java 7 regular expressions, ICU, and Perl, the syntax "\x{1D11E}" must be used; similarly, in ECMAScript 2015 (JavaScript), the escape format is "\u{1D11E}". In many other cases (such as Java outside of regular expressions),[21] the only way to get non-BMP characters is to enter the surrogate halves individually, for example: "\uD834\uDD1E" for U+1D11E.
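Python literals do not join escaped surrogate halves into one character, but a round trip through the UTF-16 codec with the "surrogatepass" handler shows how a UTF-16-based language interprets such a pair:

    pair = "\uD834\uDD1E"   # two escaped surrogate halves, as in Java
    joined = pair.encode("utf-16-be", "surrogatepass").decode("utf-16-be")
    print(joined == "\U0001D11E")   # True: the pair decodes to U+1D11E
    print(len(pair), len(joined))   # 2 code points versus 1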

String implementations based on UTF-16 typically return lengths and allow indexing in terms of code units, not code points. Neither code points nor code units correspond to anything an end user might recognise as a "character"; the things users identify as characters may in general consist of a base code point and a sequence of combining characters (or be a sequence of code points of another kind, for example Hangul conjoining jamo) – Unicode refers to this as a grapheme cluster[22] – and as such, applications dealing with Unicode strings, whatever the encoding, have to cope with the fact that they cannot arbitrarily split and combine strings.
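Python indexes strings by code point rather than by UTF-16 code unit, which makes the distinction easy to demonstrate (full grapheme-cluster segmentation requires a library such as ICU and is not shown):

    s = "e\u0301"   # 'e' + COMBINING ACUTE ACCENT: one user-perceived character
    print(len(s))                              # 2 code points
    print(len("\U00010437"))                   # 1 code point ...
    utf16 = "\U00010437".encode("utf-16-le")
    print(len(utf16) // 2)                     # ... but 2 UTF-16 code units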

UCS-2 is supported by PHP[23] and MySQL.[4]

Notes

  1. ^ UTF-8 encoding produces byte values strictly less than 0xFE, so either byte in the BOM sequence also identifies the encoding as UTF-16 (assuming that UTF-32 is not expected).
  2. ^ Use of U+FEFF as the character ZWNBSP instead of as a BOM has been deprecated in favor of U+2060 (WORD JOINER); see Byte Order Mark (BOM) FAQ at unicode.org. But if an application interprets an initial BOM as a character, the ZWNBSP character is invisible, so the impact is minimal.

References

  1. ^ a b "What is UTF-16?". The Unicode Consortium. Unicode, Inc. Retrieved 29 March 2018.
  2. ^ a b "Encoding Standard". encoding.spec.whatwg.org. Retrieved 2018-10-22. The UTF-8 encoding is the most appropriate encoding for interchange of Unicode, the universal coded character set. Therefore for new protocols and formats, as well as existing formats deployed in new contexts, this specification requires (and defines) the UTF-8 encoding. [..] The problems outlined here go away when exclusively using UTF-8, which is one of the many reasons that is now the mandatory encoding for all things.
  3. ^ "Usage Statistics of UTF-16 for Websites, April 2018". w3techs.com. Retrieved 2018-04-11.
  4. ^ a b "MySQL :: MySQL 5.7 Reference Manual :: 10.1.9.4 The ucs2 Character Set (UCS-2 Unicode Encoding)". dev.mysql.com.
  5. ^ "Questions about encoding forms". Retrieved 2010-11-12.
  6. ^ ISO/IEC 10646:2014 "Information technology – Universal Coded Character Set (UCS)" sections 9 and 10.
  7. ^ The Unicode Standard version 7.0 (2014) section 2.5.
  8. ^ "The Unicode® Standard Version 10.0 – Core Specification. Appendix C Relationship to ISO/IEC 10646" (PDF). Unicode Consortium. section C.2 page 913 (pdf page 10)
  9. ^ "Unicode Character Encoding Stability Policies". unicode.org.
  10. ^ Allen, Julie D.; Anderson, Deborah; Becker, Joe; Cook, Richard, eds. (2014). "3.8 Surrogates" (PDF). The Unicode Standard, Version 7.0—Core Specification. Mountain View: The Unicode Consortium. p. 118. Retrieved 3 November 2014.
  11. ^ "Unicode (Windows)". Retrieved 2011-03-08. "These functions use UTF-16 (wide character) encoding (…) used for native Unicode encoding on Windows operating systems."
  12. ^ "Description of storing UTF-8 data in SQL Server". microsoft.com. 7 December 2005. Retrieved 2008-02-01.
  13. ^ "Unicode". microsoft.com. Retrieved 2009-07-20.
  14. ^ "Surrogates and Supplementary Characters". microsoft.com. Retrieved 2009-07-20.
  15. ^ "UCS-2 and its relationship to Unicode (UTF-16)". IBM. Retrieved 2019-04-26.
  16. ^ Selph, Chad (2012-11-08). "Adventures in Unicode SMS". Twilio. Retrieved 2015-08-28.
  17. ^ "PEP 261 – Support for "wide" Unicode characters". Python.org. Retrieved 2015-05-29.
  18. ^ "PEP 0393 – Flexible String Representation". Python.org. Retrieved 2015-05-29.
  19. ^ "JavaScript's internal character encoding: UCS-2 or UTF-16? · Mathias Bynens".
  20. ^ "ECMA-334: 9.4.1 Unicode escape sequences". en.csharp-online.net. Archived from the original on 2013-05-01.
  21. ^ "Java SE Specifications". sun.com. Retrieved 2015-05-29.
  22. ^ "Glossary of Unicode Terms". Retrieved 2016-06-21.
  23. ^ "PHP: Supported Character Encodings - Manual". php.net.

Byte order mark

The byte order mark (BOM) is a Unicode character, U+FEFF BYTE ORDER MARK (BOM), whose appearance as a magic number at the start of a text stream can signal several things to a program reading the text:

  • the byte order, or endianness, of the text stream;
  • the fact that the text stream's encoding is Unicode, to a high level of confidence;
  • which Unicode encoding the text stream is encoded as.

BOM use is optional. Its presence interferes with the use of UTF-8 by software that does not expect non-ASCII bytes at the start of a file but that could otherwise handle the text stream.

Unicode can be encoded in units of 8-bit, 16-bit, or 32-bit integers. For the 16- and 32-bit representations, a computer receiving text from arbitrary sources needs to know which byte order the integers are encoded in. The BOM is encoded in the same scheme as the rest of the document and becomes a non-character Unicode code point if its bytes are swapped. Hence, the process accessing the text can examine these first few bytes to determine the endianness, without requiring some contract or metadata outside of the text stream itself. Generally the receiving computer will swap the bytes to its own endianness, if necessary, and would no longer need the BOM for processing.

The byte sequence of the BOM differs per Unicode encoding (including encodings outside the Unicode standard, such as UTF-7), and none of the sequences is likely to appear at the start of text streams stored in other encodings. Therefore, placing an encoded BOM at the start of a text stream can indicate that the text is Unicode and identify the encoding scheme used. This use of the BOM character is called a "Unicode signature".
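Python's codecs module exposes these signature sequences as constants, which makes the per-encoding BOM bytes easy to inspect (a small illustration using only the standard library; UTF-7 has no such constant and is omitted):

    import codecs

    # Signature (BOM) bytes for each Unicode encoding scheme.
    for name, bom in [
        ("UTF-8", codecs.BOM_UTF8),          # ef bb bf
        ("UTF-16BE", codecs.BOM_UTF16_BE),   # fe ff
        ("UTF-16LE", codecs.BOM_UTF16_LE),   # ff fe
        ("UTF-32BE", codecs.BOM_UTF32_BE),   # 00 00 fe ff
        ("UTF-32LE", codecs.BOM_UTF32_LE),   # ff fe 00 00
    ]:
        print(f"{name:8s} {bom.hex(' ')}")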

CCSID

A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several encoding forms, like UTF-8, UTF-16 and UTF-32.

CESU-8

The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. a code point in the range U+0000 to U+FFFF, is encoded in the same way as in UTF-8. A Unicode supplementary character, i.e. a code point in the range U+10000 to U+10FFFF, is first represented as a surrogate pair, like in UTF-16, and then each surrogate code point is encoded in UTF-8. Therefore, CESU-8 needs six bytes (3 bytes per surrogate) for each Unicode supplementary character while UTF-8 needs only four.

The encoding of Unicode supplementary characters works out to 11101101 1010yyyy 10xxxxxx 11101101 1011xxxx 10xxxxxx (yyyy represents the top five bits of the character minus one).
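A minimal CESU-8 encoder sketch in Python (the function name cesu8_encode is illustrative; the standard library has no CESU-8 codec, so supplementary characters are split into surrogates and each half is passed through the UTF-8 encoder with the "surrogatepass" handler):

    def cesu8_encode(text: str) -> bytes:
        """Encode text as CESU-8: BMP characters as ordinary UTF-8,
        supplementary characters as UTF-8-encoded surrogate pairs."""
        out = bytearray()
        for ch in text:
            cp = ord(ch)
            if cp < 0x10000:
                out += ch.encode("utf-8")
            else:
                cp -= 0x10000
                high = 0xD800 + (cp >> 10)
                low = 0xDC00 + (cp & 0x3FF)
                out += chr(high).encode("utf-8", "surrogatepass")
                out += chr(low).encode("utf-8", "surrogatepass")
        return bytes(out)

    # U+10437: four bytes in UTF-8, six in CESU-8.
    print("\U00010437".encode("utf-8").hex(" "))   # f0 90 90 b7
    print(cesu8_encode("\U00010437").hex(" "))     # ed a0 81 ed b0 b7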

CESU-8 is not an official part of the Unicode Standard, because Unicode Technical Reports are informative documents only. It should be used exclusively for internal processing and never for external data exchange.

Supporting CESU-8 in HTML documents is prohibited by the W3C and WHATWG HTML standards, as it would present a cross-site scripting vulnerability. CESU-8 is similar to Java's Modified UTF-8, but does not have the special encoding of the NUL character (U+0000).

The Oracle database uses CESU-8 for its "UTF8" character set. Standard UTF-8 can be obtained using the character set "AL32UTF8" (since Oracle version 9.0).

Comparison of Unicode encodings

This article compares Unicode encodings. Two situations are considered: 8-bit-clean environments, and environments that forbid use of byte values that have the high bit set. Originally such prohibitions were to allow for links that used only seven data bits, but they remain in the standards and so software must generate messages that comply with the restrictions. Standard Compression Scheme for Unicode and Binary Ordered Compression for Unicode are excluded from the comparison tables because it is difficult to simply quantify their size.

International Components for Unicode

International Components for Unicode (ICU) is an open-source project of mature C/C++ and Java libraries for Unicode support, software internationalization, and software globalization. ICU is widely portable to many operating systems and environments. It gives applications the same results on all platforms and between C, C++, and Java software. The ICU project is sponsored, supported, and used by IBM and many other companies.

ICU provides the following services: Unicode text handling, full character properties, and character set conversions; Unicode regular expressions; full Unicode sets; character, word, and line boundaries; language-sensitive collation and searching; normalization, upper and lowercase conversion, and script transliterations; comprehensive locale data and a resource bundle architecture via the Common Locale Data Repository (CLDR); multi-calendar and time zone support; and rule-based formatting and parsing of dates, times, numbers, currencies, and messages. ICU historically provided complex text layout services for Arabic, Hebrew, Indic, and Thai, but these were deprecated in version 54 and completely removed in version 58 in favor of HarfBuzz.

ICU provides more extensive internationalization facilities than the standard libraries for C and C++. ICU 62 supports Unicode 11.0, and older versions support Unicode 10.0, but support is not available for older platforms such as Windows XP, Windows Vista, AIX, Solaris, or z/OS.

ICU has historically used UTF-16, and still does so for Java; for C/C++, UTF-8 is also supported, including correct handling of "illegal UTF-8".

Microsoft Notepad

Notepad is a simple text editor for Microsoft Windows and a basic text-editing program which enables computer users to create documents. It was first released as a mouse-based MS-DOS program in 1983, and has been included in all versions of Microsoft Windows since Windows 1.0 in 1985.

Notepad2

Notepad2 is a free and open-source text editor for Microsoft Windows, released under a BSD software license. It was written by Florian Balmer using the Scintilla editor component, and it was first publicly released in April 2004. Balmer based Notepad2 on the principles of Notepad: small, fast, and usable.

It features syntax highlighting for many programming languages: ASP, assembly language, C, C++, C#, Common Gateway Interface (CGI), Cascading Style Sheets (CSS), HTML, Java, JavaScript, NSIS, Pascal, Perl, PHP, Python, SQL, Visual Basic (VB), VBScript, XHTML, and XML. It also features syntax highlighting for the following file formats: BAT, DIFF, INF, INI, REG, and configuration files (.properties).

Notepad2 also has several other features:

  • Auto indentation
  • Bracket matching
  • Character encoding conversion between ASCII, UTF-8, and UTF-16 formats
  • Multiple undo/redo; rectangular block selection
  • Newline format conversion, between DOS (CR/LF), Unix (LF), and Macintosh (CR) formats
  • Regular expression-based find and replace

Balmer has stated that some features will probably never be implemented in Notepad2, as they are beyond his design goal of a simple Notepad-like application. These include Notepad2's most requested feature, a multiple document interface. Other missing features, such as code folding, file associations, and bookmarks, are available through modifications linked from the homepage. Starting with version 4.2.25-rc6, an x64 build is also offered.

Plane (Unicode)

In the Unicode standard, a plane is a continuous group of 65,536 (2^16) code points. There are 17 planes, identified by the numbers 0 to 16, corresponding to the possible values 00–10 (hexadecimal) of the first two positions in the six-position hexadecimal format (U+hhhhhh). Plane 0 is the Basic Multilingual Plane (BMP), which contains most commonly used characters. The higher planes 1 through 16 are called "supplementary planes". The very last code point in Unicode is the last code point in plane 16, U+10FFFF. As of Unicode version 12.1, six of the planes have assigned code points (characters), and four are named.

The limit of 17 planes is due to UTF-16, which can encode 2^20 code points (16 planes) as pairs of words, plus the BMP as a single word. UTF-8 was designed with a much larger limit of 2^31 (2,147,483,648) code points (32,768 planes), and can encode 2^21 (2,097,152) code points (32 planes) even under the current limit of 4 bytes.

The 17 planes can accommodate 1,114,112 code points. Of these, 2,048 are surrogates (used to make the pairs in UTF-16), 66 are non-characters, and 137,468 are reserved for private use, leaving 974,530 for public assignment.

Planes are further subdivided into Unicode blocks, which, unlike planes, do not have a fixed size. The 300 blocks defined in Unicode 12.1 cover 25% of the possible code point space, and range in size from a minimum of 16 code points (fourteen blocks) to a maximum of 65,536 code points (Supplementary Private Use Area-A and -B, which constitute the entirety of planes 15 and 16). For future usage, ranges of characters have been tentatively mapped out for most known current and ancient writing systems.

Standard Compression Scheme for Unicode

The Standard Compression Scheme for Unicode (SCSU) is a Unicode Technical Standard for reducing the number of bytes needed to represent Unicode text, especially if that text uses mostly characters from one or a small number of per-language character blocks. It does so by dynamically mapping values in the range 128–255 to offsets within particular blocks of 128 characters. The initial conditions of the encoder mean that existing strings in ASCII and ISO-8859-1 that do not contain C0 control codes other than NULL, TAB, CR, and LF can be treated as SCSU strings. Since most alphabets do reside in blocks of contiguous Unicode code points, texts that use small alphabets and either ASCII punctuation or punctuation that fits within the window for the main alphabet can be encoded at one byte per character (plus setup overhead, which for common languages is often only 1 byte); most other punctuation can be encoded at 2 bytes per symbol through non-locking shifts. SCSU can also switch to UTF-16 internally to handle non-alphabetic languages.

Symbian OS, an operating system for mobile phones and other mobile devices, uses SCSU to serialise strings.

Reuters, the organization that floated the first draft of SCSU, is believed to use SCSU internally.

SQL Server 2008 R2 uses SCSU to compress Unicode values stored in nchar(n) and nvarchar(n) columns, achieving space savings between 15% and 50%, depending on the language of the data.

Theta

Theta (uppercase Θ or ϴ; lowercase θ, which resembles a digit 0 with a horizontal line, or ϑ; Ancient Greek: θῆτα thē̂ta [tʰɛ̂ːta]; Modern: θήτα thī́ta [ˈθita]) is the eighth letter of the Greek alphabet, derived from the Phoenician letter Teth. In the system of Greek numerals it has the value 9.

UTF-32

UTF-32 stands for Unicode Transformation Format in 32 bits. It is a protocol to encode Unicode code points that uses exactly 32 bits per Unicode code point (but a number of leading bits must be zero, as there are fewer than 2^21 Unicode code points). UTF-32 is a fixed-length encoding, in contrast to all other Unicode transformation formats, which are variable-length encodings. Each 32-bit value in UTF-32 represents one Unicode code point and is exactly equal to that code point's numerical value.

The main advantage of UTF-32 is that the Unicode code points are directly indexed. Finding the Nth code point in a sequence of code points is a constant time operation. In contrast, a variable-length code requires sequential access to find the Nth code point in a sequence. This makes UTF-32 a simple replacement in code that uses integers that are incremented by one to examine each location in a string, as was commonly done for ASCII.

The main disadvantage of UTF-32 is that it is space-inefficient, using four bytes per code point. Characters beyond the BMP are relatively rare in most texts, and can typically be ignored for sizing estimates. This makes UTF-32 close to twice the size of UTF-16. It can be up to four times the size of UTF-8 depending on how many of the characters are in the ASCII subset.
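The size trade-off is easy to quantify with Python's built-in codecs (the -le variants are used so that no BOM is prepended to the output):

    # Encoded sizes in bytes for ASCII, Greek (BMP), and supplementary text.
    for text in ("hello", "καλημέρα", "\U00010437" * 3):
        sizes = {enc: len(text.encode(enc))
                 for enc in ("utf-8", "utf-16-le", "utf-32-le")}
        print(text, sizes)
    # ASCII: UTF-32 is 4x UTF-8 and 2x UTF-16; for supplementary-plane
    # text, all three encodings take 4 bytes per character.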

UTF-7

UTF-7 (7-bit Unicode Transformation Format) is a variable-length character encoding that was proposed for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.

UTF-7 is used by less than 0.003% of websites. UTF-8 has since 2009 been the dominant encoding (of any kind, not just of Unicode encodings) for the World Wide Web (and declared mandatory "for all things" by WHATWG).

UTF-8

UTF-8 is a variable-width character encoding capable of encoding all 1,112,064 valid code points in Unicode using one to four 8-bit bytes. The encoding is defined by the Unicode Standard, and was originally designed by Ken Thompson and Rob Pike. The name is derived from Unicode (or Universal Coded Character Set) Transformation Format – 8-bit.

It was designed for backward compatibility with ASCII. Code points with lower numerical values, which tend to occur more frequently, are encoded using fewer bytes. The first 128 characters of Unicode, which correspond one-to-one with ASCII, are encoded using a single octet with the same binary value as ASCII, so that valid ASCII text is valid UTF-8-encoded Unicode as well. Since ASCII bytes do not occur when encoding non-ASCII code points into UTF-8, UTF-8 is safe to use within most programming and document languages that interpret certain ASCII characters in a special way, such as "/" in filenames, "\" in escape sequences, and "%" in printf.

Since 2009, UTF-8 has been the dominant encoding (of any kind, not just of Unicode encodings) for the World Wide Web (and declared mandatory "for all things" by WHATWG) and as of May 2019 accounts for 93.5% of all web pages (some of which are simply ASCII, as it is a subset of UTF-8) and 95% of the top 1,000 highest ranked web pages. The next-most popular multi-byte encodings, Shift JIS and GB 2312, have 0.4% and 0.3% respectively. The Internet Mail Consortium (IMC) recommended that all e-mail programs be able to display and create mail using UTF-8, and the W3C recommends UTF-8 as the default encoding in XML and HTML.

UTF-EBCDIC

UTF-EBCDIC is a character encoding used to represent Unicode characters. It is meant to be EBCDIC-friendly, so that legacy EBCDIC applications on mainframes may process the characters without much difficulty. Its advantages for existing EBCDIC-based systems are similar to UTF-8's advantages for existing ASCII-based systems. Details on UTF-EBCDIC are defined in Unicode Technical Report #16.

To produce the UTF-EBCDIC encoded version of a series of Unicode code points, an encoding based on UTF-8 (known in the specification as UTF-8-Mod) is applied first (creating what the specification calls an I8 sequence). The main difference between this encoding and UTF-8 is that it allows Unicode code points U+0080 through U+009F (the C1 control codes) to be represented as a single byte and therefore later mapped to corresponding EBCDIC control codes. In order to achieve this, UTF-8-Mod uses 101XXXXX instead of 10XXXXXX as the format for trailing bytes in a multi-byte sequence. As this can only hold 5 bits rather than 6, the UTF-8-Mod encoding of codepoints above U+009F is generally larger than the UTF-8 encoding.

The UTF-8-Mod transformation leaves the data in an ASCII-based format (for example, U+0041 "A" is still encoded as 01000001), so each byte is fed through a reversible (one-to-one) lookup table to produce the final UTF-EBCDIC encoding. For example, 01000001 in this table maps to 11000001; thus the UTF-EBCDIC encoding of U+0041 (Unicode's "A") is 0xC1 (EBCDIC's "A").

This encoding form is rarely used, even on the EBCDIC-based mainframes for which it was designed. IBM EBCDIC-based mainframe operating systems, such as z/OS, usually use UTF-16 for complete Unicode support. For example, DB2 UDB, COBOL, PL/I, Java and the IBM XML toolkit support UTF-16 on IBM mainframes.

Unicode

Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard is maintained by the Unicode Consortium, and as of May 2019 the most recent version, Unicode 12.1, contains a repertoire of 137,994 characters covering 150 modern and historic scripts, as well as multiple symbol sets and emoji. The character repertoire of the Unicode Standard is synchronized with ISO/IEC 10646, and both are code-for-code identical.

The Unicode Standard consists of a set of code charts for visual reference, an encoding method and set of standard character encodings, a set of reference data files, and a number of related items, such as character properties, rules for normalization, decomposition, collation, rendering, and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts).

Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including modern operating systems, XML, Java (and other programming languages), and the .NET Framework.

Unicode can be implemented by different character encodings. The Unicode standard defines UTF-8, UTF-16, and UTF-32, and several other encodings are in use. The most commonly used encodings are UTF-8, UTF-16, and UCS-2 (a precursor of UTF-16, without full support for Unicode); GB18030 is standardized in China and implements Unicode fully, though it is not an official Unicode standard.

UTF-8, the dominant encoding on the World Wide Web (used in over 93% of websites), uses one byte for the first 128 code points, and up to 4 bytes for other characters. The first 128 Unicode code points are the ASCII characters, which means that any ASCII text is also a UTF-8 text.

UCS-2 uses two bytes (16 bits) for each character but can only encode the first 65,536 code points, the so-called Basic Multilingual Plane (BMP). With 1,114,112 code points on 17 planes being possible, and with over 137,000 code points defined so far, UCS-2 is only able to represent less than half of all encoded Unicode characters. Therefore, UCS-2 is outdated, though still widely used in software. UTF-16 extends UCS-2, by using the same 16-bit encoding as UCS-2 for the Basic Multilingual Plane, and a 4-byte encoding for the other planes. As long as it contains no code points in the reserved range U+D800–U+DFFF, a UCS-2 text is a valid UTF-16 text.

UTF-32 (also referred to as UCS-4) uses four bytes for each character. Like UCS-2, the number of bytes per character is fixed, facilitating character indexing; but unlike UCS-2, UTF-32 is able to encode all Unicode code points. However, because each character uses four bytes, UTF-32 takes significantly more space than other encodings, and is not widely used.

Unicode in Microsoft Windows

Microsoft was one of the first companies to implement Unicode (at the time, UCS-2, which evolved into UTF-16) in its products, and as of 2019 it is still improving operating-system support for UTF-8. Windows NT was the first operating system to use "wide characters" in system calls. Initially using the UCS-2 encoding scheme, it was upgraded to UTF-16 starting with Windows 2000, allowing representation of additional planes with surrogate pairs.

Universal Coded Character Set

The Universal Coded Character Set (UCS) is a standard set of characters defined by the International Standard ISO/IEC 10646, Information technology — Universal Coded Character Set (UCS) (plus amendments to that standard), which is the basis of many character encodings. The latest version contains over 136,000 abstract characters, each identified by an unambiguous name and an integer number called its code point. This ISO/IEC 10646 standard is maintained in conjunction with The Unicode Standard ("Unicode"), and they are code-for-code identical.

Characters (letters, numbers, symbols, ideograms, logograms, etc.) from the many languages, scripts, and traditions of the world are represented in the UCS with unique code points. The inclusiveness of the UCS is continually improving as characters from previously unrepresented writing systems are added.

The UCS has over 1.1 million possible code points available for use/allocation, but only the first 65,536 (the Basic Multilingual Plane, or BMP) had entered into common use before 2000. This situation began changing when the People's Republic of China (PRC) ruled in 2006 that all software sold in its jurisdiction would have to support GB 18030. This required software intended for sale in the PRC to move beyond the BMP.

The system deliberately leaves many code points not assigned to characters, even in the BMP. It does this to allow for future expansion or to minimize conflicts with other encoding forms.

Windows-1250

Windows-1250 is a code page used under Microsoft Windows to represent texts in Central European and Eastern European languages that use Latin script, such as Polish, Czech, Slovak, Hungarian, Slovene, Bosnian, Croatian, Serbian (Latin script), Romanian (before 1993 spelling reform) and Albanian. It may also be used with the German language; German-language texts encoded with Windows-1250 and Windows-1252 are identical.

In modern applications UTF-8 or UTF-16 is the preferred encoding; as of August 2017, 0.1% of all web pages use Windows-1250.

Windows-1250 is similar to ISO-8859-2: it has all of ISO-8859-2's printable characters, and more. However, a few of them are rearranged (unlike in Windows-1252, which keeps all printable characters from ISO-8859-1 in the same place). Most of the rearrangements seem to have been done to keep characters shared with Windows-1252 in the same place as in Windows-1252, but three of the moved characters (Ą, Ľ, ź) cannot be explained this way, since they do not occur in Windows-1252 and could have been put in the same positions as in ISO-8859-2 if ˇ had been put, e.g., at 9F. The part that differs from ISO-8859-2 was compared with Windows-1252 in a table not reproduced here; in that table, the shaded positions at A2, A3, AA, AF, B2, B3, BA, BD and BF are the same as in ISO-8859-2, and positions identical in Windows-1252 and Windows-1250 are not shown.

Windows-1255

Windows-1255 is a code page used under Microsoft Windows to write Hebrew. It is an almost compatible superset of ISO 8859-8 – most of the symbols are in the same positions (except for A4, which is 'sheqel sign' in Windows-1255 but 'generic currency sign' in ISO 8859-8 and except for DF, which is undefined in Windows-1255 but 'double low line' in ISO 8859-8), but Windows-1255 adds vowel-points and other signs in lower positions.

Modern applications prefer Unicode to Windows-1255, especially on the Internet, meaning UTF-8, the dominant encoding for web pages (UTF-16 is not used on the Internet, for security reasons). Windows-1255 is used by less than 0.1% of websites.

