From: http://en.wikipedia.org/wiki/UTF-8

UTF-8
From Wikipedia, the free encyclopedia

UTF-8 (UCS Transformation Format — 8-bit[1]) is a multibyte character encoding for Unicode. Like UTF-16 and UTF-32, UTF-8 can represent every character in the Unicode character set. Unlike them, it is backward-compatible with ASCII and avoids the complications of endianness and byte order marks (BOM). For these and other reasons, UTF-8 has become the dominant character encoding for the World Wide Web, accounting for more than half of all Web pages.[2][3][4] The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8.[5] The Internet Mail Consortium (IMC) recommends that all e-mail programs be able to display and create mail using UTF-8.[6] UTF-8 is also increasingly being used as the default character encoding in operating systems, programming languages, APIs, and software applications.

UTF-8 encodes each of the 1,112,064[7] code points in the Unicode character set using one to four 8-bit bytes (termed “octets” in the Unicode Standard). Code points with lower numerical values (i.e., earlier code positions in the Unicode character set, which tend to occur more frequently in practice) are encoded using fewer bytes,[8] making the encoding scheme reasonably efficient. In particular, the first 128 characters of the Unicode character set, which correspond one-to-one with ASCII, are encoded using a single octet with the same binary value as the corresponding ASCII character, making valid ASCII text valid UTF-8-encoded Unicode text as well.
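
To see this ASCII compatibility concretely, here is a short illustrative check in Python (nothing below depends on anything beyond the standard codecs):

    # ASCII and UTF-8 coincide for U+0000..U+007F, so encoding pure ASCII
    # text with either codec yields byte-for-byte identical output.
    text = "Hello, world!"
    assert text.encode("ascii") == text.encode("utf-8")

    # A character outside the ASCII range needs more than one byte in UTF-8.
    assert "€".encode("utf-8") == b"\xe2\x82\xac"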

The official IANA code for the UTF-8 character encoding is UTF-8.[9]
Contents

    1 History
    2 Design
    3 Description
        3.1 Codepage layout
        3.2 Invalid byte sequences
        3.3 Invalid code points
    4 Official name and variants
    5 Derivatives
        5.1 CESU-8
        5.2 Modified UTF-8
    6 Byte order mark
    7 Advantages and disadvantages
        7.1 General
            7.1.1 Advantages
            7.1.2 Disadvantages
        7.2 Compared to single-byte encodings
            7.2.1 Advantages
            7.2.2 Disadvantages
        7.3 Compared to other multi-byte encodings
            7.3.1 Advantages
            7.3.2 Disadvantages
        7.4 Compared to UTF-16
            7.4.1 Advantages
            7.4.2 Disadvantages
    8 See also
    9 References
    10 External links

[edit] History

By early 1992 the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF-1 that provided a byte-stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, but did introduce the notion that bytes in the ASCII range of 0–127 represent themselves in UTF, thereby providing backward compatibility.

In July 1992, the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multibyte sequences would include only bytes where the high bit was set.

In August 1992, this proposal was circulated by an IBM X/Open representative to interested parties. Ken Thompson of the Plan 9 operating system group at Bell Labs then made a crucial modification to the encoding to allow it to be self-synchronizing, meaning that it is not necessary to read from the beginning of the string to find code point boundaries. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. Over the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open.[10]

UTF-8 was first officially presented at the USENIX conference in San Diego, from January 25–29, 1993.

The original specification allowed for sequences of up to six bytes, covering numbers up to 31 bits (the original limit of the Universal Character Set). In November 2003 UTF-8 was restricted by RFC 3629 to four bytes covering only the range U+0000 to U+10FFFF, in order to match the constraints of the UTF-16 character encoding.
[edit] Design

The design of UTF‑8 as originally proposed by Dave Prosser and subsequently modified by Ken Thompson was intended to satisfy two objectives:

    To be backward-compatible with ASCII; and
    To be capable of encoding at least 2³¹ characters (the theoretical limit of the first draft proposal for the Universal Character Set).

Being backward-compatible with ASCII implied that every valid ASCII character (ASCII being a 7-bit character set) must also be a valid UTF‑8 character sequence, specifically a one-byte sequence whose binary value equals that of the corresponding ASCII character:

Bits   Last code point   Byte 1
  7    U+007F            0xxxxxxx

Prosser’s and Thompson’s challenge was to extend this scheme to handle code points with up to 31 bits. The solution proposed by Prosser, as subsequently modified by Thompson, was as follows:

Bits   Last code point   Byte 1     Byte 2     Byte 3     Byte 4     Byte 5     Byte 6
  7    U+007F            0xxxxxxx
 11    U+07FF            110xxxxx   10xxxxxx
 16    U+FFFF            1110xxxx   10xxxxxx   10xxxxxx
 21    U+1FFFFF          11110xxx   10xxxxxx   10xxxxxx   10xxxxxx
 26    U+3FFFFFF         111110xx   10xxxxxx   10xxxxxx   10xxxxxx   10xxxxxx
 31    U+7FFFFFFF        1111110x   10xxxxxx   10xxxxxx   10xxxxxx   10xxxxxx   10xxxxxx

The salient features of the above scheme are as follows:

    1. Every valid ASCII character is also a valid UTF‑8 encoded Unicode character with the same binary value. (Thus, valid ASCII text is also valid UTF‑8-encoded Unicode text.)
    2. For every UTF‑8 byte sequence corresponding to a single Unicode character, the first byte unambiguously indicates the length of the sequence in bytes.
    3. All continuation bytes (bytes 2–6 in the table above) have 10 as their two most significant bits (bits 7–6); in contrast, the first byte never has 10 as its two most significant bits. As a result, it is immediately obvious whether any given byte anywhere in a (valid) UTF‑8 stream is the first byte of the sequence for a character or a continuation byte.
    4. As a consequence of no. 3 above, starting from an arbitrary byte anywhere in a (valid) UTF‑8 stream, it is necessary to back up by at most five bytes to reach the beginning of the byte sequence for a single character (at most three bytes in actual UTF‑8, as explained in the next section). If it is not possible to back up, or a byte is missing because of, e.g., a communication failure, a single character can be discarded and the next character read correctly.
    5. Starting with the second row of the table above (two bytes), every additional byte extends the maximum number of encodable bits by five (six bits gained from the additional continuation byte, minus one bit lost in the first byte).
    6. Prosser’s and Thompson’s scheme was sufficiently general to be extended beyond six-byte sequences; however, this would have allowed the bytes FE and FF to occur in valid UTF-8 text (see under Advantages in the section "Compared to single-byte encodings" below), and indefinite extension would lose the desirable feature that the length of a sequence can be determined from its start byte alone.
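
As an illustration of the scheme, the following Python sketch encodes a code point under the original one-to-six-byte proposal. The function name and structure are invented here for exposition; this is not the Plan 9 implementation, and UTF-8 as later restricted by RFC 3629 stops at four bytes:

    def encode_original_utf8(cp: int) -> bytes:
        """Encode a code point using the original 1-6 byte scheme (pre-RFC 3629)."""
        if not 0 <= cp <= 0x7FFFFFFF:
            raise ValueError("code point outside the 31-bit range")
        if cp < 0x80:                        # 7 bits or fewer: one byte, high bit 0
            return bytes([cp])
        # Pick the shortest sequence whose payload capacity fits the code point
        # (11, 16, 21, 26, or 31 payload bits for 2..6 bytes, per the table above).
        for nbytes, last in ((2, 0x7FF), (3, 0xFFFF), (4, 0x1FFFFF),
                             (5, 0x3FFFFFF), (6, 0x7FFFFFFF)):
            if cp <= last:
                break
        out = bytearray()
        for _ in range(nbytes - 1):          # continuation bytes, low bits first
            out.insert(0, 0x80 | (cp & 0x3F))    # each carries six bits: 10xxxxxx
            cp >>= 6
        # Leading byte: nbytes leading 1-bits, a 0-bit, then the remaining bits.
        out.insert(0, ((0xFF << (8 - nbytes)) & 0xFF) | cp)
        return bytes(out)

    # For code points in today's Unicode range this agrees with the standard codec:
    assert encode_original_utf8(0x20AC) == "€".encode("utf-8")          # E2 82 AC
    # The largest 31-bit value uses all six bytes:
    assert encode_original_utf8(0x7FFFFFFF) == b"\xfd\xbf\xbf\xbf\xbf\xbf"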

[edit] Description

UTF-8 is a variable-width encoding, with each character represented by one to four bytes. If the character is encoded by just one byte, the high-order bit is 0 and the other bits give the code point value (in the range 0–127). If the character is encoded by a sequence of more than one byte, the first byte has as many leading '1' bits as the total number of bytes in the sequence, followed by a '0' bit, and the succeeding bytes are all marked by a leading "10" bit pattern. The remaining bits of the byte sequence are concatenated to form the Unicode code point value (in the range 0x80 to 0x10FFFF). Thus a byte with lead bit '0' is a single-byte code, a byte with multiple leading '1' bits is the first byte of a multi-byte sequence, and a byte with a leading "10" bit pattern is a continuation byte of a multi-byte sequence. The format of the bytes thus allows the beginning of each sequence to be detected without decoding from the beginning of the string. UTF-16 limits Unicode to U+10FFFF; therefore UTF-8 is not defined beyond that value, even though it could easily be extended to reach 0x7FFFFFFF.
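
The decoding procedure just described can be sketched directly. In this illustrative Python fragment (function names invented here; checks for overlong sequences and surrogates are omitted for brevity), the leading byte determines the sequence length and each continuation byte contributes six payload bits; the second function demonstrates the self-synchronization property discussed under Design:

    def decode_utf8_char(data: bytes, i: int = 0) -> tuple[int, int]:
        """Decode one character starting at index i; return (code point, next index)."""
        b0 = data[i]
        if b0 < 0x80:                        # 0xxxxxxx: single-byte (ASCII) code
            return b0, i + 1
        elif b0 >> 5 == 0b110:               # 110xxxxx: leads a 2-byte sequence
            n, cp = 2, b0 & 0x1F
        elif b0 >> 4 == 0b1110:              # 1110xxxx: leads a 3-byte sequence
            n, cp = 3, b0 & 0x0F
        elif b0 >> 3 == 0b11110:             # 11110xxx: leads a 4-byte sequence
            n, cp = 4, b0 & 0x07
        else:                                # 10xxxxxx here is a stray continuation
            raise ValueError("invalid leading byte")
        if i + n > len(data):
            raise ValueError("truncated sequence")
        for b in data[i + 1 : i + n]:
            if b >> 6 != 0b10:               # continuation bytes must be 10xxxxxx
                raise ValueError("invalid continuation byte")
            cp = (cp << 6) | (b & 0x3F)      # append six payload bits
        return cp, i + n

    def find_char_start(data: bytes, i: int) -> int:
        """Self-synchronization: back up past continuation bytes to a leading byte."""
        while i > 0 and data[i] >> 6 == 0b10:
            i -= 1
        return i

    assert decode_utf8_char(b"\xe2\x82\xac") == (0x20AC, 3)   # '€'
    assert find_char_start(b"\xe2\x82\xac", 2) == 0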
Code point range       Binary code point              UTF-8 bytes
U+0000 to U+007F       0xxxxxxx                       0xxxxxxx
U+0080 to U+07FF       00000yyy yyxxxxxx              110yyyyy 10xxxxxx
U+0800 to U+FFFF       zzzzyyyy yyxxxxxx              1110zzzz 10yyyyyy 10xxxxxx
U+010000 to U+10FFFF   000wwwzz zzzzyyyy yyxxxxxx     11110www 10zzzzzz 10yyyyyy 10xxxxxx

Examples:

    character '$' = code point U+0024
    = 00100100
    → 00100100
    → hexadecimal 24

    character '¢' = code point U+00A2
    = 00000000 10100010
    → 11000010 10100010
    → hexadecimal C2 A2

    character '€' = code point U+20AC
    = 00100000 10101100
    → 11100010 10000010 10101100
    → hexadecimal E2 82 AC

    character '𤭢' = code point U+24B62
    = 00000010 01001011 01100010
    → 11110000 10100100 10101101 10100010
    → hexadecimal F0 A4 AD A2
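
These worked examples can be reproduced with any conforming implementation, for instance Python's built-in codec:

    # Reproduce the table's examples with the standard library's UTF-8 codec.
    for ch in "$¢€𤭢":
        print(f"U+{ord(ch):04X} -> {ch.encode('utf-8').hex(' ').upper()}")
    # U+0024 -> 24
    # U+00A2 -> C2 A2
    # U+20AC -> E2 82 AC
    # U+24B62 -> F0 A4 AD A2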