NAME

docs/pdds/pdd28_strings.pod - Parrot Strings

ABSTRACT

This PDD describes the conventions for strings in Parrot, including but not limited to support for multiple character sets, encodings and languages.

VERSION

$Revision$

DEFINITIONS

Character

A character is the abstract description of a symbol. It's the smallest chunk of text a computer knows how to deal with. Inside the computer, a character (just like everything else) is a number, so a few further definitions are needed.

Character Set

The Unicode Standard prefers the concepts of character repertoire (a collection of characters) and character code (a mapping which tells you what number represents which character in the repertoire). Character set is commonly used to mean the standard which defines both a repertoire and a code.

Codepoint

A codepoint is the numeric representation of a character according to a given character set. So in ASCII, the character A has codepoint 0x41.

Encoding

An encoding determines how a codepoint is represented inside a computer. Simple encodings like ASCII define that the codepoints 0-127 are simply stored as their numeric equivalents in a single eight-bit byte. Other fixed-width encodings like UCS-2 use more bytes per codepoint to cover a larger range of codepoints. Variable-width encodings like UTF-8 use one byte for codepoints 0-127, two bytes for codepoints 128-2047, and so on.
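
For concreteness, the variable-width idea can be sketched in a few lines of C. This is purely illustrative (it is not Parrot code) and handles only the one- and two-byte UTF-8 cases described above.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: encode one codepoint below 0x800 as UTF-8,
     * returning the number of bytes written (0 for codepoints this
     * toy function does not handle). */
    static size_t utf8_encode(uint32_t cp, uint8_t out[2]) {
        if (cp < 0x80) {                      /* 0-127: one byte     */
            out[0] = (uint8_t)cp;
            return 1;
        }
        if (cp < 0x800) {                     /* 128-2047: two bytes */
            out[0] = (uint8_t)(0xC0 | (cp >> 6));
            out[1] = (uint8_t)(0x80 | (cp & 0x3F));
            return 2;
        }
        return 0;      /* larger codepoints need three or four bytes */
    }

For example, the codepoint 0x209 encodes to the two bytes 0xC8 0x89.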

Character sets and encodings are related but separate concepts. An encoding is the lower-level representation of a string's data, whereas the character set determines higher-level semantics. Typically, character set functions will ask a string's encoding functions to retrieve data from the string, and then process the retrieved data.

Combining Character

A combining character is a Unicode concept. It is a character which modifies the preceding character: accents, lines, circles, boxes, and so on, which are not displayed on their own but are composed with the preceding character.

Grapheme

In linguistics, a grapheme is a single symbol in a writing system (letter, number, punctuation mark, kanji, hiragana, Arabic glyph, Devanagari symbol, etc), including any modifiers (diacritics, etc).

The Unicode Standard defines a grapheme cluster (commonly simplified to just grapheme) as one or more characters forming a visible whole when displayed, in other words, a bundle of a character and all of its combining characters. Because graphemes are the highest-level abstract idea of a "character", they're useful for converting between character sets.

Normalization Form

A normalization form standardizes the representation of a string, either by transforming a sequence of a base character and combining characters into a single precomposed character (composition), or by transforming a precomposed character into a sequence of a base character and combining characters (decomposition). The decomposition forms also define a canonical order for the combining characters, to allow string comparisons. The Unicode Standard defines four normalization forms: NFC and NFKC are composition forms, NFD and NFKD are decomposition forms. See Unicode Normalization Forms for more details.

Grapheme Normalization Form

Grapheme normalization form (NFG) is a normalization which allocates exactly one codepoint to each grapheme.

DESCRIPTION

IMPLEMENTATION

Parrot was designed from the outset to support multiple string formats: multiple character sets and multiple encodings. We don't standardize on Unicode internally (converting every string to a Unicode string), because for the majority of use cases it's still far more efficient to deal with whatever input data the user sends us.

Consumers of Parrot strings need to be aware that there is a plurality of string encodings inside Parrot. (Producers of Parrot strings can do whatever is most efficient for them.) To put it in simple terms: if you find yourself writing *s++ or any other C string idioms, you need to stop and think if that's what you really mean. Not everything is byte-based anymore.

Grapheme Normalization Form

Unicode characters can be expressed in a number of different ways according to the Unicode Standard. This is partly to do with maintaining compatibility with existing character encodings. For instance, in Serbo-Croatian and Slovenian, there's a letter which looks like an i without the dot but with two grave (`) accents (ȉ). Unicode can represent this letter as a composed character 0x209, also known as LATIN SMALL LETTER I WITH DOUBLE GRAVE, which does the job all in one go. It can also represent this letter as a decomposed sequence: LATIN SMALL LETTER I (0x69) followed by COMBINING DOUBLE GRAVE ACCENT (0x30F). We use the term grapheme to refer to a "letter" whether it's represented by a single codepoint or multiple codepoints.

String operations on this kind of representation, where a grapheme may span several codepoints, can be complex and expensive. Operations like comparison and traversal require a series of computations and lookaheads, because any given grapheme may be a sequence of combining characters. The Unicode Standard defines several "normalization forms" that help with this problem. Normalization Form C (NFC), for example, decomposes everything, then re-composes as much as possible: if you see the codepoint sequence 0x69 0x30F, it is replaced by 0x209.

However, Unicode's normalization forms don't go quite far enough to completely solve the problem. For example, Serbo-Croatian is sometimes also written with Cyrillic letters rather than Latin letters. Unicode doesn't have a single composed character for the Cyrillic equivalent of LATIN SMALL LETTER I WITH DOUBLE GRAVE, so it is represented as a decomposed pair: CYRILLIC SMALL LETTER I (0x438) followed by COMBINING DOUBLE GRAVE ACCENT (0x30F). This means that even in the most normalized Unicode form, string manipulation code must always assume that a grapheme may span multiple codepoints, and use expensive lookaheads. The cost is incurred on every operation, even though the particular string operated on might not contain any combining characters. It's particularly noticeable in parsing and regular expression matches, where backtracking operations may re-traverse the characters of a simple string hundreds of times.
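
To make the lookahead cost concrete, here is a small illustrative C sketch (not Parrot code) that counts graphemes in a plain buffer of Unicode codepoints. The is_combining() predicate is a deliberate simplification; real code would consult the Unicode character database.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy predicate: treats only the Combining Diacritical Marks
     * block as combining characters. */
    static int is_combining(int32_t cp) {
        return cp >= 0x300 && cp <= 0x36F;
    }

    /* Count graphemes in a buffer of codepoints that is *not* in NFG:
     * every codepoint must be inspected to decide whether it attaches
     * to the previous one. */
    static size_t grapheme_count(const int32_t *cps, size_t n) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++) {
            if (i == 0 || !is_combining(cps[i]))
                count++;                     /* starts a new grapheme */
        }
        return count;
    }

For the decomposed pair 0x438 0x30F the function returns 1, but only after inspecting both codepoints; every codepoint of every string pays this inspection cost.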

In order to reduce the cost of variable-byte operations and simplify some string manipulation tasks, Parrot defines an additional normalization: Normalization Form G (NFG). In NFG, every grapheme is guaranteed to be represented by a single codepoint. Graphemes that don't have a single codepoint representation in Unicode are given a dynamically generated codepoint unique to the NFG string.

An NFG string is a sequence of signed 32-bit Unicode codepoints. It's equivalent to UCS-4 except for the normalization form semantics. UCS-4 specifies an encoding for Unicode codepoints from 0 to 0x7FFFFFFF; in other words, any codepoint with the most significant bit set is undefined. NFG interprets that unused bit as a sign bit, and reserves all negative codepoints as dynamic codepoints. A negative codepoint acts as an index into a lookup table, which maps between a dynamic codepoint and its associated decomposition.
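
As an illustration (the names here are invented, not Parrot's API), the dynamic-codepoint test and the forward index calculation are each a one-liner in C:

    #include <stdint.h>
    #include <stddef.h>

    /* A dynamic codepoint is recognised with a single sign test. */
    static int is_dynamic_codepoint(int32_t cp) {
        return cp < 0;
    }

    /* Forward lookup index into the grapheme table: dynamic codepoint
     * -1 maps to entry 0, -2 to entry 1, and so on (~cp == -cp - 1). */
    static size_t grapheme_table_index(int32_t cp) {
        return (size_t)~cp;
    }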

In practice, this goes as follows: when our Cyrillic Serbo-Croatian string is converted to NFG, the grapheme is normalized to a single character with the codepoint 0xFFFFFFFF (in other words, -1 in two's complement). At the same time, Parrot inserts an entry into the string's grapheme table, indexed by the dynamic codepoint -1, containing the Unicode decomposition of the grapheme: 0x00000438 0x0000030F.

Parrot will provide both grapheme-aware and codepoint-aware string operations, such as iterators for string traversal and calculations of string length. Individual language implementations can choose between the two types of operations depending on whether their string semantics are character-based or codepoint-based. For languages that don't currently have Unicode support, the grapheme operations will allow them to safely manipulate Unicode data without changing their string semantics.
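
As a sketch of the difference (all names are illustrative, not the real API): the grapheme-aware length of an NFG string is simply the number of stored codepoints, while the codepoint-aware length expands each dynamic codepoint to the length of its Unicode decomposition.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy grapheme table holding only decomposition lengths: entry i
     * belongs to dynamic codepoint ~i, so -1 maps to entry 0. */
    static const size_t decomp_len[] = { 2 };   /* e.g. 0x438 0x30F */

    /* Grapheme-aware length: in NFG, one codepoint per grapheme. */
    static size_t grapheme_length(const int32_t *nfg, size_t n) {
        (void)nfg;
        return n;
    }

    /* Codepoint-aware length: dynamic codepoints count as the length
     * of their decomposition. */
    static size_t codepoint_length(const int32_t *nfg, size_t n) {
        size_t len = 0;
        for (size_t i = 0; i < n; i++)
            len += (nfg[i] < 0) ? decomp_len[~nfg[i]] : 1;
        return len;
    }

For the one-grapheme NFG string { -1 } whose table entry holds 0x438 0x30F, grapheme_length() returns 1 and codepoint_length() returns 2.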

Advantages

Applications that don't care about graphemes can handle an NFG codepoint in a string as if it were any other character. Only applications that care about the specific properties of Unicode characters need to bear the cost of peeking inside the grapheme table and reading the decomposition.

Using negative numbers for dynamic codepoints allows Parrot to check if a particular codepoint is dynamic using a single sign-comparison operation. It also means that NFG can be used without conflict on encodings from 7-bit (signed 8-bit integers) to 63-bit (using signed 64-bit integers) and beyond.

Because any grapheme from any character set can be represented by a single NFG codepoint, NFG strings are useful as an intermediate representation for converting between string types.

Disadvantages

A 32-bit encoding is quite large, considering that the Unicode codespace only extends to 0x10FFFF (21 bits). The Unicode Consortium's FAQ notes that most Unicode interfaces use UTF-16 rather than UTF-32 for memory reasons. This means that although Parrot will use 32-bit NFG strings for optimizations within operations, for the most part individual users should use the native character set and encoding of their data, rather than using NFG strings directly.

The conceptual cost of adding a normalization form beyond those defined in the Unicode Standard has to be considered. However, to fully support Unicode, Parrot already needs to keep track of what normalization form a given string is in, and provide functions to convert between normalization forms. The conceptual cost of one additional normalization form is relatively small.

The grapheme table

When constructing strings in NFG, graphemes not expressible as a single character in Unicode are represented by a dynamic codepoint that indexes the string's grapheme table. When Parrot comes across a multi-codepoint grapheme, it must first determine whether the grapheme already has an entry in the grapheme table. The table therefore cannot be just an array, as that would make this reverse lookup inefficient. The grapheme table is represented, then, as both an array and a hash structure: the array interface provides forward lookup (from dynamic codepoint to decomposition) and the hash interface provides reverse lookup (from decomposition to dynamic codepoint). Converting a multi-codepoint grapheme into a dynamic codepoint can be demonstrated with the following Perl 5 pseudocode, for the grapheme 0x438 0x30F:

   # Reverse lookup via the hash: reuse the existing dynamic codepoint
   # if this grapheme has been seen before; otherwise push its
   # decomposition onto the table and derive a new negative codepoint
   # from the entry's index (~0 == -1, ~1 == -2, ...).
   $codepoint = ($grapheme_lookup->{0x438}{0x30F} ||= do {
                   push @grapheme_table, "\x{438}\x{30F}";
                   ~ $#grapheme_table;
                });
   # Append the (possibly dynamic) codepoint to the NFG string.
   push @string, $codepoint;
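
The same find-or-add step might look roughly like the following self-contained C sketch. The type and function names are invented for illustration, error handling is omitted, and the hash-based reverse lookup is replaced by a linear scan to keep the sketch short.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int32_t *codepoints;     /* Unicode decomposition of the grapheme */
        size_t   length;
    } grapheme_entry;

    typedef struct {
        grapheme_entry *entries; /* array: forward lookup by ~codepoint   */
        size_t          used;
        size_t          alloc;
    } grapheme_table;

    /* Return the dynamic codepoint for a multi-codepoint grapheme,
     * adding it to the table if it has not been seen before. */
    static int32_t
    table_find_or_add(grapheme_table *t, const int32_t *cps, size_t n)
    {
        for (size_t i = 0; i < t->used; i++) {
            if (t->entries[i].length == n
                && memcmp(t->entries[i].codepoints, cps,
                          n * sizeof *cps) == 0)
                return ~(int32_t)i;            /* already present */
        }
        if (t->used == t->alloc) {             /* grow the array  */
            t->alloc = t->alloc ? t->alloc * 2 : 8;
            t->entries = realloc(t->entries, t->alloc * sizeof *t->entries);
        }
        t->entries[t->used].codepoints = malloc(n * sizeof *cps);
        memcpy(t->entries[t->used].codepoints, cps, n * sizeof *cps);
        t->entries[t->used].length = n;
        return ~(int32_t)(t->used++);          /* first entry => -1 */
    }

Called with the decomposition { 0x438, 0x30F }, the function returns -1 the first time and finds the existing entry (returning -1 again) on later calls.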

String API

Strings have the following structure:

  struct parrot_string_t {
      UnionVal                      cache;          /* buffer pointer/length cache        */
      Parrot_UInt                   flags;          /* string and memory management flags */
      UINTVAL                       bufused;        /* bytes of the buffer in use         */
      UINTVAL                       hashval;        /* cached hash value                  */
      UINTVAL                       strlen;         /* length of the string in characters */
      char                         *strstart;       /* start of the string data           */
      const struct _encoding       *encoding;       /* how codepoints are stored          */
      const struct _charset        *charset;        /* character set semantics            */
      const struct _normalization  *normalization;  /* normalization form of the string   */
  };

Deprecation note: the enum parrot_string_representation_t will be removed.

The current string functions will on the whole be maintained, with some modifications for the addition of the NFG string format.

Conversions between normalization form, encoding, and charset

Conversion will be done with a function called string_grapheme_copy:

    INTVAL string_grapheme_copy(STRING *src, STRING *dst)

Converting a string from one format to another involves creating a new empty string with the required attributes, and passing the source string and the new string to string_grapheme_copy. This function iterates through the source string one grapheme at a time, using the character set function pointer get_grapheme (which may read ahead multiple characters with strings that aren't in NFG). For each source grapheme, the function will call set_grapheme on the destination string (which may append multiple characters in non-NFG strings). This conversion effectively uses an intermediate NFG representation.
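
A rough sketch of that loop, written against the structure above, might look as follows. The GRAPHEME type, the exact get_grapheme/set_grapheme signatures, and the use of strlen as the grapheme count are assumptions rather than the final API.

    INTVAL
    string_grapheme_copy(STRING *src, STRING *dst)
    {
        UINTVAL i;
        const UINTVAL len = src->strlen;   /* assumed: graphemes in src */

        for (i = 0; i < len; i++) {
            /* May read ahead over several codepoints if src is not NFG. */
            GRAPHEME g = src->charset->get_grapheme(src, i);
            /* May append several codepoints if dst is not NFG. */
            dst->charset->set_grapheme(dst, i, g);
        }
        return (INTVAL)i;
    }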

String PMC API

REFERENCES

http://sirviente.9grid.es/sources/plan9/sys/doc/utf.ps - Plan 9's Runes are not dissimilar to NFG strings, and this is a good introduction to the Unicode world.

http://www.unicode.org/reports/tr15/ - The Unicode Consortium's explanation of different normalization forms.

http://unicode.org/reports/tr29/ - The Unicode Standard Annex defining "grapheme clusters".

"Unicode: A Primer", Tony Graham - Arguably the most readable book on how Unicode works.

"Advanced Perl Programming", Chapter 6, "Unicode"

