chardetng 0.1.9

chardetng

crates.io docs.rs Apache 2 / MIT dual-licensed

A character encoding detector for legacy Web content.

Licensing

Please see the file named COPYRIGHT.

Documentation

Generated API documentation is available online.

Purpose

The purpose of this detector is user retention for Firefox by ensuring that the long tail of the legacy Web is not more convenient to use in Chrome than in Firefox. (Chrome deployed ced, which left Firefox less convenient to use until the deployment of this detector.)

About the Name

chardet was the name of Mozilla's old encoding detector. I named this one chardetng, because it is the next generation of encoding detector in Firefox. There is no code reuse from the old chardet.

Optimization Goals

This crate aims to be more accurate than ICU, more complete than chardet, more explainable and modifiable than compact_enc_det (a.k.a. ced), and, in an application that already depends on encoding_rs for other reasons, smaller in added binary footprint than compact_enc_det.

Principle of Operation

In general, chardetng prefers negative matching (ruling possibilities out of the set of plausible encodings) over positive matching. Since negative matching alone is insufficient, there is positive matching, too.

  • Except for ISO-2022-JP, pairs of ASCII bytes never contribute to the detection, which has the effect of ignoring HTML syntax without an HTML-aware state machine.
  • A single encoding error disqualifies an encoding from the set of possible outcomes. Notably, as the length of the input increases, it becomes increasingly improbable for the input to be valid according to a legacy CJK encoding without being intended as such. Also, there are single-byte encodings that have unmapped bytes in areas that are in active use by other encodings, so such bytes narrow the set of possibilities very effectively.
  • A single occurrence of a C1 control character disqualifies an encoding from the set of possible outcomes.
  • The first non-ASCII character being a half-width katakana character disqualifies an encoding. (This is very effective for deciding between Shift_JIS and EUC-JP.)
  • For single-byte encodings, character pairs are given scores according to their relative frequencies in the applicable Wikipedias.
  • There's a variety of smaller penalty rules, such as:
    • For encodings for bicameral scripts, having an upper-case letter follow a lower-case letter is penalized.
    • For Latin encodings, having three non-ASCII letters in a row is penalized a little and having four or more is penalized a lot.
    • For non-Latin encodings, having a non-Latin letter right next to a Latin letter is penalized.
    • For single-byte encodings, having a character pair (excluding pairs where both characters are ASCII) that never occurs in the Wikipedias for the applicable languages is heavily penalized.
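The negative-matching rules above can be illustrated with a small self-contained sketch. Everything here (the enum, the per-encoding byte checks, the function names) is invented for illustration; chardetng's real data structures and candidate set are quite different. The byte facts themselves are standard: windows-1252 leaves five bytes unassigned, while ISO-8859-7 leaves 0xAE, 0xD2, and 0xFF unmapped and decodes 0x80..=0x9F to C1 controls.

```rust
// Toy sketch of negative matching: keep a set of candidate encodings
// and drop any candidate that a byte disqualifies outright.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Candidate {
    Windows1252,
    Iso8859_7, // Greek
}

/// Returns false if `byte` disqualifies `candidate`: either the byte
/// is unmapped in that encoding, or it would decode to a C1 control.
fn still_plausible(candidate: Candidate, byte: u8) -> bool {
    match candidate {
        // windows-1252 maps every byte except five unassigned ones.
        Candidate::Windows1252 => !matches!(byte, 0x81 | 0x8D | 0x8F | 0x90 | 0x9D),
        // ISO-8859-7 leaves 0xAE, 0xD2, and 0xFF unmapped, and decodes
        // 0x80..=0x9F to C1 controls, which the detector treats as
        // disqualifying.
        Candidate::Iso8859_7 => {
            !(0x80..=0x9F).contains(&byte) && !matches!(byte, 0xAE | 0xD2 | 0xFF)
        }
    }
}

/// Narrows the candidate set by negative matching over the input.
fn narrow(input: &[u8], mut candidates: Vec<Candidate>) -> Vec<Candidate> {
    for &b in input {
        // ASCII bytes never rule anything out here (cf. the rule that
        // pairs of ASCII bytes do not contribute to detection).
        if b < 0x80 {
            continue;
        }
        candidates.retain(|&c| still_plausible(c, b));
    }
    candidates
}
```

For example, the byte 0x92 keeps windows-1252 plausible (it decodes to a right single quotation mark) but rules out ISO-8859-7 (it would be a C1 control), which is the kind of narrowing the bullets above describe.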

Notes About Encodings

Known Problems

  • GBK detection is less accurate than in ced for short titles consisting of fewer than six hanzi. This is mostly a consequence of a design that prioritizes small binary size over accuracy on very short inputs.
  • Thai detection is inaccurate for short inputs.
  • windows-1257 detection is very inaccurate. (This detector currently doesn't use trigrams. ced uses 8 KB of trigram data to solve this.)
  • On non-generic domains, some encodings that are confusable with the legacy encodings native to the TLD are excluded from guesses outright unless the input is invalid according to all the TLD-native encodings.
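The last bullet can be sketched as a simple rule (a hedged toy; `allowed_guesses` and its parameters are invented for illustration and are not chardetng's API): confusable encodings only enter the candidate set once the input is invalid according to every TLD-native encoding.

```rust
/// Toy sketch of the TLD rule described above. On a non-generic TLD,
/// encodings confusable with the TLD-native ones are considered only
/// if the input is invalid according to all TLD-native encodings.
fn allowed_guesses<'a>(
    tld_native: &[&'a str],
    confusable: &[&'a str],
    // whether the input is valid according to a given encoding
    is_valid: impl Fn(&str) -> bool,
) -> Vec<&'a str> {
    if tld_native.iter().any(|e| is_valid(e)) {
        // At least one TLD-native encoding fits: confusable
        // encodings are excluded from guesses outright.
        tld_native.iter().copied().filter(|e| is_valid(e)).collect()
    } else {
        // Only when no TLD-native encoding fits do the confusable
        // ones enter the candidate set.
        confusable.iter().copied().filter(|e| is_valid(e)).collect()
    }
}
```

So, for instance, on a .ru domain, input that happens to be valid windows-1251 would never be guessed as a confusable Latin encoding, which is exactly why confusable encodings currently cannot be detected on such TLDs.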

Roadmap

  • Investigate parallelizing the feed method using Rayon.
  • Improve windows-874 detection for short inputs.
  • Improve GBK detection for short inputs.
  • Reorganize the frequency data for telling short GBK, EUC-JP, and EUC-KR inputs apart.
  • Make Lithuanian and Latvian detection on generic domains a lot more accurate (likely requires looking at trigrams).
  • Tune Central European detection.
  • Tune the penalties applied to confusable encodings on non-generic TLDs so that confusable encodings can still be detected there.
  • Reduce the binary size by not storing the scoring for implausible-next-to-alphabetic character classes.
  • Reduce the binary size by classifying ASCII algorithmically.
  • Reduce the binary size by not storing the scores for C1 controls.

Release Notes

0.1.9

  • Fix a bug in ASCII prefix skipping. (Was introduced in 0.1.7.)

0.1.8

  • Avoid detecting English with no-break spaces as GBK or EUC-KR.

0.1.7

  • Avoid misdetecting windows-1252 English as windows-1254.
  • Avoid misdetecting windows-1252 English as IBM866.
  • Improve Chinese and Japanese detection by not awarding single-byte encodings score for a letter next to a digit.
  • Improve Italian, Portuguese, Castilian, Catalan, and Galician detection by taking into account ordinal indicator use.
  • Reduce lookup table size.

0.1.6

  • Tune Central European detection.

0.1.5

  • Improve Thai accuracy a lot.
  • Improve accuracy of some languages a bit.
  • Remove unused Hebrew ASCII table.

0.1.4

  • Properly take into account non-ASCII bytes at word boundaries for windows-1252. (Especially relevant for Italian and Catalan.)
  • Move Estonian from the Baltic model to the Western model. This improves overall Estonian detection but causes š and ž encoded as windows-1257, ISO-8859-13, or ISO-8859-4 to get misdecoded. (It would be possible to add a post-processing step to adjust for š and ž, but this would cause reloads given the way chardetng is integrated with Firefox.)
  • Properly classify letters that ISO-8859-4 has but windows-1257 doesn't have in order to avoid misdetecting non-ISO-8859-4 input as ISO-8859-4.
  • Improve character classification of windows-1254.
  • Avoid classifying byte 0xA1 or above as space-like.
  • Reduce binary size by collapsing similar character classes.

0.1.3

  • Return the TLD-affiliated encoding if UTF-8 is valid but prohibited.

0.1.2

  • Return UTF-8 if valid and allowed even if all-ASCII.
  • Return windows-1252 if UTF-8 is valid but prohibited, because various test cases require this.

0.1.1

  • Detect Visual Hebrew more often.

0.1.0

  • Initial release.