# rust-mando
Convert Chinese characters (漢字) to pīnyīn, with word segmentation powered by jieba-rs for accurate heteronym resolution. Pronunciation lookups use a compact binary table built from CC-CEDICT at compile time.
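CC-CEDICT itself is a plain-text dictionary, one entry per line in the form `TRAD SIMP [pin1 yin1] /gloss/`. As a rough illustration of what a compile-time build step has to do per line, here is a minimal, self-contained parse sketch (the function name and return shape are illustrative, not this crate's actual build code):

```rust
/// Parse one CC-CEDICT line into (traditional headword, tone-numbered syllables).
/// CC-CEDICT lines look like: 音樂 音乐 [yin1 yue4] /music/
fn parse_cedict_line(line: &str) -> Option<(String, Vec<String>)> {
    if line.starts_with('#') {
        return None; // comment/header lines
    }
    let head = line.split('/').next()?; // drop the gloss part
    let open = head.find('[')?;
    let close = head.find(']')?;
    let trad = head.split_whitespace().next()?.to_string();
    let syllables = head[open + 1..close]
        .split_whitespace()
        .map(str::to_string)
        .collect();
    Some((trad, syllables))
}

fn main() {
    let (word, py) = parse_cedict_line("音樂 音乐 [yin1 yue4] /music/").unwrap();
    println!("{word} -> {}", py.join(" ")); // 音樂 -> yin1 yue4
}
```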
## Why word segmentation?
Mandarin Chinese is written without spaces, and many characters have multiple
pronunciations (多音字, heteronyms) depending on the word they belong to.
Without segmentation, a converter sees the 樂 in 音樂 and 快樂 in isolation
and may pick the wrong reading. rust-mando segments text into words first,
then resolves each character's pronunciation in context.
- 音樂 → yīn yuè (樂 read as yuè in 音樂)
- 中國 → Zhōng guó (not Zhòng guó; 中 is correctly read as zhōng in 中國)
- 快樂 → kuài lè (not kuài yuè; 樂 is correctly read as lè in 快樂)
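To make the contrast concrete, here is a self-contained toy sketch of the word-first lookup idea (the tables and helper are illustrative only, not this crate's API or data): a word-level table is consulted before falling back to per-character defaults.

```rust
use std::collections::HashMap;

/// Toy word-first pinyin lookup illustrating why segmentation matters.
fn lookup(
    words: &HashMap<&str, &str>,
    chars: &HashMap<char, &str>,
    word: &str,
) -> String {
    // Prefer a whole-word entry so heteronyms resolve in context...
    if let Some(py) = words.get(word) {
        return py.to_string();
    }
    // ...otherwise fall back to per-character defaults.
    word.chars()
        .filter_map(|c| chars.get(&c).copied())
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let words = HashMap::from([("音樂", "yīn yuè"), ("快樂", "kuài lè")]);
    let chars = HashMap::from([('音', "yīn"), ('快', "kuài"), ('樂', "lè")]);

    // In context, 樂 is yuè in 音樂 but lè in 快樂:
    println!("{}", lookup(&words, &chars, "音樂")); // yīn yuè
    println!("{}", lookup(&words, &chars, "快樂")); // kuài lè
}
```

A character-by-character converter only has the `chars` table, so it would read 音樂 as "yīn lè"; the word-level entry fixes that.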
## Usage
### Output targets
| Build | ABI |
|---|---|
| `cargo xtask build` | wasm-minimal-protocol |
| `cargo demo <cmd> <input>` | native CLI |
| `cargo test` | native unit tests |
### Style reference

| Style | Example (中) | Description |
|---|---|---|
| `"marks"` | zhōng | Tone diacritic on the vowel (default) |
| `"numbers"` | zhong1 | Tone number at the end of the syllable |

Any unrecognised style string falls back to `"marks"`.
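The two styles differ only in where the tone is written. Converting a tone-numbered syllable to the diacritic form follows the standard pinyin placement rule: mark `a` or `e` if present, else the `o` of `ou`, else the last vowel. A sketch of that rule, independent of this crate's internals:

```rust
/// Convert a tone-numbered syllable (e.g. "zhong1") to diacritic form ("zhōng").
/// Placement rule: a/e take the mark if present, then the o of "ou",
/// otherwise the last vowel. Tone 5 (neutral) gets no mark.
fn numbers_to_marks(syllable: &str) -> String {
    let marked = |v: char, tone: usize| -> char {
        const TABLE: [(char, [char; 4]); 6] = [
            ('a', ['ā', 'á', 'ǎ', 'à']),
            ('e', ['ē', 'é', 'ě', 'è']),
            ('i', ['ī', 'í', 'ǐ', 'ì']),
            ('o', ['ō', 'ó', 'ǒ', 'ò']),
            ('u', ['ū', 'ú', 'ǔ', 'ù']),
            ('ü', ['ǖ', 'ǘ', 'ǚ', 'ǜ']),
        ];
        TABLE
            .iter()
            .find(|(p, _)| *p == v)
            .map(|(_, m)| m[tone - 1])
            .unwrap_or(v)
    };

    let Some(tone) = syllable.chars().last().and_then(|c| c.to_digit(10)) else {
        return syllable.to_string(); // no tone number: return unchanged
    };
    let body: String = syllable.chars().filter(|c| !c.is_ascii_digit()).collect();
    if tone == 5 {
        return body; // neutral tone carries no diacritic
    }
    // Pick which vowel gets the diacritic.
    let vowels: Vec<(usize, char)> = body
        .char_indices()
        .filter(|(_, c)| "aeiouü".contains(*c))
        .collect();
    let target = vowels
        .iter()
        .find(|(_, c)| *c == 'a' || *c == 'e')
        .or_else(|| vowels.iter().find(|(_, c)| *c == 'o'))
        .or_else(|| vowels.last())
        .copied();
    match target {
        Some((i, c)) => body
            .char_indices()
            .map(|(j, ch)| if j == i { marked(c, tone as usize) } else { ch })
            .collect(),
        None => body,
    }
}

fn main() {
    println!("{}", numbers_to_marks("zhong1")); // zhōng
    println!("{}", numbers_to_marks("hao3")); // hǎo
    println!("{}", numbers_to_marks("yue4")); // yuè
}
```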
## As a Typst plugin
After building the WASM module, copy `rust_mando.wasm` next to your `.typ` file and load it with `plugin()`:
```typ
#let mando = plugin("rust_mando.wasm")

// Flat pīnyīn string; non-Chinese tokens are omitted
#str(mando.pinyin_flat(bytes("北京歡迎你"), bytes("marks")))
// → "běi jīng huān yíng nǐ"

// Segmented JSON; pinyin is null for non-Chinese tokens
#json(mando.pinyin_segmented(bytes("你好world!"), bytes("marks")))
// → [
//   {"word": "你好", "pinyin": ["nǐ", "hǎo"]},
//   {"word": "world", "pinyin": null},
//   {"word": "!", "pinyin": null},
// ]
```
## As a native CLI
| Command | Description |
|---|---|
| `flat` | Space-separated pīnyīn (marks + numbers) |
| `segment` | Word-boundary breakdown with pīnyīn per segment |
## As a library
```toml
[dependencies]
rust-mando = "0.1.2"
```
### `to_pinyin_flat(text: &str, style: &str) -> String`
Returns a space-separated string of pīnyīn syllables. Non-Chinese tokens (Latin words, punctuation, whitespace) are omitted from the output entirely.
```rust
use rust_mando::to_pinyin_flat;

to_pinyin_flat("北京歡迎你", "marks");   // "běi jīng huān yíng nǐ"
to_pinyin_flat("北京歡迎你", "numbers"); // "bei3 jing1 huan1 ying2 ni3"
to_pinyin_flat("你好world!", "marks");   // "nǐ hǎo" (non-Chinese omitted)
```
### `to_pinyin_segmented(text: &str, style: &str) -> Vec<Segment>`
Returns one `Segment` per jieba word boundary. `pinyin` is `None` (JSON
`null`) when the word contains no Chinese characters.
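The examples below imply a shape roughly like the following for `Segment` (a sketch inferred from the printed output, not necessarily the crate's exact definition):

```rust
/// Shape implied by the examples: one word per jieba segment,
/// with pinyin only for words containing Chinese characters.
#[derive(Debug, Clone, PartialEq)]
pub struct Segment {
    pub word: String,
    pub pinyin: Option<Vec<String>>, // None for non-Chinese tokens
}

fn main() {
    let seg = Segment {
        word: "你好".to_string(),
        pinyin: Some(vec!["nǐ".to_string(), "hǎo".to_string()]),
    };
    // as_deref().unwrap_or(&[]) treats None as an empty slice:
    println!("{} {}", seg.word, seg.pinyin.as_deref().unwrap_or(&[]).join(" "));
}
```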
```rust
use rust_mando::to_pinyin_segmented;

let segs = to_pinyin_segmented("自然語言處理", "marks");
// [
//   Segment { word: "自然語言", pinyin: Some(["zì", "rán", "yǔ", "yán"]) },
//   Segment { word: "處理", pinyin: Some(["chǔ", "lǐ"]) },
// ]

let mixed = to_pinyin_segmented("你好world!", "marks");
// [
//   Segment { word: "你好", pinyin: Some(["nǐ", "hǎo"]) },
//   Segment { word: "world", pinyin: None },
//   Segment { word: "!", pinyin: None },
// ]

// Access pinyin; use as_deref().unwrap_or(&[]) to handle None gracefully:
for seg in &segs {
    println!("{} → {}", seg.word, seg.pinyin.as_deref().unwrap_or(&[]).join(" "));
}
// 自然語言 → zì rán yǔ yán
// 處理 → chǔ lǐ
```
## Related projects
- rust-canto: Cantonese romanisation (Jyutping) Typst plugin by the same author
- jieba-wasm: WASM bindings for jieba-rs for web apps
- pinyin: lightweight character-by-character pīnyīn crate (no word segmentation)
- chinese_dictionary: the crate that inspired this project's use of CC-CEDICT for pinyin conversion
## License
MIT