# Instant Segment: fast English word segmentation in Rust
Instant Segment is a fast Apache-2.0 library for English word segmentation. It is based on the Python wordsegment project written by Grant Jenks, which is in turn based on code from Peter Norvig's chapter Natural Language Corpus Data from the book Beautiful Data (Segaran and Hammerbacher, 2009).
For the microbenchmark included in this repository, Instant Segment is ~500x faster than the Python implementation. The API was carefully constructed so that multiple segmentations can share the underlying state to allow parallel usage.
## How it works
Instant Segment segments a string into words by selecting the splits with the highest probability given a corpus of words and their occurrence counts.
For instance, provided that `choose` and `spain` occur more frequently than `chooses` and `pain`, and that the pair `choose spain` occurs more frequently than `chooses pain`, Instant Segment can help identify the domain `choosespain.com` as ChooseSpain.com, which more likely matches user intent.
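As a rough sketch of the idea (with made-up counts and a naive chain-probability score, not the library's actual data or scoring code), the preferred split is simply the candidate whose words and word pairs are better attested in the corpus:

```python
# Hypothetical corpus counts, chosen only to illustrate the example above.
unigrams = {"choose": 8000, "spain": 5000, "chooses": 900, "pain": 4000}
bigrams = {("choose", "spain"): 120, ("chooses", "pain"): 1}
total = sum(unigrams.values())

def score(words):
    # Naive chain probability: P(w1) * P(w2 | w1) * ..., where the
    # conditional is estimated from bigram counts when one is available
    # and falls back to the unigram probability otherwise.
    p = unigrams[words[0]] / total
    for prev, cur in zip(words, words[1:]):
        if (prev, cur) in bigrams:
            p *= bigrams[(prev, cur)] / unigrams[prev]
        else:
            p *= unigrams[cur] / total
    return p

# "choose spain" wins because both its words and the pair itself
# are far more frequent in the (hypothetical) corpus.
assert score(["choose", "spain"]) > score(["chooses", "pain"])
```

The real library evaluates candidate splits efficiently rather than enumerating them all, but the ranking criterion is the same in spirit.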
Read about how we built and improved Instant Segment for use in production at Instant Domain Search to help our users find relevant domains they can register.
## Using the library
### Python (>= 3.9)
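The Python bindings can be installed with pip (assuming the published package name matches the project name):

```sh
pip install instant-segment
```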
### Rust
```toml
[dependencies]
instant-segment = "0.8.1"
```
## Examples
The following examples expect `unigrams` and `bigrams` to exist. See the examples (Rust, Python) to see how to construct these objects.
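As a rough illustration only (the examples linked above show the canonical construction, typically from corpus frequency data), unigram and bigram counts can be accumulated from a token stream like this:

```python
from collections import Counter

# A tiny, made-up token stream standing in for a real corpus.
tokens = ["choose", "spain", "choose", "spain", "chooses", "pain"]

# Count single words and adjacent word pairs.
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
```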
```python
import instant_segment

segmenter = instant_segment.Segmenter(unigrams, bigrams)
search = instant_segment.Search()
segmenter.segment("instantdomainsearch", search)
print([word for word in search])

--> ['instant', 'domain', 'search']
```
```rust
use instant_segment::{Search, Segmenter};
use std::collections::HashMap;

let segmenter = Segmenter::new(unigrams, bigrams);
let mut search = Search::default();
let words = segmenter
    .segment("instantdomainsearch", &mut search)
    .unwrap();
println!("{:?}", words.collect::<Vec<&str>>());

--> ["instant", "domain", "search"]
```
Check out the tests for more thorough examples: Rust, Python
## Testing
To run the tests, run the following:

```sh
cargo t -p instant-segment --all-features
```
You can also test the Python bindings with:
```sh
make test-python
```