# gtokenizers

`gtokenizers` is a library for fast and flexible tokenization of genomic data for use in bioinformatics machine learning models. Its purpose is to provide a simple, highly performant interface for tokenizing genomic data in a way that is compatible with modern machine learning workflows.
## Installation

Run the following in your terminal:

```bash
cargo add gtokenizers
```

or add the following to your `Cargo.toml` file:

```toml
gtokenizers = "0.0.11"
```
## Quickstart

You can create a tokenizer from a universe (or vocab) file like so:

```rust
use gtokenizers::tokenizers::TreeTokenizer;
use gtokenizers::models::region_set::RegionSet;
use std::path::Path;

// build a tokenizer from a universe (or vocab) file
let vocab_path = Path::new("path/to/universe.bed");
let t = TreeTokenizer::from(vocab_path);

// read a set of regions from disk and tokenize them
let rs = RegionSet::from(Path::new("path/to/regions.bed"));
let tokens = t.tokenize(&rs);

for t in tokens {
    println!("{}", t);
}
```
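The universe (or vocab) file above is a BED-style file: one genomic region per line, with tab-separated chromosome, start, and end columns. A minimal illustrative example (coordinates are made up):

```
chr1	100	200
chr1	300	400
chr2	500	600
```

Each region in this file becomes one token in the tokenizer's vocabulary; input regions are matched against it during tokenization.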
## Additional information

This crate is still in early development. We will be adding more features and documentation in the near future. If you have any questions or suggestions, please feel free to open an issue or a pull request.