pub struct CodeSplitter<Sizer>
where
    Sizer: ChunkSizer,
{ /* private fields */ }
Source code splitter. Recursively splits chunks into the largest semantic units that fit within the chunk size. It will also attempt to merge neighboring chunks if they can fit within the given chunk size.
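A minimal end-to-end sketch of typical usage, relying only on the constructor and chunks method documented below (the short source snippet is illustrative):

use text_splitter::CodeSplitter;

// Build the splitter once, then reuse it for any number of source files.
let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 40).expect("Invalid language");
let source = "fn main() {\n    println!(\"hello\");\n}\n";
for chunk in splitter.chunks(source) {
    println!("{chunk}");
}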
Implementations

impl<Sizer> CodeSplitter<Sizer>
where
    Sizer: ChunkSizer,
pub fn new(
    language: impl Into<Language>,
    chunk_config: impl Into<ChunkConfig<Sizer>>,
) -> Result<Self, CodeSplitterError>
Creates a new CodeSplitter.
use text_splitter::CodeSplitter;
// By default, the chunk sizer is based on characters.
let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 512).expect("Invalid language");
Errors
Will return an error if the language version is too old to be compatible with the current version of the tree-sitter crate.
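Where panicking on an incompatible language is not acceptable, the Result can be handled explicitly. A minimal sketch, assuming only that CodeSplitterError implements Debug (which the expect calls in the examples already rely on):

use text_splitter::CodeSplitter;

match CodeSplitter::new(tree_sitter_rust::LANGUAGE, 512) {
    Ok(splitter) => {
        // Splitter is ready to use.
        let _chunks: Vec<_> = splitter.chunks("fn main() {}").collect();
    }
    Err(err) => eprintln!("failed to build CodeSplitter: {err:?}"),
}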
pub fn chunks<'splitter, 'text: 'splitter>(
    &'splitter self,
    text: &'text str,
) -> impl Iterator<Item = &'text str> + 'splitter
Generate a list of chunks from a given text. Each chunk will be up to the chunk_capacity.
Method
To preserve as much semantic meaning within a chunk as possible, each chunk is composed of the largest semantic units that can fit in the next given chunk. For each splitter type, there is a defined set of semantic levels. Here is an example of the steps used:
- Split the text by increasing semantic levels.
- Check the first item for each level and select the highest level whose first item still fits within the chunk size.
- Merge as many of these neighboring sections of this level or above into a chunk to maximize chunk length. Boundaries of higher semantic levels are always included when merging, so that the chunk doesn’t inadvertently cross semantic boundaries.
The boundaries used to split the text when using the chunks method, in ascending order:
- Characters
- Unicode Grapheme Cluster Boundaries
- Unicode Word Boundaries
- Unicode Sentence Boundaries
- Ascending depth of the syntax tree. So a function would have a higher level than a statement inside of the function, and so on.
Splitting doesn’t occur below the character level, otherwise you could get partial bytes of a char, which may not be a valid unicode str.
use text_splitter::CodeSplitter;
let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 10).expect("Invalid language");
let text = "Some text\n\nfrom a\ndocument";
let chunks = splitter.chunks(text).collect::<Vec<_>>();
assert_eq!(vec!["Some text", "from a", "document"], chunks);
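A larger capacity illustrates the merging step described above: when several neighboring semantic sections fit together, they are emitted as one chunk rather than separately. A sketch only; the exact grouping depends on the sizer and capacity, so the chunks are printed rather than asserted:

use text_splitter::CodeSplitter;

let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 100).expect("Invalid language");
let source = "fn add(a: i32, b: i32) -> i32 {\n    a + b\n}\n\nfn sub(a: i32, b: i32) -> i32 {\n    a - b\n}\n";
for chunk in splitter.chunks(source) {
    // With room to spare, both small functions may land in the same chunk.
    println!("--- chunk ---\n{chunk}");
}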
pub fn chunk_indices<'splitter, 'text: 'splitter>(
    &'splitter self,
    text: &'text str,
) -> impl Iterator<Item = (usize, &'text str)> + 'splitter
Returns an iterator over chunks of the text and their byte offsets.
Each chunk will be up to the chunk_capacity.
See CodeSplitter::chunks for more information.
use text_splitter::CodeSplitter;
let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 10).expect("Invalid language");
let text = "Some text\n\nfrom a\ndocument";
let chunks = splitter.chunk_indices(text).collect::<Vec<_>>();
assert_eq!(vec![(0, "Some text"), (11, "from a"), (18, "document")], chunks);
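The offsets are byte indices into the original text, so each chunk can be recovered (or highlighted) by slicing the input directly:

use text_splitter::CodeSplitter;

let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 10).expect("Invalid language");
let text = "Some text\n\nfrom a\ndocument";
for (offset, chunk) in splitter.chunk_indices(text) {
    // Each offset is a byte index into `text`, so slicing gives back the chunk.
    assert_eq!(chunk, &text[offset..offset + chunk.len()]);
}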
pub fn chunk_char_indices<'splitter, 'text: 'splitter>(
    &'splitter self,
    text: &'text str,
) -> impl Iterator<Item = ChunkCharIndex<'text>> + 'splitter
Returns an iterator over chunks of the text with their byte and character offsets.
Each chunk will be up to the chunk_capacity.
See CodeSplitter::chunks for more information.
This will be more expensive than just byte offsets, and for most usage in Rust, just having byte offsets is sufficient. But when interfacing with other languages or systems that require character offsets, this will track the character offsets for you, accounting for any trimming that may have occurred.
use text_splitter::{ChunkCharIndex, CodeSplitter};
let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 10).expect("Invalid language");
let text = "Some text\n\nfrom a\ndocument";
let chunks = splitter.chunk_char_indices(text).collect::<Vec<_>>();
assert_eq!(
    vec![
        ChunkCharIndex { chunk: "Some text", byte_offset: 0, char_offset: 0 },
        ChunkCharIndex { chunk: "from a", byte_offset: 11, char_offset: 11 },
        ChunkCharIndex { chunk: "document", byte_offset: 18, char_offset: 18 },
    ],
    chunks
);
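With non-ASCII input the byte and character offsets diverge, which is when the extra bookkeeping matters. A sketch; the chunks are printed rather than asserted, since the exact boundaries depend on the parser:

use text_splitter::CodeSplitter;

let splitter = CodeSplitter::new(tree_sitter_rust::LANGUAGE, 20).expect("Invalid language");
// "é" is two bytes but one character, so byte_offset and char_offset drift apart.
let text = "// café\nfn main() {}";
for c in splitter.chunk_char_indices(text) {
    println!("bytes={} chars={} chunk={:?}", c.byte_offset, c.char_offset, c.chunk);
}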
Trait Implementations

Auto Trait Implementations
impl<Sizer> Freeze for CodeSplitter<Sizer> where Sizer: Freeze
impl<Sizer> RefUnwindSafe for CodeSplitter<Sizer> where Sizer: RefUnwindSafe
impl<Sizer> Send for CodeSplitter<Sizer> where Sizer: Send
impl<Sizer> Sync for CodeSplitter<Sizer> where Sizer: Sync
impl<Sizer> Unpin for CodeSplitter<Sizer> where Sizer: Unpin
impl<Sizer> UnwindSafe for CodeSplitter<Sizer> where Sizer: UnwindSafe
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.