pub struct OpenAITokenCounter { /* private fields */ }
Token counter for OpenAI GPT models using tiktoken.
Provides exact token counts for OpenAI models. Automatically selects the correct tokenizer based on the model name.
§Supported Models
| Model | Tokenizer | Context Window |
|---|---|---|
| gpt-4-turbo | cl100k_base | 128K |
| gpt-4 | cl100k_base | 8K |
| gpt-3.5-turbo | cl100k_base | 16K |
| o1-* | o200k_base | 200K |
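The table above implies a simple model-name-to-tokenizer mapping. A minimal sketch of that selection logic, assuming prefix matching on the model name (`select_tokenizer` is a hypothetical name, not the crate's actual internals):

```rust
// Hypothetical sketch of the tokenizer selection the table implies;
// `select_tokenizer` is an assumed name, not part of multi_llm's API.
fn select_tokenizer(model: &str) -> &'static str {
    if model.starts_with("o1-") {
        "o200k_base" // o1 family uses the larger vocabulary
    } else {
        "cl100k_base" // gpt-4 / gpt-4-turbo / gpt-3.5-turbo family
    }
}

fn main() {
    assert_eq!(select_tokenizer("gpt-4-turbo"), "cl100k_base");
    assert_eq!(select_tokenizer("o1-mini"), "o200k_base");
}
```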
§Example
use multi_llm::{OpenAITokenCounter, TokenCounter, LlmResult};

fn main() -> LlmResult<()> {
    let counter = OpenAITokenCounter::new("gpt-4")?;
    let tokens = counter.count_tokens("Hello, world!")?;
    println!("{tokens} tokens");
    Ok(())
}
Implementations§
Trait Implementations§
impl Debug for OpenAITokenCounter
impl TokenCounter for OpenAITokenCounter
fn count_message_tokens(&self, messages: &[Value]) -> LlmResult<u32>
Count tokens in a list of messages, including per-message formatting overhead.
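The formatting overhead can be sketched with OpenAI's commonly documented chat accounting: roughly 3 framing tokens per message plus 3 tokens priming the assistant reply. This is an illustrative assumption about what "overhead" means here, not the crate's confirmed internals, and `count_text` is a crude whitespace stand-in for the real tiktoken BPE count:

```rust
// Crude stand-in for a tiktoken-backed count; real counting uses the
// model's BPE vocabulary, not whitespace splitting.
fn count_text(s: &str) -> u32 {
    s.split_whitespace().count() as u32
}

// Hypothetical sketch of per-message overhead accounting; the constants
// follow OpenAI's published guidance, not multi_llm's verified behavior.
fn count_message_tokens(messages: &[(&str, &str)]) -> u32 {
    let per_message = 3; // role/content framing tokens per message
    let reply_priming = 3; // fixed tokens priming the assistant reply
    messages
        .iter()
        .map(|(role, content)| per_message + count_text(role) + count_text(content))
        .sum::<u32>()
        + reply_priming
}

fn main() {
    let msgs = [("user", "Hello world")];
    // 3 framing + 1 role + 2 content + 3 reply priming = 9
    assert_eq!(count_message_tokens(&msgs), 9);
}
```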
fn max_context_tokens(&self) -> u32
Get the maximum context window size for this tokenizer.
Auto Trait Implementations§
impl Freeze for OpenAITokenCounter
impl RefUnwindSafe for OpenAITokenCounter
impl Send for OpenAITokenCounter
impl Sync for OpenAITokenCounter
impl Unpin for OpenAITokenCounter
impl UnwindSafe for OpenAITokenCounter
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.