Struct llm_chain_openai::chatgpt::Executor
pub struct Executor { /* private fields */ }
The Executor struct for the ChatGPT model. This executor uses the async_openai crate to communicate with the OpenAI API.
Implementations
impl Executor
pub fn for_client(
    client: Client,
    per_invocation_options: Option<PerInvocation>
) -> Self
Creates a new Executor with the given async_openai client and optional per-invocation options.
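As a minimal sketch, assuming the async-openai and llm-chain-openai crates are in scope and an API key is available in the environment (not runnable from this page alone):

```rust
use async_openai::Client;
use llm_chain_openai::chatgpt::Executor;

// Build an async_openai client (by default it reads OPENAI_API_KEY from
// the environment) and hand it to the executor. Passing `None` means no
// per-invocation options are pinned at construction time.
let client = Client::new();
let exec = Executor::for_client(client, None);
```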
Trait Implementations
impl Executor for Executor
fn max_tokens_allowed(&self, opts: Option<&PerInvocation>) -> i32
Gets the context size from the model specified in the options, or returns the default context size.
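The fallback behaviour can be sketched with a toy lookup; the function name `context_size_for`, the default value, and the model table here are assumptions for illustration, not the crate's API:

```rust
/// Illustrative default used when no model is named in the options
/// (an assumption for this sketch, not the crate's actual constant).
const DEFAULT_CONTEXT_SIZE: i32 = 4096;

/// Return a context size for a model name, falling back to the default
/// when the model is unknown or unspecified.
fn context_size_for(model: Option<&str>) -> i32 {
    match model {
        Some("gpt-4") => 8192,
        _ => DEFAULT_CONTEXT_SIZE,
    }
}

fn main() {
    // No model in the options: fall back to the default context size.
    assert_eq!(context_size_for(None), 4096);
    // Known model: use its own context size.
    assert_eq!(context_size_for(Some("gpt-4")), 8192);
}
```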
type PerInvocationOptions = PerInvocation
The per-invocation options type used by this executor. These are the options you can send to each step.
type PerExecutorOptions = PerExecutor
The per-executor options type used by this executor. These are the options you can send to the executor and can’t be set per step.
type StepTokenizer<'a> = OpenAITokenizer
type TextSplitter<'a> = OpenAITextSplitter
fn new_with_options(
    executor_options: Option<Self::PerExecutorOptions>,
    invocation_options: Option<Self::PerInvocationOptions>
) -> Result<Self, ExecutorCreationError>
Creates a new executor with the given executor options and invocation options. If you don’t need to set any options, you can use the new method instead.
fn execute<'life0, 'life1, 'life2, 'async_trait>(
    &'life0 self,
    opts: Option<&'life1 PerInvocation>,
    prompt: &'life2 Prompt
) -> Pin<Box<dyn Future<Output = Result<Self::Output, Self::Error>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
    'life2: 'async_trait,
Executes the given prompt with the given per-invocation options.
fn tokens_used(
    &self,
    opts: Option<&PerInvocation>,
    prompt: &Prompt
) -> Result<TokenCount, PromptTokensError>
Calculates the number of tokens used by the step given a set of parameters.
fn get_tokenizer(
    &self,
    options: Option<&PerInvocation>
) -> Result<OpenAITokenizer, TokenizerError>
Creates a tokenizer, depending on the model used by the step.
fn get_text_splitter(
    &self,
    options: Option<&PerInvocation>
) -> Result<Self::TextSplitter<'_>, Self::Error>
Creates a text splitter, depending on the model used by the step.
fn new() -> Result<Self, ExecutorCreationError>
Creates a new executor with default options.
Auto Trait Implementations
impl !RefUnwindSafe for Executor
impl Send for Executor
impl Sync for Executor
impl Unpin for Executor
impl !UnwindSafe for Executor
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<E, O, T, N> ExecutorTokenCountExt<O, T, N> for E
where
    E: Executor<Output = O, Token = T>,
    T: Clone,
fn split_to_fit(
    &self,
    step: &Step<Self>,
    doc: &Parameters,
    chunk_overlap: Option<usize>
) -> Result<Vec<Parameters, Global>, PromptTokensError>
Splits a Parameters object into multiple smaller Parameters objects that fit within the context window size supported by the given model.
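The splitting-with-overlap idea behind split_to_fit can be illustrated with a minimal sketch that uses a toy whitespace "tokenizer" in place of OpenAITokenizer; `split_with_overlap` and its chunking policy are assumptions of this sketch, not the extension trait's actual implementation, which operates on Parameters and a Step:

```rust
/// Split a word sequence into chunks of at most `max_tokens` words,
/// re-including `overlap` words from the end of the previous chunk so
/// adjacent chunks share some context (mirroring `chunk_overlap`).
fn split_with_overlap(words: &[&str], max_tokens: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < max_tokens, "overlap must be smaller than the chunk size");
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < words.len() {
        let end = (start + max_tokens).min(words.len());
        chunks.push(words[start..end].join(" "));
        if end == words.len() {
            break;
        }
        // Step back by `overlap` words so the next chunk repeats them.
        start = end - overlap;
    }
    chunks
}

fn main() {
    let words: Vec<&str> = "a b c d e f g".split(' ').collect();
    let chunks = split_with_overlap(&words, 4, 1);
    // Two chunks of at most 4 "tokens", sharing the word "d".
    assert_eq!(chunks, vec!["a b c d", "d e f g"]);
}
```

The real method performs the same kind of windowing but measures size with the model's tokenizer and returns a Vec of Parameters rather than strings.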