Struct llm_chain_openai::chatgpt::Executor
pub struct Executor { /* private fields */ }
The Executor struct for the ChatGPT model. This executor uses the async_openai crate to communicate with the OpenAI API.
Implementations§
impl Executor
pub fn for_client(client: Client, per_invocation_options: Option<PerInvocation>) -> Self
Creates a new Executor with the given client.
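A minimal sketch of calling for_client with a preconfigured async_openai client. The key-loading shown (including the OPENAI_API_KEY variable name and the with_api_key builder method) is an assumption about async_openai's Client API; adapt it to the version you depend on.

```rust
use async_openai::Client;
use llm_chain_openai::chatgpt::Executor;

fn main() {
    // Build the async_openai client explicitly, e.g. to supply the API key
    // in code rather than relying on the client's default configuration.
    // (Assumed builder method; check your async_openai version.)
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or_default();
    let client = Client::new().with_api_key(api_key);

    // Pass None to use the default per-invocation options for every step.
    let _executor = Executor::for_client(client, None);
}
```

Use for_client when you need control over the underlying HTTP client; otherwise new() constructs an executor with defaults.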
Trait Implementations§
impl Executor for Executor
fn max_tokens_allowed(&self, step: &Step<Self>) -> i32
Gets the context size from the model, or returns the default context size.
type PerInvocationOptions = PerInvocation
The per-invocation options type used by this executor. These are the options you can send to each step.
type PerExecutorOptions = PerExecutor
The per-executor options type used by this executor. These are the options you can send to the executor and can’t be set per step.
type StepTokenizer<'a> = OpenAITokenizer
type TextSplitter<'a> = OpenAITextSplitter
fn new_with_options(executor_options: Option<Self::PerExecutorOptions>, invocation_options: Option<Self::PerInvocationOptions>) -> Result<Self, ExecutorCreationError>
Creates a new executor with the given executor options and invocation options. If you don't need to set any options, you can use the new method instead. Read more
fn execute<'life0, 'life1, 'life2, 'async_trait>(&'life0 self, step: &'life1 Step<Self>, parameters: &'life2 Parameters) -> Pin<Box<dyn Future<Output = Result<Self::Output, Self::Error>> + Send + 'async_trait>> where Self: 'async_trait, 'life0: 'async_trait, 'life1: 'async_trait, 'life2: 'async_trait
Executes the given input and returns the resulting output. Read more
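As a sketch of calling execute: the method is async (it returns a boxed future), so it must be awaited inside a runtime. The step construction below is left as a todo!() because the Step builders belong to the wider llm-chain API and are not documented on this page; the tokio runtime is also an assumption.

```rust
use llm_chain::{Parameters, step::Step};
use llm_chain_openai::chatgpt::Executor;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let executor = Executor::new()?;

    // Replace with your Step<Executor> construction; see the llm-chain
    // docs for the prompt/step builders in your version of the crate.
    let step: Step<Executor> = todo!();
    let parameters = Parameters::new();

    // execute returns a future resolving to Result<Self::Output, Self::Error>.
    let _output = executor.execute(&step, &parameters).await?;
    Ok(())
}
```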
fn tokens_used(&self, step: &Step<Self>, parameters: &Parameters) -> Result<TokenCount, PromptTokensError>
Calculates the number of tokens used by the step given a set of parameters. Read more
fn get_tokenizer(&self, step: &Step<Self>) -> Result<OpenAITokenizer, TokenizerError>
Creates a tokenizer, depending on the model used by step. Read more
fn get_text_splitter(&self, step: &Step<Self>) -> Result<Self::TextSplitter<'_>, Self::Error>
Creates a text splitter, depending on the model used by step. Read more
fn new() -> Result<Self, ExecutorCreationError>
fn new_with_default() -> Self
👎 Deprecated since 0.7.0: Use new() instead, this call has an unsafe unwrap
Auto Trait Implementations§
impl !RefUnwindSafe for Executor
impl Send for Executor
impl Sync for Executor
impl Unpin for Executor
impl !UnwindSafe for Executor
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
impl<E, O, T, N> ExecutorTokenCountExt<O, T, N> for E where E: Executor<Output = O, Token = T>, T: Clone
fn split_to_fit(&self, step: &Step<Self>, doc: &Parameters, chunk_overlap: Option<usize>) -> Result<Vec<Parameters, Global>, PromptTokensError>
Splits a Parameters object into multiple smaller Parameters objects that fit within the context window size supported by the given model. Read more
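A sketch of split_to_fit for chunking a document that is too large for one call. The import path for ExecutorTokenCountExt and the way the document is wrapped in Parameters are assumptions about the llm-chain API; the step construction is again left as a todo!().

```rust
use llm_chain::{Parameters, step::Step, traits::ExecutorTokenCountExt};
use llm_chain_openai::chatgpt::Executor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let executor = Executor::new()?;

    // Replace with your Step<Executor> construction.
    let step: Step<Executor> = todo!();

    // Hypothetical: a Parameters object carrying one large document.
    let doc = Parameters::new_with_text("a very long document ...");

    // Split with a 32-token overlap between consecutive chunks so that
    // context is not lost at chunk boundaries.
    let chunks = executor.split_to_fit(&step, &doc, Some(32))?;

    // Each element of `chunks` now fits the model's context window and can
    // be passed to execute individually.
    let _ = chunks;
    Ok(())
}
```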