Struct llm_chain_openai::chatgpt::Executor
pub struct Executor { /* private fields */ }
The executor for the ChatGPT model. This executor uses the async_openai
crate to communicate with the OpenAI API.
Implementations
Trait Implementations
impl Executor for Executor
type Step = Step
type Output = CreateChatCompletionResponse
type Token = usize
fn execute<'life0, 'async_trait>( &'life0 self, input: <<Executor as Executor>::Step as Step>::Output ) -> Pin<Box<dyn Future<Output = Self::Output> + Send + 'async_trait>> where Self: 'async_trait, 'life0: 'async_trait,
fn apply_output_to_parameters( parameters: Parameters, output: &Self::Output ) -> Parameters
fn combine_outputs(output: &Self::Output, other: &Self::Output) -> Self::Output
fn tokens_used( &self, step: &Step, parameters: &Parameters ) -> Result<TokenCount, PromptTokensError>
fn tokenize_str( &self, step: &Step, doc: &str ) -> Result<Vec<usize>, PromptTokensError>
fn to_string( &self, step: &Step, tokens: &[usize] ) -> Result<String, PromptTokensError>
fn combine_outputs_many(outputs: &[Self::Output]) -> Option<Self::Output>
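The `tokenize_str`/`to_string` pair above converts between text and numeric tokens (`type Token = usize`) and back. As an illustrative toy only (not the crate's implementation, which delegates to the model's real tokenizer), a whitespace "tokenizer" with the same shape can sketch the round-trip:

```rust
// Toy analogue of the `tokenize_str` (str -> Vec<usize>) and `to_string`
// (&[usize] -> String) pair. NOT the real OpenAI tokenizer -- purely a
// shape-compatible sketch using whitespace-separated words as tokens.
use std::collections::HashMap;

struct ToyVocab {
    ids: HashMap<String, usize>, // word -> token id
    words: Vec<String>,          // token id -> word
}

impl ToyVocab {
    fn new() -> Self {
        ToyVocab { ids: HashMap::new(), words: Vec::new() }
    }

    // Analogue of `tokenize_str`: map each word to a numeric token id,
    // assigning fresh ids to previously unseen words.
    fn tokenize_str(&mut self, doc: &str) -> Vec<usize> {
        let mut out = Vec::new();
        for w in doc.split_whitespace() {
            let id = match self.ids.get(w) {
                Some(&id) => id,
                None => {
                    let id = self.words.len();
                    self.words.push(w.to_string());
                    self.ids.insert(w.to_string(), id);
                    id
                }
            };
            out.push(id);
        }
        out
    }

    // Analogue of `to_string`: map token ids back to text.
    fn to_string(&self, tokens: &[usize]) -> String {
        tokens
            .iter()
            .map(|&t| self.words[t].as_str())
            .collect::<Vec<_>>()
            .join(" ")
    }
}

fn main() {
    let mut vocab = ToyVocab::new();
    let tokens = vocab.tokenize_str("to be or not to be");
    assert_eq!(tokens, vec![0, 1, 2, 3, 0, 1]);
    assert_eq!(vocab.to_string(&tokens), "to be or not to be");
    println!("{:?}", tokens);
}
```

The round-trip property (`to_string(tokenize_str(s)) == s` for whitespace-normalized input) is what `tokens_used` relies on when counting how much of the context window a prompt consumes.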
Auto Trait Implementations
impl !RefUnwindSafe for Executor
impl Send for Executor
impl Sync for Executor
impl Unpin for Executor
impl !UnwindSafe for Executor
Blanket Implementations
impl<E, S, O, T> ExecutorTokenCountExt<S, O, T> for E
where
    E: Executor<Step = S, Output = O, Token = T>,

fn split_at_tokens( &self, step: &Step, doc: &Parameters ) -> Result<(Parameters, Option<Parameters>), PromptTokensError>
Splits a Parameters object at the token limit.

fn split_to_fit( &self, step: &Step, doc: &Parameters ) -> Result<Vec<Parameters>, PromptTokensError>
Splits a Parameters object into multiple smaller Parameters objects that fit within the context window size supported by the given model.
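The splitting behavior of these two extension methods can be sketched with a stdlib-only toy (not the crate's code, which operates on `Parameters` via the executor's tokenizer): given a fixed token budget, `split_at_tokens` yields a head that fits plus an optional remainder, while `split_to_fit` keeps splitting until every piece fits.

```rust
// Illustrative toys only (NOT llm-chain's implementation): the same
// splitting ideas, applied to a plain token sequence instead of a
// `Parameters` object.

// Analogue of `split_at_tokens`: a chunk that fits the window, plus an
// optional remainder when the input exceeds it.
fn split_at_tokens(tokens: &[usize], window: usize) -> (Vec<usize>, Option<Vec<usize>>) {
    if tokens.len() <= window {
        (tokens.to_vec(), None)
    } else {
        let (head, tail) = tokens.split_at(window);
        (head.to_vec(), Some(tail.to_vec()))
    }
}

// Analogue of `split_to_fit`: repeat the split until every piece fits
// inside the context window.
fn split_to_fit(tokens: &[usize], window: usize) -> Vec<Vec<usize>> {
    assert!(window > 0, "context window must be positive");
    tokens.chunks(window).map(|c| c.to_vec()).collect()
}

fn main() {
    let tokens: Vec<usize> = (0..7).collect();

    let (head, rest) = split_at_tokens(&tokens, 3);
    assert_eq!(head, vec![0, 1, 2]);
    assert_eq!(rest, Some(vec![3, 4, 5, 6]));

    let parts = split_to_fit(&tokens, 3);
    assert_eq!(parts, vec![vec![0, 1, 2], vec![3, 4, 5], vec![6]]);
    println!("{} chunks", parts.len());
}
```

In the real extension trait the budget comes from `tokens_used` for the given step and model, which is why both methods can fail with `PromptTokensError` rather than taking an explicit window argument.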