Trait BaseLanguageModel
pub trait BaseLanguageModel: Send + Sync {
    // Required methods
    fn llm_type(&self) -> &str;
    fn model_name(&self) -> &str;
    fn config(&self) -> &LanguageModelConfig;
    fn generate_prompt<'life0, 'async_trait>(
        &'life0 self,
        prompts: Vec<LanguageModelInput>,
        stop: Option<Vec<String>>,
        callbacks: Option<Callbacks>,
    ) -> Pin<Box<dyn Future<Output = Result<LLMResult>> + Send + 'async_trait>>
       where Self: 'async_trait,
             'life0: 'async_trait;

    // Provided methods
    fn cache(&self) -> Option<&dyn BaseCache> { ... }
    fn callbacks(&self) -> Option<&Callbacks> { ... }
    fn get_ls_params(&self, stop: Option<&[String]>) -> LangSmithParams { ... }
    fn identifying_params(&self) -> HashMap<String, Value> { ... }
    fn get_token_ids(&self, text: &str) -> Vec<u32> { ... }
    fn get_num_tokens(&self, text: &str) -> usize { ... }
    fn get_num_tokens_from_messages(&self, messages: &[BaseMessage]) -> usize { ... }
}

Abstract base trait for interfacing with language models.

All language model wrappers implement BaseLanguageModel. This trait provides functionality common to both chat models and traditional LLMs.
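As a sketch of what an implementor might look like, the following self-contained example uses simplified stand-ins for the crate's types (LanguageModelConfig, LLMResult, Callbacks, etc. are hypothetical placeholders here, not the real definitions) and a toy "echo" model that produces one Generation per prompt in a single batched pass:

```rust
use std::future::Future;
use std::pin::Pin;

// Simplified stand-ins for the crate's real types (illustration only).
pub struct LanguageModelConfig {
    pub temperature: f32,
}
pub struct LanguageModelInput(pub String);
pub struct Generation {
    pub text: String,
}
pub struct LLMResult {
    pub generations: Vec<Generation>,
}
pub type Callbacks = ();
pub type Result<T> = std::result::Result<T, String>;

// Simplified form of the trait's required methods.
pub trait BaseLanguageModel: Send + Sync {
    fn llm_type(&self) -> &str;
    fn model_name(&self) -> &str;
    fn config(&self) -> &LanguageModelConfig;
    fn generate_prompt<'a>(
        &'a self,
        prompts: Vec<LanguageModelInput>,
        stop: Option<Vec<String>>,
        callbacks: Option<Callbacks>,
    ) -> Pin<Box<dyn Future<Output = Result<LLMResult>> + Send + 'a>>;
}

// A toy model that echoes each prompt back as its generation.
pub struct EchoModel {
    pub config: LanguageModelConfig,
}

impl BaseLanguageModel for EchoModel {
    fn llm_type(&self) -> &str {
        "echo"
    }
    fn model_name(&self) -> &str {
        "echo-1"
    }
    fn config(&self) -> &LanguageModelConfig {
        &self.config
    }
    fn generate_prompt<'a>(
        &'a self,
        prompts: Vec<LanguageModelInput>,
        _stop: Option<Vec<String>>,
        _callbacks: Option<Callbacks>,
    ) -> Pin<Box<dyn Future<Output = Result<LLMResult>> + Send + 'a>> {
        // All prompts are handled in one batched pass.
        Box::pin(async move {
            Ok(LLMResult {
                generations: prompts
                    .into_iter()
                    .map(|p| Generation { text: p.0 })
                    .collect(),
            })
        })
    }
}
```

Returning a `Pin<Box<dyn Future + Send>>` matches the desugared signature shown below, so a real implementor written with the async-trait pattern would look much the same.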

Required Methods§

fn llm_type(&self) -> &str

Return the type identifier for this language model.

This is used for logging and tracing purposes.

fn model_name(&self) -> &str

Get the model name/identifier.

fn config(&self) -> &LanguageModelConfig

Get the configuration for this model.

fn generate_prompt<'life0, 'async_trait>(
    &'life0 self,
    prompts: Vec<LanguageModelInput>,
    stop: Option<Vec<String>>,
    callbacks: Option<Callbacks>,
) -> Pin<Box<dyn Future<Output = Result<LLMResult>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,

Pass a sequence of prompts to the model and return model generations.

Implementations should issue a single batched call when the underlying model exposes a batched API.

§Arguments
  • prompts - The LanguageModelInput values to send to the model.
  • stop - Stop words to use when generating.
  • callbacks - Callbacks to pass through.
§Returns

An LLMResult, which contains a list of candidate Generations for each input prompt.

Provided Methods§

fn cache(&self) -> Option<&dyn BaseCache>

Get the cache for this model, if any.

fn callbacks(&self) -> Option<&Callbacks>

Get the callbacks for this model.

fn get_ls_params(&self, stop: Option<&[String]>) -> LangSmithParams

Get parameters for tracing/monitoring.

fn identifying_params(&self) -> HashMap<String, Value>

Get the identifying parameters for this model.

fn get_token_ids(&self, text: &str) -> Vec<u32>

Get the ordered IDs of tokens in a text.

§Arguments
  • text - The string input to tokenize.
§Returns

A list of token IDs.

fn get_num_tokens(&self, text: &str) -> usize

Get the number of tokens present in the text.

§Arguments
  • text - The string input to tokenize.
§Returns

The number of tokens in the text.

fn get_num_tokens_from_messages(&self, messages: &[BaseMessage]) -> usize

Get the number of tokens in the messages.

§Arguments
  • messages - The message inputs to tokenize.
§Returns

The sum of the number of tokens across the messages.
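To illustrate how the three token helpers typically relate, here is a hypothetical whitespace "tokenizer" (not the crate's actual default; real implementations delegate to the model's own tokenizer, and `&str` stands in for BaseMessage to keep the sketch self-contained): get_num_tokens counts the ids returned by get_token_ids, and the message variant sums per-message counts.

```rust
// Hypothetical whitespace tokenizer; real models delegate to their
// provider's tokenizer (e.g. a BPE vocabulary).
struct ToyTokenizer;

impl ToyTokenizer {
    // Map each whitespace-separated word to a deterministic hash id.
    fn get_token_ids(&self, text: &str) -> Vec<u32> {
        text.split_whitespace()
            .map(|word| {
                word.bytes()
                    .fold(0u32, |h, b| h.wrapping_mul(31).wrapping_add(b as u32))
            })
            .collect()
    }

    // The token count is simply the length of the id list.
    fn get_num_tokens(&self, text: &str) -> usize {
        self.get_token_ids(text).len()
    }

    // Sum of per-message token counts, matching the contract above.
    fn get_num_tokens_from_messages(&self, messages: &[&str]) -> usize {
        messages.iter().map(|m| self.get_num_tokens(m)).sum()
    }
}
```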

Implementors§