Struct Bloom

pub struct Bloom { /* private fields */ }

The BLOOM model. Ref: Introducing BLOOM

Safety

This implements Send and Sync as it is immutable after construction.
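
Because the struct is Send + Sync, a loaded model can be shared across threads behind an Arc. A minimal sketch, assuming a model was already loaded elsewhere, that the crate is exposed as llm_bloom/llm_base, and that InferenceSessionConfig implements Default:

    use std::sync::Arc;
    use std::thread;

    use llm_base::KnownModel; // brings start_session into scope (path assumed)
    use llm_bloom::Bloom; // crate path assumed; adjust to your dependency tree

    fn share_across_threads(model: Bloom) {
        // Bloom is Send + Sync, so Arc<Bloom> can be moved into worker threads.
        let model = Arc::new(model);
        let handles: Vec<_> = (0..2)
            .map(|_| {
                let model = Arc::clone(&model);
                thread::spawn(move || {
                    // Each thread drives its own InferenceSession; the model itself
                    // is only read, never mutated.
                    let _session = model.start_session(Default::default());
                })
            })
            .collect();
        for handle in handles {
            handle.join().expect("worker thread panicked");
        }
    }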

Implementations

impl Bloom

pub fn load(path: &Path, params: ModelParameters, load_progress_callback: impl FnMut(LoadProgress)) -> Result<Bloom, LoadError>

Load a BLOOM model from the path and configure it per the params. The status of the loading process will be reported through load_progress_callback. This is a helper function on top of llm_base::load.
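
A sketch of the expected call shape. The model path is hypothetical, and it assumes ModelParameters implements Default and that these items are exported from llm_base/llm_bloom:

    use std::path::Path;

    use llm_base::{LoadError, LoadProgress, ModelParameters};
    use llm_bloom::Bloom; // crate path assumed; adjust to your dependency tree

    fn load_bloom() -> Result<Bloom, LoadError> {
        // Hypothetical path to a GGML-format BLOOM model file.
        let path = Path::new("models/bloom-560m-q4_0.bin");

        // Default parameters; adjust context size, memory settings, etc. as needed.
        let params = ModelParameters::default();

        // The callback is invoked as loading progresses (useful for a progress bar).
        Bloom::load(path, params, |_progress: LoadProgress| {
            // e.g. log or update a progress indicator here
        })
    }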

Trait Implementations

impl KnownModel for Bloom

fn vocabulary(&self) -> &Vocabulary

Returns the vocabulary used by this model.

type Hyperparameters = Hyperparameters

Hyperparameters for the model

fn new<E: Error>(hyperparameters: Self::Hyperparameters, params: ModelParameters, vocabulary: Vocabulary, tensor_loader: impl TensorLoader<E>) -> Result<Self, E>

Creates a new model from the provided hyperparameters and ModelParameters. This function is called by the load function.

fn start_session(&self, config: InferenceSessionConfig) -> InferenceSession

Starts a new InferenceSession for this model.

fn evaluate(&self, session: &mut InferenceSession, params: &InferenceParameters, input_tokens: &[TokenId], output_request: &mut OutputRequest)

This function is called by the provided InferenceSession; it will use this model and the InferenceParameters to generate output by evaluating the input_tokens. The OutputRequest is used to specify additional data to fetch from the model.
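
A minimal sketch of the session/evaluate flow through KnownModel. The token IDs are placeholders (real input would be produced by the model's Vocabulary), and it assumes InferenceSessionConfig, InferenceParameters, and OutputRequest implement Default:

    use llm_base::{
        InferenceParameters, InferenceSessionConfig, KnownModel, OutputRequest, TokenId,
    };

    fn feed_tokens<M: KnownModel>(model: &M) {
        let mut session = model.start_session(InferenceSessionConfig::default());
        let params = InferenceParameters::default();

        // Placeholder token IDs; in practice these come from tokenizing a prompt.
        let input_tokens: Vec<TokenId> = vec![0, 1, 2];

        // OutputRequest lets the caller ask the model for extra data (e.g. logits
        // or embeddings); the default requests nothing beyond advancing the session.
        let mut output_request = OutputRequest::default();

        model.evaluate(&mut session, &params, &input_tokens, &mut output_request);
    }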

fn n_context_tokens(&self) -> usize

Get the context size (configured with ModelParameters::n_context_tokens) used by this model.

fn bot_token_id(&self) -> Option<TokenId>

Get the beginning of text/beginning of string token ID, if available. This value is defined by model implementers.

fn eot_token_id(&self) -> TokenId

Get the end of text/end of string token ID. This value is defined by model implementers.

fn inference_parameters(&self) -> &InferenceParameters

Get the default InferenceParameters for this model (used by InferenceSession::infer). This value is configured through ModelParameters::inference_parameters.
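
Taken together, these accessors expose the model-defined constants a caller typically needs before running inference. A short sketch using only the methods documented above (trait path assumed to be llm_base):

    use llm_base::KnownModel;

    fn inspect<M: KnownModel>(model: &M) {
        if let Some(bot) = model.bot_token_id() {
            println!("beginning-of-text token id: {bot}");
        }
        println!("end-of-text token id: {}", model.eot_token_id());
        println!("context window: {} tokens", model.n_context_tokens());

        // Defaults used by InferenceSession::infer when the caller passes none.
        let _defaults = model.inference_parameters();
    }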

impl Send for Bloom

impl Sync for Bloom

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<H, M> Model for M
where H: Hyperparameters, M: KnownModel<Hyperparameters = H>,

fn start_session(&self, config: InferenceSessionConfig) -> InferenceSession

Starts a new InferenceSession for this model.

fn evaluate(&self, session: &mut InferenceSession, params: &InferenceParameters, input_tokens: &[i32], output_request: &mut OutputRequest)

This function is called by the provided InferenceSession; it will use this model and the InferenceParameters to generate output by evaluating the input_tokens. The OutputRequest is used to specify additional data to fetch from the model.

fn vocabulary(&self) -> &Vocabulary

Get the vocabulary (loaded from the GGML file) for this model.

fn n_context_tokens(&self) -> usize

Get the context size (configured with ModelParameters::n_context_tokens) used by this model.

fn bot_token_id(&self) -> Option<i32>

Get the beginning of text/beginning of string token ID, if available. This value is defined by model implementers.

fn eot_token_id(&self) -> i32

Get the end of text/end of string token ID. This value is defined by model implementers.

fn inference_parameters(&self) -> &InferenceParameters

Get the default InferenceParameters for this model (used by InferenceSession::infer). This value is configured through ModelParameters::inference_parameters.
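
Because of this blanket implementation, any KnownModel (including Bloom) can also be driven through a Model trait object. A sketch, assuming Model is object-safe as listed above and that InferenceSessionConfig implements Default:

    use llm_base::{InferenceSessionConfig, Model};

    fn run_erased(model: Box<dyn Model>) {
        // Only the object-safe Model surface is available here, which is enough
        // to start sessions and query the model's constants.
        let _session = model.start_session(InferenceSessionConfig::default());
        println!("context window: {} tokens", model.n_context_tokens());
    }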

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V