Llama Core, abbreviated as `llama-core`, defines a set of APIs. Developers can use these APIs to build applications based on large models, such as chatbots, RAG applications, and more.
Re-exports
pub use error::LlamaCoreError;
pub use graph::EngineType;
pub use graph::Graph;
pub use graph::GraphBuilder;
pub use metadata::ggml::GgmlMetadata;
pub use metadata::piper::PiperMetadata;
pub use metadata::whisper::WhisperMetadata;
pub use metadata::BaseMetadata;
Modules
- Define APIs for audio generation, transcription, and translation.
- Define APIs for chat completion.
- Define APIs for completions.
- Define APIs for computing embeddings.
- Error types for the Llama Core library.
- Define Graph and GraphBuilder APIs for creating a new computation graph.
- Define APIs for image generation and editing.
- Define APIs for querying models.
- rag — Define APIs for RAG operations.
- search — Define APIs for web search operations.
- Define utility functions.
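The graph module above describes `Graph` and `GraphBuilder` APIs for creating a computation graph. As an illustration of that builder pattern, here is a self-contained sketch; the fields, methods, and `EngineType` variants below are hypothetical stand-ins, not `llama-core`'s actual definitions.

```rust
// Hypothetical mock of a GraphBuilder -> Graph flow; all names and
// signatures here are assumptions for illustration only.

#[derive(Debug, Clone, Copy, PartialEq)]
enum EngineType {
    Ggml,
    Whisper,
    Piper,
}

struct Graph {
    engine: EngineType,
    model_alias: String,
}

struct GraphBuilder {
    engine: EngineType,
    model_alias: Option<String>,
}

impl GraphBuilder {
    // Start a builder for the chosen inference engine.
    fn new(engine: EngineType) -> Self {
        Self { engine, model_alias: None }
    }

    // Consume and return the builder so calls can be chained.
    fn with_model_alias(mut self, alias: &str) -> Self {
        self.model_alias = Some(alias.to_string());
        self
    }

    // Validate required settings and produce the finished Graph.
    fn build(self) -> Result<Graph, String> {
        let model_alias = self.model_alias.ok_or("model alias is required")?;
        Ok(Graph { engine: self.engine, model_alias })
    }
}

fn main() {
    let graph = GraphBuilder::new(EngineType::Ggml)
        .with_model_alias("default")
        .build()
        .expect("failed to build graph");
    println!("{:?} {}", graph.engine, graph.model_alias);
}
```

The builder returns a `Result` so that a missing required field becomes an error at `build()` time rather than a panic mid-construction.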
Structs
- Version info of the `wasi-nn_ggml` plugin, including the build number and the commit id.
Enums
- Running mode
- The task type of the stable diffusion context
Constants
Functions
- Get the plugin info.
- Initialize the ggml context.
- Initialize the ggml context for RAG scenarios.
- Initialize the piper context.
- Initialize the stable-diffusion context with the given full diffusion model.
- Initialize the stable-diffusion context with the given standalone diffusion model.
- Initialize the whisper context.
- Return the current running mode.
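The function list follows an init-then-query pattern: a context is initialized once, and later calls (such as querying the running mode) read from it. A minimal sketch of that pattern, assuming a global one-time-set slot; the `RunningMode` variants and function names here are hypothetical, not `llama-core`'s real API.

```rust
use std::sync::OnceLock;

// Hypothetical running-mode enum; the crate's actual variants may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RunningMode {
    Chat,
    Rag,
}

// Write-once global holding the initialized mode.
static RUNNING_MODE: OnceLock<RunningMode> = OnceLock::new();

// Initialize the context; fails if called twice.
fn init_context(mode: RunningMode) -> Result<(), String> {
    RUNNING_MODE
        .set(mode)
        .map_err(|_| "context already initialized".to_string())
}

// Return the current running mode; fails if init was never called.
fn running_mode() -> Result<RunningMode, String> {
    RUNNING_MODE
        .get()
        .copied()
        .ok_or_else(|| "context not initialized".to_string())
}

fn main() {
    init_context(RunningMode::Rag).expect("init failed");
    println!("{:?}", running_mode().unwrap());
}
```

`OnceLock` makes the query side infallible after a successful init and turns a double-init into an explicit error, which mirrors why such crates split initialization and lookup into separate functions.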