Crate llm_weaver
A flexible library for creating and managing coherent narratives that leverage LLMs (Large Language Models) to generate dynamic responses.
Built on OpenAI’s recommended tactics, LLM Weaver facilitates extended interactions with any LLM, seamlessly handling conversations that exceed a model’s maximum context token limit.
Loom is the core of this library. It prompts the configured LLM and stores the message history as TapestryFragment instances. This trait is highly configurable through the Config trait to support a wide range of use cases.
§Nomenclature
- Tapestry: A collection of TapestryFragment instances.
- TapestryFragment: A single part of a conversation containing a list of messages along with other metadata.
- ContextMessage: Represents a single message in a TapestryFragment instance.
- Loom: The machine that drives all of the core methods that should be used across any service that needs to prompt an LLM and receive a response.
- LLM: Large Language Model.
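To make the relationships above concrete, here is a minimal, hypothetical sketch of how these concepts nest (these are simplified stand-ins, not the crate’s actual type definitions; field names such as context_tokens are assumptions):

```rust
// Hypothetical sketch of the nomenclature above (not llm_weaver's real types):
// a Tapestry is a series of TapestryFragments, each holding ContextMessages.

#[derive(Debug, Clone)]
struct ContextMessage {
    role: String,    // e.g. "system", "user", "assistant"
    content: String, // the message text
}

#[derive(Debug, Default)]
struct TapestryFragment {
    context_tokens: u32,                   // running token count for this fragment (assumed field)
    context_messages: Vec<ContextMessage>, // messages accumulated in this fragment
}

// A Tapestry is simply an ordered collection of fragments.
type Tapestry = Vec<TapestryFragment>;

fn main() {
    let mut fragment = TapestryFragment::default();
    fragment.context_messages.push(ContextMessage {
        role: "user".into(),
        content: "Hello".into(),
    });

    let tapestry: Tapestry = vec![fragment];
    assert_eq!(tapestry[0].context_messages.len(), 1);
}
```

When a fragment’s token count would exceed the model’s context limit, a new fragment is started, which is how conversations longer than the context window are accommodated.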
§Architecture
Please refer to the architecture::Diagram for a visual representation of the core components of this library.
§Usage
You must implement the Config trait, which defines the necessary types and methods needed by Loom.
This library uses Redis as the default storage backend for storing TapestryFragment instances. It is expected that a Redis instance is running and that the following environment variables are set:
- REDIS_PROTOCOL
- REDIS_HOST
- REDIS_PORT
- REDIS_PASSWORD
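For example, a local development environment might export these variables as follows (the values shown are illustrative placeholders, not defaults mandated by the library):

```shell
# Illustrative values for a local Redis instance; adjust to your deployment.
export REDIS_PROTOCOL=redis
export REDIS_HOST=127.0.0.1
export REDIS_PORT=6379
export REDIS_PASSWORD=changeme
```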
Should you need a different storage backend, you can create a custom handler by implementing the TapestryChestHandler trait and injecting it into the Config::Chest associated type.
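To illustrate the shape of such a swap, here is a simplified, hypothetical sketch of a storage handler with an in-memory implementation standing in for the default Redis backend. The trait below is an assumption for illustration only; consult the actual TapestryChestHandler definition for the real method signatures:

```rust
use std::collections::HashMap;

// Hypothetical, simplified storage-handler trait (NOT llm_weaver's real
// TapestryChestHandler signatures), shown to convey the pattern of
// injecting a custom backend in place of Redis.
trait ChestHandler {
    fn save_fragment(&mut self, tapestry_id: &str, fragment: String);
    fn load_fragments(&self, tapestry_id: &str) -> Vec<String>;
}

// An in-memory backend: fragments are keyed by tapestry id.
struct InMemoryChest {
    store: HashMap<String, Vec<String>>,
}

impl ChestHandler for InMemoryChest {
    fn save_fragment(&mut self, tapestry_id: &str, fragment: String) {
        self.store
            .entry(tapestry_id.to_string())
            .or_default()
            .push(fragment);
    }

    fn load_fragments(&self, tapestry_id: &str) -> Vec<String> {
        self.store.get(tapestry_id).cloned().unwrap_or_default()
    }
}

fn main() {
    let mut chest = InMemoryChest { store: HashMap::new() };
    chest.save_fragment("story-1", "fragment 0".into());
    assert_eq!(chest.load_fragments("story-1").len(), 1);
    assert!(chest.load_fragments("missing").is_empty());
}
```

In the real crate, the custom handler would then be wired in through the Config::Chest associated type rather than used directly.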
Re-exports§
pub use storage::TapestryChestHandler;
Modules§
Structs§
- A u8 constrained to be in the range MIN..=MAX.
- Context message that represents a single message in a TapestryFragment instance.
- Represents a single part of a conversation containing a list of messages along with other metadata.
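The bounded-u8 struct above can be sketched with const generics. The name BoundedU8 and the Option-returning constructor below are assumptions for illustration, not the crate’s actual API:

```rust
// Hypothetical illustration (not the crate's actual type): a u8 constrained
// to the inclusive range MIN..=MAX, enforced at construction time.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct BoundedU8<const MIN: u8, const MAX: u8>(u8);

impl<const MIN: u8, const MAX: u8> BoundedU8<MIN, MAX> {
    /// Returns Some only when the value lies within MIN..=MAX.
    fn new(v: u8) -> Option<Self> {
        (MIN..=MAX).contains(&v).then_some(Self(v))
    }

    /// Extracts the inner value.
    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    // In range: construction succeeds.
    assert_eq!(BoundedU8::<0, 100>::new(42).unwrap().get(), 42);
    // Out of range: construction is rejected.
    assert!(BoundedU8::<0, 100>::new(150).is_none());
}
```

Validating the bound once in the constructor lets every later use of the value rely on the invariant without re-checking it.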
Traits§
- A trait consisting of the main configuration needed to implement Loom.
- The machine that drives all of the core methods that should be used across any service that needs to prompt an LLM and receive a response.
- An abstraction trait for Redis commands.
- Represents a unique identifier for any arbitrary entity.
- Used to convert a value into one or more Redis argument strings. Most values produce exactly one item, but in some cases it may make sense to produce more than one.