Crate llm_weaver


A flexible library for creating and managing coherent narratives, leveraging LLMs (Large Language Models) to generate dynamic responses.

Built on OpenAI’s recommended tactics, LLM Weaver facilitates extended interactions with any LLM, seamlessly handling conversations that exceed a model’s maximum context token limit.

Loom is the core of this library. It prompts the configured LLM and stores the message history as TapestryFragment instances. This trait is highly configurable through the Config trait to support a wide range of use cases.

You must implement the Config trait, which defines the necessary types and methods needed by Loom.

If you are using the default implementation of Config::TapestryChest, it is expected that a Redis instance is running and that the following environment variables are set:

  • REDIS_PROTOCOL
  • REDIS_HOST
  • REDIS_PORT
  • REDIS_PASSWORD
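As a minimal sketch, the snippet below reads these four variables and assembles one common Redis URL shape from them. The fallback values and the exact URL format are illustrative assumptions, not the crate’s actual connection logic.

```rust
use std::env;

// Assemble one common Redis URL shape from its parts.
// NOTE: llm_weaver's internal connection handling may differ;
// this is only an illustration of the expected environment.
fn redis_url(protocol: &str, host: &str, port: &str, password: &str) -> String {
    format!("{protocol}://:{password}@{host}:{port}")
}

fn main() {
    // Read each variable, falling back to placeholder defaults
    // (the defaults here are assumptions for demonstration).
    let get = |key: &str, default: &str| env::var(key).unwrap_or_else(|_| default.to_string());
    let url = redis_url(
        &get("REDIS_PROTOCOL", "redis"),
        &get("REDIS_HOST", "127.0.0.1"),
        &get("REDIS_PORT", "6379"),
        &get("REDIS_PASSWORD", "changeme"),
    );
    println!("{url}");
}
```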

If you need a different storage backend, implement the TapestryChestHandler trait yourself and inject your handler via the Config::TapestryChest associated type.
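The overall pattern looks roughly like the sketch below: a handler trait implemented over an alternative store (here, an in-memory map instead of Redis). The trait and its method names (`save_fragment`, `load_fragment`) are hypothetical stand-ins for illustration; the real TapestryChestHandler methods are defined in the crate’s docs.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the crate's TapestryChestHandler trait.
// The actual trait's method names and signatures are defined by
// llm_weaver; these are hypothetical placeholders showing the shape
// of a custom storage backend.
trait TapestryChestHandler {
    fn save_fragment(&mut self, key: String, fragment: String);
    fn load_fragment(&self, key: &str) -> Option<&String>;
}

// A custom backend: an in-memory map standing in for Redis.
struct InMemoryChest {
    store: HashMap<String, String>,
}

impl TapestryChestHandler for InMemoryChest {
    fn save_fragment(&mut self, key: String, fragment: String) {
        self.store.insert(key, fragment);
    }

    fn load_fragment(&self, key: &str) -> Option<&String> {
        self.store.get(key)
    }
}

fn main() {
    let mut chest = InMemoryChest { store: HashMap::new() };
    chest.save_fragment("session-1".into(), "fragment payload".into());
    println!("stored {} fragment(s)", chest.store.len());
}
```

The same shape applies to any backend (SQL, filesystem, etc.): implement the handler trait over your store, then point Config::TapestryChest at your type.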

Re-exports

Modules

Structs

Traits

  • A trait consisting of the main configuration needed to implement Loom.
  • The machine that drives all of the core methods that should be used across any service that needs to prompt an LLM and receive a response.
  • Represents a unique identifier for any arbitrary entity.

Type Aliases