Crate bevy_llm


bevy_llm (minimal): a thin bevy wrapper over the llm crate.

  • re-exports llm chat/types so you don’t duplicate data models.
  • streams deltas and tool-calls as bevy events.
  • lets the llm provider manage history (via builder memory).
  • never blocks the main thread: on native, llm futures run on a tiny dedicated tokio runtime (so neither the main thread nor bevy's compute pools block); on wasm, work runs on bevy's async pool, which yields to the browser event loop.
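
putting it together, app setup might look like this (a sketch: how a provider is registered in the Providers map is an assumption — check the crate's examples for the real api):

```rust
use bevy::prelude::*;
use bevy_llm::{BevyLlmPlugin, LLMBackend, LLMBuilder, Providers};

fn main() {
    // build a provider via the re-exported llm builder; the backend,
    // key source, and model name here are placeholders.
    let provider = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
        .model("gpt-4o-mini")
        .build()
        .expect("failed to build provider");

    App::new()
        .add_plugins(DefaultPlugins)
        // the plugin requires a Providers resource; `Providers::default()`
        // and `insert` are hypothetical — see the Providers docs.
        .insert_resource({
            let mut providers = Providers::default();
            providers.insert("openai", provider); // hypothetical api
            providers
        })
        .add_plugins(BevyLlmPlugin)
        .run();
}
```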

api docs (types & traits): https://docs.rs/llm

  • chat provider: llm::chat::ChatProvider
  • message builder/roles: llm::chat::{ChatMessage, ChatRole, MessageType}
  • streaming: llm::chat::{StreamResponse, StreamChoice, StreamDelta}
  • tools / tool calls: llm::builder::FunctionBuilder, llm::chat::ToolChoice, llm::ToolCall
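
since the provider manages history via builder memory, a provider with memory and a function tool might be configured like this (a sketch against the llm crate's builder; the exact memory and tool method names are assumptions — verify against docs.rs/llm):

```rust
use bevy_llm::{FunctionBuilder, LLMBackend, LLMBuilder};

fn build_provider() {
    let provider = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
        .model("gpt-4o-mini") // placeholder model name
        // history lives provider-side, as the crate intro describes;
        // the memory method name here is an assumption.
        .sliding_window_memory(20)
        // a function tool the model may call; ChatToolCallsEvt is
        // emitted when it does.
        .function(
            FunctionBuilder::new("get_weather")
                .description("look up the current weather for a city"),
        )
        .build()
        .expect("failed to build provider");
    let _ = provider;
}
```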

Structs§

BevyLlmPlugin
bevy plugin: wires up systems, events, and resources. you must insert a Providers resource yourself (before or after adding the plugin). on native, it also inserts a tiny tokio runtime resource by default.
ChatCompletedEvt
ChatDeltaEvt
ChatErrorEvt
ChatMessage
re-export the llm types so downstream code can use the same structs/enums. A single message in a chat conversation.
ChatRequest
insert this component to trigger a chat request for the session entity. the provider manages the history; you only provide the new messages.
ChatSession
attach this to an entity you want to chat with a provider.
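attaching a session and triggering a request could look like this (a sketch: the ChatSession constructor and the ChatRequest fields shown are assumptions; only the component names come from this page):

```rust
use bevy::prelude::*;
use bevy_llm::{ChatMessage, ChatRequest, ChatSession};

fn start_chat(mut commands: Commands) {
    // spawn an entity that chats with a named provider; this
    // `ChatSession::new` constructor is hypothetical.
    let session = commands.spawn(ChatSession::new("openai")).id();

    // inserting ChatRequest triggers the request; the provider keeps
    // the history, so only the new messages are supplied.
    commands.entity(session).insert(ChatRequest {
        // ChatMessage's builder is the llm crate's re-exported type.
        messages: vec![ChatMessage::user().content("hello!").build()],
        ..Default::default() // remaining fields are assumptions
    });
}
```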
ChatStarted
events emitted by the wrapper during/after chat.
ChatToolCallsEvt
FunctionBuilder
re-export the llm types so downstream code can use the same structs/enums. Builder for function tools
LLMBuilder
re-export the llm types so downstream code can use the same structs/enums. Builder for configuring and instantiating LLM providers.
Providers
a map of ready-to-use llm providers.
StreamChoice
re-export the llm types so downstream code can use the same structs/enums. Individual choice in a streaming response
StreamDelta
re-export the llm types so downstream code can use the same structs/enums. Delta content in a streaming response
StreamResponse
re-export the llm types so downstream code can use the same structs/enums. Stream response chunk that mimics OpenAI’s streaming response format
TokioRt
on native we keep a tiny tokio runtime to drive llm futures. we spawn onto this runtime from compute tasks so neither the main thread nor bevy's compute pools block.
ToolCall
re-export the llm types so downstream code can use the same structs/enums. Tool call represents a function call that an LLM wants to make. This is a standardized structure used across all providers.

Enums§

ChatRole
re-export the llm types so downstream code can use the same structs/enums. Role of a participant in a chat conversation.
LLMBackend
re-export the llm types so downstream code can use the same structs/enums. Supported LLM backend providers.
LLMError
re-export the llm types so downstream code can use the same structs/enums. Error types that can occur when interacting with LLM providers.
LlmSet
system set for ordering: schedule ui systems after this set so they run once our events have been emitted for the frame.
MessageType
re-export the llm types so downstream code can use the same structs/enums. The type of a message in a chat conversation.
StreamMsg
ToolChoice
re-export the llm types so downstream code can use the same structs/enums. Tool choice determines how the LLM uses available tools. The behavior is standardized across different LLM providers.

Traits§

ChatProvider
re-export the llm types so downstream code can use the same structs/enums. Trait for providers that support chat-style interactions.
LLMProvider
re-export the llm types so downstream code can use the same structs/enums. Core trait that all LLM providers must implement, combining chat, completion and embedding capabilities into a unified interface

Functions§

send_user_text
helper to enqueue a text user message on a session entity.
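
a typical flow: enqueue text with send_user_text, then read the streamed events in a later system (a sketch: the send_user_text signature and the event fields are assumptions, so the readers below only drain the events):

```rust
use bevy::prelude::*;
use bevy_llm::{
    send_user_text, ChatCompletedEvt, ChatDeltaEvt, ChatErrorEvt, ChatSession,
};

fn send_on_enter(
    keys: Res<ButtonInput<KeyCode>>,
    mut commands: Commands,
    sessions: Query<Entity, With<ChatSession>>,
) {
    if keys.just_pressed(KeyCode::Enter) {
        for session in &sessions {
            // helper from this crate; the exact signature is assumed to
            // take command access, the session entity, and the text.
            send_user_text(&mut commands, session, "tell me a joke");
        }
    }
}

fn read_stream(
    mut deltas: EventReader<ChatDeltaEvt>,
    mut done: EventReader<ChatCompletedEvt>,
    mut errs: EventReader<ChatErrorEvt>,
) {
    for _delta in deltas.read() { /* append delta text to the ui */ }
    for _finished in done.read() { /* mark the message complete */ }
    for _err in errs.read() { /* surface the error to the user */ }
}
```

for correct ordering, schedule systems like `read_stream` after LlmSet so they observe the events emitted this frame.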