bevy_llm (minimal): a thin bevy wrapper over the llm crate.
- re-exports llm chat types so you don’t duplicate data models.
- streams deltas and tool-calls as bevy events.
- lets the llm provider manage history (via builder memory).
- never blocks the main thread: on native we spawn onto a tiny tokio runtime (no bevy pool blocking); on wasm we use bevy’s async pool, which yields to the browser/event loop.
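The wiring described above can be sketched as a minimal bevy app. This is illustrative, not the crate’s documented setup: the plugin and resource names come from the item index below, but how `Providers` is actually constructed here (`Providers::default()`) is an assumption.

```rust
use bevy::prelude::*;
use bevy_llm::{BevyLlmPlugin, Providers};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // the plugin requires a Providers resource to be inserted;
        // Providers::default() is an assumption for illustration — in a
        // real app you would populate it with configured llm providers.
        .insert_resource(Providers::default())
        // wires the systems, events, and resources; on native this also
        // inserts the tiny tokio runtime resource by default.
        .add_plugins(BevyLlmPlugin)
        .run();
}
```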
api docs (types & traits): https://docs.rs/llm
- chat provider: llm::chat::ChatProvider
- message builder/roles: llm::chat::{ChatMessage, ChatRole, MessageType}
- streaming: llm::chat::{StreamResponse, StreamChoice, StreamDelta}
- tools / tool calls: llm::builder::FunctionBuilder, llm::chat::ToolChoice, llm::ToolCall
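Since the provider types are re-exported from the llm crate, building one follows that crate’s builder API. A sketch under the llm crate docs linked above; the specific backend, model string, and env var are assumptions for illustration:

```rust
use llm::builder::{LLMBackend, LLMBuilder};

fn main() {
    // configure an openai-backed provider via the llm crate's builder;
    // treat the model name and api-key source as placeholders.
    let provider = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
        .model("gpt-4o-mini")
        .build()
        .expect("failed to build provider");
    // a built provider would then go into the Providers resource map.
    let _ = provider;
}
```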
Structs§
- BevyLlmPlugin - bevy plugin: wires systems, events, resources. requires you to insert a Providers resource before/after adding the plugin. on native, also inserts a tiny tokio runtime resource by default.
- ChatCompletedEvt
- ChatDeltaEvt
- ChatErrorEvt
- ChatMessage - re-export the llm types so downstream code can use the same structs/enums. A single message in a chat conversation.
- ChatRequest - insert this component to trigger a chat request for the session entity. the provider manages the history; you only provide the new messages.
- ChatSession - attach this to an entity you want to chat with a provider.
- ChatStarted - events emitted by the wrapper during/after chat.
- ChatToolCallsEvt
- FunctionBuilder - re-export the llm types so downstream code can use the same structs/enums. Builder for function tools.
- LLMBuilder - re-export the llm types so downstream code can use the same structs/enums. Builder for configuring and instantiating LLM providers.
- Providers - a map of ready-to-use llm providers.
- StreamChoice - re-export the llm types so downstream code can use the same structs/enums. Individual choice in a streaming response.
- StreamDelta - re-export the llm types so downstream code can use the same structs/enums. Delta content in a streaming response.
- StreamResponse - re-export the llm types so downstream code can use the same structs/enums. Stream response chunk that mimics OpenAI’s streaming response format.
- TokioRt - on native we keep a tiny tokio runtime to drive llm futures. we spawn onto this rt from compute tasks so neither the main thread nor bevy’s compute pools block.
- ToolCall - re-export the llm types so downstream code can use the same structs/enums. Tool call represents a function call that an LLM wants to make. This is a standardized structure used across all providers.
Enums§
- ChatRole - re-export the llm types so downstream code can use the same structs/enums. Role of a participant in a chat conversation.
- LLMBackend - re-export the llm types so downstream code can use the same structs/enums. Supported LLM backend providers.
- LLMError - re-export the llm types so downstream code can use the same structs/enums. Error types that can occur when interacting with LLM providers.
- LlmSet - system ordering so uis can run after we emit events.
- MessageType - re-export the llm types so downstream code can use the same structs/enums. The type of a message in a chat conversation.
- StreamMsg
- ToolChoice - re-export the llm types so downstream code can use the same structs/enums. Tool choice determines how the LLM uses available tools. The behavior is standardized across different LLM providers.
Traits§
- ChatProvider - re-export the llm types so downstream code can use the same structs/enums. Trait for providers that support chat-style interactions.
- LLMProvider - re-export the llm types so downstream code can use the same structs/enums. Core trait that all LLM providers must implement, combining chat, completion and embedding capabilities into a unified interface.
Functions§
- send_user_text - helper to enqueue a text user message on a session entity.
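Putting the session items together, a chat round trip might look like the sketch below. The item names (ChatSession, ChatDeltaEvt, send_user_text) come from the index above, but the exact ChatSession construction, the send_user_text parameters, and the event’s Debug formatting are assumptions for illustration; consult the individual item docs.

```rust
use bevy::prelude::*;
use bevy_llm::{send_user_text, ChatDeltaEvt, ChatSession};

// spawn an entity with a ChatSession, then enqueue a user message on it;
// ChatSession::default() and this send_user_text signature are assumptions.
fn start_chat(mut commands: Commands) {
    let session = commands.spawn(ChatSession::default()).id();
    send_user_text(&mut commands, session, "hello!");
}

// a ui system can then consume streamed deltas; per LlmSet, schedule it
// after the wrapper's systems so it runs once events have been emitted.
fn show_deltas(mut deltas: EventReader<ChatDeltaEvt>) {
    for delta in deltas.read() {
        info!("chat delta: {delta:?}"); // assumes ChatDeltaEvt: Debug
    }
}
```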