# bevy_llm 🤖
bevy llm plugin (native + wasm). minimal wrapper over the `llm` crate that:
- re-exports `llm` chat/types, so you don't duplicate models
- streams assistant deltas and tool-calls as Bevy events
- keeps history inside the provider (optional sliding-window memory)
- never blocks the main thread (tiny Tokio RT on native; async pool on wasm)
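The non-blocking pattern above can be sketched with plain std threads and channels (the plugin itself uses a small Tokio runtime on native; `spawn_chat_request` below is a hypothetical stand-in, not part of the crate's API):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in: run a blocking "chat request" off the main
// thread and hand the reply back over a channel, the way a Bevy
// system would drain results each frame without stalling rendering.
fn spawn_chat_request(prompt: String) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // stand-in for the provider round-trip
        let reply = format!("echo: {prompt}");
        let _ = tx.send(reply);
    });
    rx
}
```

A system would call `try_recv()` on the receiver each frame, so the main thread never waits on the network.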
## install

```shell
cargo add bevy_llm
# or in Cargo.toml: bevy_llm = "0.2"
```
## capabilities
- Bevy plugin with non-blocking async chat
- Structured streaming with coalesced deltas (~60 Hz or ≥64 chars)
- Fallback to one-shot chat when streaming unsupported
- Tool-calls surfaced via `ChatToolCallsEvt`
- Provider-managed memory with `sliding_window_memory`
- Multiple providers via `Providers` + optional `ChatSession.key`
- Native + wasm (wasm uses `gloo-net`)
- Helper `send_user_text()` API
- Built-in UI widgets
- Persisted conversation storage
- Convenience builders for additional backends
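The coalescing policy listed above (~60 Hz or ≥64 chars) can be sketched as a standalone buffer. This is illustrative only, assuming a 16 ms ≈ 60 Hz time gate; `DeltaCoalescer` is not part of the crate's API:

```rust
use std::time::{Duration, Instant};

/// Sketch of delta coalescing: buffer incoming stream chunks and
/// flush either when the buffer reaches 64 chars or when ~1/60 s
/// (16 ms) has elapsed since the last flush.
struct DeltaCoalescer {
    buf: String,
    last_flush: Instant,
}

impl DeltaCoalescer {
    fn new() -> Self {
        Self { buf: String::new(), last_flush: Instant::now() }
    }

    /// Push a chunk; returns a coalesced delta when a flush triggers.
    fn push(&mut self, chunk: &str) -> Option<String> {
        self.buf.push_str(chunk);
        let due = self.last_flush.elapsed() >= Duration::from_millis(16);
        if self.buf.len() >= 64 || (due && !self.buf.is_empty()) {
            self.last_flush = Instant::now();
            Some(std::mem::take(&mut self.buf))
        } else {
            None
        }
    }
}
```

The real plugin coalesces inside its streaming task; only the thresholds above come from this README.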
## usage

```rust
// module paths below are assumptions; the crate re-exports llm's chat types
use bevy::prelude::*;
use bevy_llm::prelude::*;
```
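To illustrate the sliding-window memory idea (the crate manages this inside the provider via `sliding_window_memory`; this self-contained sketch assumes nothing about its actual API):

```rust
use std::collections::VecDeque;

/// Minimal sketch of provider-side sliding-window memory: keep only
/// the most recent `cap` messages of history.
struct SlidingWindow {
    cap: usize,
    messages: VecDeque<String>,
}

impl SlidingWindow {
    fn new(cap: usize) -> Self {
        Self { cap, messages: VecDeque::new() }
    }

    fn push(&mut self, msg: String) {
        if self.messages.len() == self.cap {
            self.messages.pop_front(); // drop the oldest turn
        }
        self.messages.push_back(msg);
    }

    fn history(&self) -> impl Iterator<Item = &str> {
        self.messages.iter().map(String::as_str)
    }
}
```

Keeping the window in the provider is what lets the plugin avoid duplicating conversation state in your ECS world.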
## examples
- `chat`: simple text-streaming UI with base url / key / model fields
- `tool`: demonstrates parsing JSON-as-text and handling `ChatToolCallsEvt`
run (native):

```shell
cargo run --example chat   # or: --example tool
# optional env for examples
```
wasm is supported; integrate with your preferred bundler and target wasm32-unknown-unknown.
## backends
Configured via the upstream `llm` crate. This plugin works great with OpenAI-compatible servers
(set `base_url` to your `/v1/responses` endpoint). Additional convenience builders may land later.
## compatible bevy versions
| bevy_llm | bevy |
|---|---|
| 0.2 | 0.16 |
## license
licensed under either of
- Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (http://opensource.org/licenses/MIT)
at your option.
## contribution
unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.