# blazegram

Declarative Telegram bot framework for Rust.

One screen at a time. Zero garbage in chat. Direct MTProto over persistent TCP.
## Why blazegram?

| | HTTP Bot API | blazegram |
|---|---|---|
| Latency | ~50 ms per call (2 hops) | ~5 ms (direct MTProto) |
| File uploads | 50 MB limit | 2 GB |
| Connection | new HTTP request per call | persistent TCP socket |
| Message management | manual IDs everywhere | automatic diffing |
| Chat cleanup | you delete manually | auto-managed |
blazegram holds a single persistent TCP socket to Telegram's datacenter via grammers MTProto — no webhook server, no middleman, no HTTP overhead.
On top of that, it introduces the Screen abstraction: declare what the user should see, and a Virtual Chat Differ computes the minimal set of API calls to get there.
## Quick start

```toml
[dependencies]
blazegram = "0.4"
tokio = { version = "1", features = ["full"] }
```

```rust
use blazegram::prelude::*;

#[tokio::main]
async fn main() {
    Bot::builder(API_ID, API_HASH, BOT_TOKEN)
        .command("start", handler!(|ctx| ctx.reply("Hello! 👋").await))
        .run()
        .await;
}
```

First launch authenticates via MTProto and creates a `.session` file.
Subsequent starts reconnect in under 100 ms.
## Core concepts

### Screens & the Differ

A Screen is a declarative snapshot of what the user should see. Call `navigate()` and the differ handles everything:

- callback (button press) → edit in place (1 API call)
- user sent text / command → delete old + send (2–3 calls)
- content identical → nothing (0 calls)

No message IDs. No "should I edit or re-send?" logic. No stale buttons lingering in chat.
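To make the decision rule concrete, here is a standalone sketch of a virtual-chat differ. The `Msg` and `Op` types are hypothetical, invented for this example, and not blazegram's internals:

```rust
// Illustrative sketch of a virtual-chat differ (hypothetical types,
// not blazegram's actual implementation).
#[derive(PartialEq, Clone)]
struct Msg {
    text: String,
    keyboard: Vec<String>,
}

#[derive(Debug, PartialEq)]
enum Op {
    Nothing,       // content identical → 0 API calls
    EditInPlace,   // callback press → 1 API call
    DeleteAndSend, // user sent text → 2–3 API calls
}

fn diff(old: &Msg, new: &Msg, user_sent_text: bool) -> Op {
    if old == new {
        Op::Nothing
    } else if user_sent_text {
        // The screen must move below the user's new message.
        Op::DeleteAndSend
    } else {
        Op::EditInPlace
    }
}

fn main() {
    let old = Msg { text: "Menu".into(), keyboard: vec!["A".into()] };
    let same = old.clone();
    let new = Msg { text: "Settings".into(), keyboard: vec![] };

    assert_eq!(diff(&old, &same, false), Op::Nothing);
    assert_eq!(diff(&old, &new, false), Op::EditInPlace);
    assert_eq!(diff(&old, &new, true), Op::DeleteAndSend);
    println!("ok");
}
```

The three branches correspond one-to-one to the cases listed above.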
```rust
# use blazegram::prelude::*;

// text + keyboard
text("Pick a color:")
    .keyboard(|k| k.button("🔴 Red", "red").button("🔵 Blue", "blue"))
    .build();

// photo with caption
builder()
    .photo("cat.jpg")
    .caption("Meet the mascot")
    .keyboard(|k| k.button("More", "more"))
    .done()
    .build();

// multi-message screen
builder()
    .text("Intro").done()
    .photo("chart.png").caption("Weekly stats").done()
    .build();
```
### Navigation stack

`push()` / `pop()` give you a navigation stack (capped at 20 levels) —
back buttons work out of the box:

```rust
ctx.push(screen).await?; // push new screen
ctx.pop().await?;        // pop back
```
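One way to picture the 20-level cap, as an illustrative sketch only (this assumes the oldest entry is dropped when the stack is full, which may differ from blazegram's actual policy):

```rust
// Sketch of a depth-capped navigation stack (illustrative only).
struct NavStack<T> {
    items: Vec<T>,
    cap: usize,
}

impl<T> NavStack<T> {
    fn new(cap: usize) -> Self {
        Self { items: Vec::new(), cap }
    }

    // Pushing past the cap drops the oldest entry so depth stays bounded.
    fn push(&mut self, screen: T) {
        if self.items.len() == self.cap {
            self.items.remove(0);
        }
        self.items.push(screen);
    }

    // "Back" pops the most recent screen.
    fn pop(&mut self) -> Option<T> {
        self.items.pop()
    }
}

fn main() {
    let mut nav = NavStack::new(20);
    for i in 0..25 {
        nav.push(i);
    }
    assert_eq!(nav.items.len(), 20); // capped at 20 levels
    assert_eq!(nav.pop(), Some(24)); // back goes to the most recent screen
    println!("ok");
}
```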
## Features

### 🔲 Keyboards & Grids

Fluent keyboard builder with automatic row management:

```rust
.keyboard(|k| k
    .button("🍏 Apples", "apples")
    .button("🍊 Oranges", "oranges")
    .row()
    .button("⬅️ Back", "back"))
```
### 📄 Pagination

One-liner paginated lists with navigation buttons:

```rust
let paginator = Paginator::new(items, 5); // 5 per page
let screen = paginator.paginated_screen(page);
ctx.navigate(screen).await?;
```

← [2/5] → buttons auto-generated. Handles empty lists. Labels localized via i18n.
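The arithmetic behind those auto-generated buttons is simple; here is a standalone sketch with hypothetical helper names (not blazegram's API):

```rust
// Sketch of the pagination math behind "← [2/5] →" (illustrative).

// Number of pages, rounding up; an empty list still shows one page.
fn page_count(total: usize, per_page: usize) -> usize {
    if total == 0 { 1 } else { (total + per_page - 1) / per_page }
}

// Items visible on a given zero-indexed page.
fn page_slice<T>(items: &[T], page: usize, per_page: usize) -> &[T] {
    let start = page * per_page;
    if start >= items.len() {
        return &[];
    }
    let end = (start + per_page).min(items.len());
    &items[start..end]
}

// Button label; pages are zero-indexed internally, one-indexed for display.
fn page_label(page: usize, pages: usize) -> String {
    format!("[{}/{}]", page + 1, pages)
}

fn main() {
    let items: Vec<i32> = (1..=23).collect();
    let pages = page_count(items.len(), 5);
    assert_eq!(pages, 5);
    assert_eq!(page_slice(&items, 1, 5), &[6, 7, 8, 9, 10][..]);
    assert_eq!(page_label(1, pages), "[2/5]");
    println!("ok");
}
```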
### 📝 Multi-step forms

Declarative form wizard with validation, type coercion, and auto-generated keyboards:

```rust
builder()
    .text_step("name", "What's your name?")
        .validator(|s| s.chars().count() >= 2)
        .done()
    .integer_step("age", "How old are you?").min(1).max(120).done()
    .choice_step("color", "Pick a color", ["Red", "Green", "Blue"])
    .confirm_step("Everything correct?")
    .on_complete(form_handler!(|ctx, form| { /* save the form */ }))
    .build()
```

Bad input is auto-deleted, the error is shown as a 3 s toast, and a cancel button is built in.
### ⚡ Progressive updates (streaming)

Stream edits to a single message, auto-throttled to respect Telegram rate limits. Perfect for LLM streaming, progress bars, live dashboards:

```rust
let h = ctx.progressive("Thinking…").await?;
h.update("Thinking… step 1 done").await;
h.update("Thinking… step 2 done").await;
h.finalize("Here is the answer.").await?;
```

If `navigate()` is called before `finalize()`, the stream is cancelled automatically — no races.
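The auto-throttling can be pictured as a coalescing buffer. The sketch below is an illustrative policy, not blazegram's exact algorithm: at most one edit per interval, intermediate updates are merged, and `finalize()` always flushes the last one:

```rust
use std::time::{Duration, Instant};

// Sketch of edit throttling for progressive updates (illustrative policy).
struct Throttled {
    last_edit: Option<Instant>,
    interval: Duration,
    pending: Option<String>,
    edits_sent: usize,
}

impl Throttled {
    fn new(interval: Duration) -> Self {
        Self { last_edit: None, interval, pending: None, edits_sent: 0 }
    }

    fn update(&mut self, text: &str, now: Instant) {
        let due = match self.last_edit {
            None => true,
            Some(t) => now.duration_since(t) >= self.interval,
        };
        if due {
            self.edits_sent += 1; // a real handle would call the edit API here
            self.last_edit = Some(now);
            self.pending = None;
        } else {
            // Too soon: remember only the newest text until the next slot.
            self.pending = Some(text.to_string());
        }
    }

    fn finalize(&mut self) {
        if self.pending.take().is_some() {
            self.edits_sent += 1; // flush the last coalesced edit
        }
    }
}

fn main() {
    let t0 = Instant::now();
    let mut h = Throttled::new(Duration::from_secs(1));
    h.update("a", t0);                                 // sent immediately
    h.update("ab", t0 + Duration::from_millis(100));   // coalesced
    h.update("abc", t0 + Duration::from_millis(200));  // coalesced
    h.finalize();                                      // flushes "abc"
    assert_eq!(h.edits_sent, 2); // 3 updates collapsed into 2 API calls
    println!("ok");
}
```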
### 💬 Reply mode

For conversational bots (LLM wrappers, support bots) that don't need chat cleanup:

```rust
ctx.reply("Hi!").await?;              // sends new message
ctx.reply("Hi! How…").await?;         // edits same message
ctx.reply("Hi! How are you?").await?; // edits same message
// next handler call → fresh message
```

User messages are not deleted. Combine with `freeze_message()` to keep important messages
across `navigate()` transitions.
### 💾 State management

Typed per-chat state with zero boilerplate:

```rust
// key-value
ctx.set("count", 42);
let n: i32 = ctx.get("count").unwrap_or(0);

// or full typed state
let p: Profile = ctx.state();
ctx.set_state(p);
```
4 backends, same API:

| Backend | Setup | Persistence |
|---|---|---|
| In-memory | default | none |
| Memory + snapshot | `.snapshot("state.bin")` | periodic flush to disk |
| redb | `.redb_store("bot.redb")` | pure Rust, ACID, zero C deps |
| Redis | `.redis_store("redis://...")` | multi-instance, feature `redis` |
### 🌍 i18n

FTL-based with automatic user language detection:

```rust
// locales/en.ftl: greeting = Hello, { $name }!
// locales/ru.ftl: greeting = Привет, { $name }!
let text = ctx.t_with("greeting", &[("name", "World")]);
// → "Hello, World!" or "Привет, World!" depending on user.language_code
```

Framework labels (back, next, cancel) are auto-localized.
### 📡 Broadcast

Mass-message all users with built-in rate limiting and optional dismiss button:

```rust
let screen = text("Big update is live! 🎉").build();
let result = bot.broadcast(screen).await;
// result.sent = 1523, result.blocked = 12, result.failed = 0
```
### 🔌 Inline mode

Declarative result builders with auto-pagination:

```rust
.on_inline(|ctx, query| {
    // build inline results for `query`
})
```
### 🛡️ Middleware

Composable middleware chain — auth, throttle, logging, analytics:

```rust
builder()
    .middleware(auth)
    .middleware(throttle)
    .middleware(logging)
    .run()
    .await;
```
### 🧪 Testing

Full test harness with MockBotApi — no network, no tokens:

```rust
#[tokio::test]
async fn start_command_replies() { /* send "/start", assert on the reply */ }

#[tokio::test]
async fn button_press_edits_menu() { /* simulate a callback, assert the edit */ }
```

Simulate any update type: text, callbacks, photos, voice, stickers, locations, payments, member joins/leaves.
### 💳 Payments (Stars & Fiat)

```rust
// Send invoice (Telegram Stars)
ctx.send_invoice(invoice).await?;

// Handle checkout
.on_pre_checkout(handler!(|ctx, query| { /* approve */ }))
.on_successful_payment(handler!(|ctx, payment| { /* grant access */ }))
```
## Good to know

- Unrecognized messages are deleted by default to keep the chat clean.
  Disable with `.delete_unrecognized(false)`.
- Rate limiting is adaptive: global (30 rps), per-chat (1 rps private, 20/min groups),
  with automatic FLOOD_WAIT retry. `answer_callback_query` bypasses the limiter.
- Entity fallback: if HTML formatting fails, the executor automatically retries as plain text.
- The `handler!` macro eliminates `Box::pin(async move { ... })` boilerplate:

```rust
handler!(|ctx| { /* … */ })            // commands, callbacks
handler!(|ctx, text| { /* … */ })      // on_input
form_handler!(|ctx, form| { /* … */ }) // form completion
```
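The global 30 rps cap behaves like a token bucket. Here is a standalone sketch of that idea (illustrative, not blazegram's limiter):

```rust
// Sketch of a token-bucket limiter like a global 30 rps cap (illustrative).
struct Bucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl Bucket {
    fn new(rps: f64) -> Self {
        Self { capacity: rps, tokens: rps, refill_per_sec: rps }
    }

    // Advance time, then try to take one token for an API call.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens =
            (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // caller would sleep here, or back off on FLOOD_WAIT
        }
    }
}

fn main() {
    let mut global = Bucket::new(30.0);
    // 30 calls at the same instant drain the bucket...
    let burst = (0..30).filter(|_| global.try_acquire(0.0)).count();
    assert_eq!(burst, 30);
    // ...the 31st must wait,
    assert!(!global.try_acquire(0.0));
    // but 1/30 s later one token has refilled.
    assert!(global.try_acquire(1.0 / 30.0));
    println!("ok");
}
```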
## Architecture

```text
Handlers   .command() / .callback() / .on_input()
    │
    ▼
Ctx        navigate() / push() / pop() / reply()
    │
    ▼
Differ     old msgs + new Screen → minimal ops
    │
    ▼
Executor   FLOOD_WAIT retry, entity fallback
    │
    ▼
BotApi     70+ async methods (trait, mockable)
    │
    ▼
grammers   MTProto → Telegram DC (persistent TCP)
```

Per-chat mutex guarantees sequential update processing. No race conditions.
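The per-chat guarantee can be sketched with one mutex per chat id. This is an illustrative sketch using plain threads and `std::sync` rather than the real async runtime:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch of per-chat serialization (illustrative, not blazegram's code):
// one mutex per chat id, so updates for the same chat run one at a time
// while different chats proceed in parallel.
fn process_all(n_updates: u32, n_chats: i64) -> usize {
    let locks: Arc<Mutex<HashMap<i64, Arc<Mutex<Vec<u32>>>>>> =
        Arc::new(Mutex::new(HashMap::new()));

    let mut handles = Vec::new();
    for update in 0..n_updates {
        let locks = Arc::clone(&locks);
        handles.push(thread::spawn(move || {
            let chat_id = (update as i64) % n_chats;
            // Look up (or create) this chat's own mutex.
            let chat_lock = locks
                .lock()
                .unwrap()
                .entry(chat_id)
                .or_insert_with(|| Arc::new(Mutex::new(Vec::new())))
                .clone();
            // Holding the chat's mutex: the handler body runs alone here.
            chat_lock.lock().unwrap().push(update);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    let locks = locks.lock().unwrap();
    locks.values().map(|v| v.lock().unwrap().len()).sum()
}

fn main() {
    // 100 updates across 4 chats: every update handled exactly once.
    assert_eq!(process_all(100, 4), 100);
    println!("ok");
}
```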
## License

MIT