# proto_rs

Rust-first Protobuf and gRPC. Define messages, enums, and services as native Rust types — .proto files are generated for you. No protoc, no code-generation step, no conversion boilerplate.

```toml
[dependencies]
proto_rs = "0.11"
```
## Why

- Rust structs and enums are the source of truth, not `.proto` files
- Zero conversion boilerplate between your domain types and the wire format
- No `protoc` binary required — everything is pure Rust
- Single-pass reverse encoder that avoids length precomputation
- Wire-compatible with Prost and any standard Protobuf implementation (for regular, Protobuf-specification-compatible Rust types)
## proto_rs vs Prost

### Workflow

| | proto_rs | Prost |
|---|---|---|
| Source of truth | Rust structs and enums | `.proto` files |
| Rust codegen source | Derive macros at compile time | `protoc` + `prost-build` in `build.rs` |
| `.proto` files | Auto-generated from Rust (opt-in) | Written by hand, required |
| Type conversions | Zero boilerplate — native types encode directly | Manual `From`/`Into` between generated and domain types |
| External tooling | None | Requires the `protoc` binary |
| Tonic integration | Built-in codec, zero-copy responses | Separate `tonic-build` step |
| Custom types | `sun` / `sun_ir` shadow system | Not supported — hand-written wrappers |
### Performance

proto_rs uses a single-pass reverse encoder (upb-style). Fields are written payload-first, then prefixed with tags and lengths — no two-pass measure-then-write like Prost's `encoded_len()` + `encode()`.

In informal benchmarks, encoding and decoding throughput is on par with Prost. Per-field micro-benchmarks show both libraries trading wins depending on field type — proto_rs is faster on enums, nested messages, and collections, while Prost edges ahead on raw bytes and strings. Overall throughput is comparable.
### Zero-copy

`ZeroCopy<T>` pre-encodes a message once from a reference. This eliminates cloning, so you can use references in RPC services.

| | Prost (clone + encode) | proto_rs (zero_copy) | Speedup |
|---|---|---|---|
| Complex message | 122K ops/s | 246K ops/s | 2.01x |
## Quick start
The `#[proto_message]` macro derives encoding, decoding, and `.proto` schema generation. Tags are assigned automatically but can be overridden with `#[proto(tag = N)]`.
## Table of contents
- proto_rs vs Prost
- Messages
- Enums
- Field attributes
- Transparent wrappers
- Generics
- Custom type conversions (sun)
- Zero-copy IR encoding (sun_ir)
- Getters
- Validation
- RPC services
- Zero-copy encoding
- Built-in type support
- Wrapper types
- Third-party integrations
- Schema registry and emission
- Feature flags
- Stable toolchain
- Benchmarks
## Messages
Structs become Protobuf messages. Fields map to proto fields with auto-assigned tags.
Generated `.proto`:

```proto
message Order {
  uint64 id = 1;
  string item = 2;
  uint32 quantity = 3;
  optional string notes = 4;
  repeated string tags = 5;
  map<string, string> metadata = 6;
}
```
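To make the mapping concrete, here is a hand-rolled encoding of the first two `Order` fields per the proto3 wire format. This is illustration only, not proto_rs API — the library emits these bytes for you, and the sample values (`id = 150`, `item = "ab"`) are invented:

```rust
// Proto3 varint: 7 bits per byte, least-significant group first,
// high bit set on every byte except the last (continuation bit).
fn write_varint(buf: &mut Vec<u8>, mut v: u64) {
    while v >= 0x80 {
        buf.push((v as u8 & 0x7f) | 0x80);
        v >>= 7;
    }
    buf.push(v as u8);
}

// Encode `uint64 id = 1` (value 150) and `string item = 2` (value "ab") by hand.
fn encode_sample() -> Vec<u8> {
    let mut buf = Vec::new();
    write_varint(&mut buf, (1 << 3) | 0); // field 1 key, wire type 0 (varint)
    write_varint(&mut buf, 150);          // id
    write_varint(&mut buf, (2 << 3) | 2); // field 2 key, wire type 2 (len-delimited)
    write_varint(&mut buf, 2);            // payload length
    buf.extend_from_slice(b"ab");         // item
    buf
}

fn main() {
    assert_eq!(encode_sample(), vec![0x08, 0x96, 0x01, 0x12, 0x02, 0x61, 0x62]);
}
```

Any standard Protobuf implementation (including Prost) decodes these bytes back to the same field values, which is what the wire-compatibility claim above amounts to.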
Nested messages work naturally — a field whose type is itself a `#[proto_message]` struct is encoded as a nested message.
## Enums

Rust enums map to Protobuf `oneof`. Unit variants, tuple variants, and struct variants are all supported.
Generated `.proto`:

```proto
message Event {
  oneof value {
    EventPing ping = 1;
    string message = 2;
    EventTransfer transfer = 3;
    EventBatch batch = 4;
    EventOptional optional = 5;
  }
}

message EventPing {}

message EventTransfer {
  uint64 from = 1;
  uint64 to = 2;
  uint64 amount = 3;
}

message EventBatch {
  repeated uint64 ids = 1;
  repeated string labels = 2;
}

message EventOptional {
  optional uint64 id = 1;
  optional string note = 2;
}
```
## Field attributes
### `#[proto(tag = N)]`

Overrides the auto-assigned field tag.
### `#[proto(skip)]` and `#[proto(skip = "fn_path")]`

Skips a field during encoding. With a function, the field is recomputed on decode.
### `#[proto(treat_as = "Type")]`

Encodes a field using a different type's wire format. Useful for type aliases:

```rust
pub type ComplexMap = BTreeMap<…>; // type parameters elided
```
### `#[proto(into)]`, `#[proto(into_fn)]`, `#[proto(from_fn)]`, `#[proto(try_from_fn)]`

Custom field-level type conversions. Use `try_from_fn` when the conversion can fail (the error type must implement `Into<DecodeError>`).
### `#[proto(import_path = "package")]`

Optional hint for live `.proto` emission — tells the emitter which package to import for an external type. The build-schema system resolves all imports automatically, so this is only needed when using `emit-proto-files` or `PROTO_EMIT_FILE=1`.
## Transparent wrappers

Single-field newtypes can be encoded without additional message framing. The wrapper encodes/decodes as the inner type directly — no extra tag overhead on the wire.
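The framing difference can be shown with a hand-rolled wire sketch — illustration only, not proto_rs code; the `UserId(7)` example value is invented. A transparent newtype field produces the same bytes as its bare inner type, whereas wrapping the value in a one-field message adds a key and a length prefix:

```rust
// Proto3 varint encoder (7 bits per byte, high bit = continuation).
fn write_varint(buf: &mut Vec<u8>, mut v: u64) {
    while v >= 0x80 {
        buf.push((v as u8 & 0x7f) | 0x80);
        v >>= 7;
    }
    buf.push(v as u8);
}

// Transparent encoding: the newtype's inner u64 goes straight into field 1.
fn encode_inner_as_field1(id: u64) -> Vec<u8> {
    let mut buf = Vec::new();
    write_varint(&mut buf, (1 << 3) | 0); // field 1, wire type 0 (varint)
    write_varint(&mut buf, id);
    buf
}

// Message framing: the same value wrapped in a one-field nested message.
fn encode_as_nested_message_field1(id: u64) -> Vec<u8> {
    let inner = encode_inner_as_field1(id);
    let mut buf = Vec::new();
    write_varint(&mut buf, (1 << 3) | 2); // field 1, wire type 2 (len-delimited)
    write_varint(&mut buf, inner.len() as u64);
    buf.extend_from_slice(&inner);
    buf
}

fn main() {
    // Transparent: UserId(7) on the wire == bare u64 7 on the wire.
    assert_eq!(encode_inner_as_field1(7), vec![0x08, 0x07]);
    // Message framing costs two extra bytes here (key + length).
    assert_eq!(encode_as_nested_message_field1(7), vec![0x0a, 0x02, 0x08, 0x07]);
}
```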
## Generics

Generic structs work out of the box — a generic `Pair` encodes and decodes like any other message. For `.proto` generation, concrete types are substituted for the type parameters.
## Custom type conversions (sun)

The `sun` attribute maps native Rust types to a proto shadow struct. The shadow handles encoding/decoding while your domain type stays clean — the domain type is never proto-aware; the shadow owns the wire format.

A single shadow can serve multiple domain types by implementing `ProtoShadowEncode` + `ProtoShadowDecode` for each `sun` target.
## Zero-copy IR encoding (sun_ir)

For types with expensive-to-clone fields, `sun_ir` provides a reference-based intermediate representation (IR) that avoids cloning during encoding:

- The IR struct holds references — no cloning on encode
- The encode path borrows everything, with zero allocations
- The decode path moves owned data
## Getters

When the IR struct's fields don't map 1:1 to the proto struct, use `getter` to specify how to access values from the IR. The `$` placeholder refers to the IR struct instance. Fields with the same name and type are resolved automatically without a getter.
## Validation

Validate fields or entire messages on decode. Field validators run after each field is decoded; message validators run after all fields are decoded. Both return `Result<(), DecodeError>`.

With the `tonic` feature, `validator_with_ext` gives access to `tonic::Extensions` for request-scoped validation.
## RPC services

Define gRPC services as Rust traits. The macro generates Tonic server and client implementations.

The macro parses your trait methods and generates both the server trait and the client struct. Return types are flexible — you can use or omit `Result` and `Response` wrappers depending on what makes sense semantically. The macro unwraps `Result`, `Response`, `Box`, `Arc`, and `ZeroCopy` layers automatically to determine the proto message type for the generated `.proto` definition — you get clean trait signatures without affecting the wire format.
### Server implementation

Implement the generated server trait for your service type, then start the server:

```rust
// Start server (service and address construction elided)
builder
    .add_service(/* … */)
    .serve(/* … */)
    .await?;
```
### Generated client

The generated client methods accept any type that implements `ProtoRequest<T>` — not just `Request<T>`. This means you can pass:

- Bare values: `client.echo(Ping { id: 1 })` — auto-wrapped in `Request`
- Wrapped requests: `client.echo(Request::new(Ping { id: 1 }))` — passed through
- Zero-copy: `client.echo(ping.to_zero_copy())` — sent as pre-encoded bytes

```rust
// Client construction elided
let mut client = connect(/* … */).await?;

// All three are equivalent — pass whatever is convenient:
let r1 = client.echo(Ping { id: 1 }).await?;
let r2 = client.echo(Request::new(Ping { id: 1 })).await?;
let r3 = client.echo(ping.to_zero_copy()).await?;
```

The generated method signature is generic over `ProtoRequest<T>`. This bound is what makes all three call styles work — `ProtoRequest<T>` is implemented for `T`, `Request<T>`, `ZeroCopy<T>`, and `Request<ZeroCopy<T>>`.
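The call-style flexibility comes from a blanket-impl pattern. Here is a minimal self-contained sketch of that pattern — the `Request`, `ProtoRequest`, and `Ping` types below are toy stand-ins, not the real tonic/proto_rs definitions:

```rust
// Toy request wrapper, standing in for tonic::Request.
struct Request<T>(T);

impl<T> Request<T> {
    fn new(inner: T) -> Self {
        Request(inner)
    }
}

// Toy version of the ProtoRequest abstraction.
trait ProtoRequest<T> {
    fn into_message(self) -> T;
}

// Bare values are accepted and auto-wrapped…
impl<T> ProtoRequest<T> for T {
    fn into_message(self) -> T {
        self
    }
}

// …and pre-wrapped requests pass through. No overlap with the impl above:
// unifying them would require T == Request<T>, which no finite type satisfies.
impl<T> ProtoRequest<T> for Request<T> {
    fn into_message(self) -> T {
        self.0
    }
}

struct Ping {
    id: u64,
}

// A client method generic over ProtoRequest, like the generated ones.
fn echo<R: ProtoRequest<Ping>>(req: R) -> u64 {
    req.into_message().id
}

fn main() {
    assert_eq!(echo(Ping { id: 1 }), 1);               // bare value
    assert_eq!(echo(Request::new(Ping { id: 2 })), 2); // wrapped request
}
```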
### RPC imports

Optional import hints for live `.proto` emission. The build-schema system resolves all imports automatically — `#[proto_imports]` is only needed when using `emit-proto-files` or `PROTO_EMIT_FILE=1`.
### RPC client interceptors

`rpc_client_ctx` adds a generic `Ctx` parameter to the generated client, enabling per-request middleware (auth tokens, tracing headers, rate limiting, etc.).

Define an interceptor trait and wire it to the service. The generated client becomes `SecureServiceClient<T, Ctx>` where `Ctx: AuthInterceptor`. Each method gains an extra first parameter for the interceptor payload, with the bound `I: Into<Ctx::Payload>`. This means you can pass any type that converts into the payload — not just the payload type itself: a `String` directly, or anything that implements `Into<String>`.

Multiple services can share the same interceptor trait with different concrete implementations.
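The `I: Into<Ctx::Payload>` bound in miniature — a self-contained sketch, not generated proto_rs code; the `protected` function and its header format are invented for illustration:

```rust
// A method generic over Into<String>, mirroring the generated client's
// I: Into<Ctx::Payload> bound with Payload = String.
fn protected<I: Into<String>>(token: I) -> String {
    let token: String = token.into();
    format!("authorization: Bearer {token}")
}

fn main() {
    // Pass a &str directly…
    assert_eq!(protected("abc"), "authorization: Bearer abc");
    // …or an owned String — anything that implements Into<String>.
    assert_eq!(protected(String::from("xyz")), "authorization: Bearer xyz");
}
```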
## Zero-copy encoding

Pre-encode a message and reuse the bytes:

```rust
let msg = Pong { /* fields elided */ };
let zc: ZeroCopy<Pong> = msg.to_zero_copy();

// Access raw bytes without re-encoding
let bytes: &[u8] = zc.as_bytes();

// Use in Tonic responses — sent without re-encoding
Ok(/* response built from zc, elided */)
```
## Built-in type support

### Primitives

`bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `usize`, `isize`, `String`, `Vec<u8>`, `bytes::Bytes`

Narrow types (`u8`, `u16`, `i8`, `i16`) are widened on the wire to `uint32`/`int32`, with overflow validation on decode.
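The widen-then-validate behavior can be sketched in a few lines of plain Rust — this is an illustration of the documented semantics, not proto_rs internals:

```rust
// Proto3 varint encoder for a uint32 value.
fn encode_u32_varint(mut v: u32) -> Vec<u8> {
    let mut out = Vec::new();
    while v >= 0x80 {
        out.push((v as u8 & 0x7f) | 0x80);
        v >>= 7;
    }
    out.push(v as u8);
    out
}

// Proto3 varint decoder (assumes well-formed input for brevity).
fn decode_u32_varint(bytes: &[u8]) -> u32 {
    let mut v = 0u32;
    for (i, b) in bytes.iter().enumerate() {
        v |= ((b & 0x7f) as u32) << (7 * i);
        if b & 0x80 == 0 {
            break;
        }
    }
    v
}

// A u8 field travels as uint32; values above u8::MAX are rejected, not truncated.
fn decode_u8(bytes: &[u8]) -> Result<u8, String> {
    let wide = decode_u32_varint(bytes);
    u8::try_from(wide).map_err(|_| format!("value {wide} overflows u8"))
}

fn main() {
    assert_eq!(decode_u8(&encode_u32_varint(200)), Ok(200));
    assert!(decode_u8(&encode_u32_varint(300)).is_err());
}
```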
### Atomics

All `std::sync::atomic` types: `AtomicBool`, `AtomicU8`, `AtomicU16`, `AtomicU32`, `AtomicU64`, `AtomicUsize`, `AtomicI8`, `AtomicI16`, `AtomicI32`, `AtomicI64`, `AtomicIsize`

Atomic types encode and decode using `Ordering::Relaxed`.
### NonZero types

All `core::num::NonZero*` types: `NonZeroU8`, `NonZeroU16`, `NonZeroU32`, `NonZeroU64`, `NonZeroUsize`, `NonZeroI8`, `NonZeroI16`, `NonZeroI32`, `NonZeroI64`, `NonZeroIsize`

The default value is `MAX` (not zero). Decoding zero returns an error.
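A sketch of these documented NonZero semantics in plain Rust — the function names are illustrative, and a `String` stands in for the library's `DecodeError`:

```rust
use core::num::NonZeroU32;

// Zero on the wire is a decode error, never silently coerced.
fn decode_nonzero_u32(wire: u32) -> Result<NonZeroU32, String> {
    NonZeroU32::new(wire).ok_or_else(|| "NonZero field decoded as zero".to_string())
}

// The documented default is MAX rather than zero (zero is unrepresentable).
fn default_nonzero_u32() -> NonZeroU32 {
    NonZeroU32::MAX
}

fn main() {
    assert!(decode_nonzero_u32(0).is_err());
    assert_eq!(decode_nonzero_u32(7).unwrap().get(), 7);
    assert_eq!(default_nonzero_u32().get(), u32::MAX);
}
```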
### Collections

`Vec<T>`, `VecDeque<T>`, `[T; N]`, `HashMap<K, V>`, `BTreeMap<K, V>`, `HashSet<T>`, `BTreeSet<T>`

### Smart pointers

`Box<T>`, `Arc<T>`, `Option<T>`

### Unit type

`()` maps to `google.protobuf.Empty`.
## Wrapper types

Feature-gated wrapper types are encoded transparently:

| Type | Feature | Description |
|---|---|---|
| `ArcSwap<T>` | `arc_swap` | Lock-free atomic pointer |
| `ArcSwapOption<T>` | `arc_swap` | Optional atomic pointer |
| `CachePadded<T>` | `cache_padded` | Cache-line aligned value |
| `parking_lot::Mutex<T>` | `parking_lot` | Fast mutex |
| `parking_lot::RwLock<T>` | `parking_lot` | Fast read-write lock |
| `std::sync::Mutex<T>` | (always) | Standard mutex |
| `papaya::HashMap<K,V>` | `papaya` | Lock-free concurrent map |
| `papaya::HashSet<T>` | `papaya` | Lock-free concurrent set |
## Third-party integrations

### Chrono (`chrono` feature)

`DateTime<Utc>` and `TimeDelta` encode as `(i64 secs, u32 nanos)`.

### Fastnum (`fastnum` feature)

`D128`, `D64`, and `UD128` encode as split integer components.
### Solana (`solana` feature)

Native support for Solana SDK types:

| Type | Proto representation |
|---|---|
| `Address` | `bytes` (32 bytes) |
| `Signature` | `bytes` (64 bytes) |
| `Hash` | `bytes` (32 bytes) |
| `Keypair` | `bytes` |
| `Instruction` | message with `Address program_id`, repeated `AccountMeta`, `bytes data` |
| `AccountMeta` | message with `Address pubkey`, `bool is_signer`, `bool is_writable` |
| `InstructionError` | oneof with all error variants |
| `TransactionError` | oneof with all error variants |
`Instruction` uses `sun_ir` for zero-copy encoding — account lists and data are borrowed, not cloned.

### Teloxide (`teloxide` feature)

`teloxide_core::types::UserId` is supported as a primitive.

### Hashers (`ahash` feature)

`ahash::RandomState` and `std::hash::RandomState` are supported for `HashMap`/`HashSet` construction.
## Schema registry and emission

proto_rs includes a build system that collects all proto schemas at compile time using the `inventory` crate. Every `#[proto_message]` and `#[proto_rpc]` macro invocation automatically registers its schema. `write_all()` gathers all registered schemas across your entire workspace (and the whole dependency tree) and generates two outputs:

- `.proto` files — valid proto3 definitions with resolved imports and package structure
- Rust client module — optional generated Rust code with `#[proto_message]`/`#[proto_rpc]` attributes, ready for use by downstream consumers who depend on proto_rs but don't have access to your original types
### Emitting .proto files

Proto files are written only when explicitly enabled:

- Cargo feature: `emit-proto-files`
- Environment variable: `PROTO_EMIT_FILE=1` (overrides the feature flag)
- Disable override: `PROTO_EMIT_FILE=0`
### Build-time schema collection

With the `build-schemas` feature, collect all proto schemas across your workspace and write them to disk.
### Rust client generation

`RustClientCtx` controls whether and how a Rust client module is generated alongside `.proto` files. The generated module mirrors your proto package hierarchy as nested Rust `pub mod` blocks, with each type annotated by `#[proto_message]` or `#[proto_rpc]`.
### Import substitution (with_imports)

When you provide imports, the build system auto-substitutes matching types in the generated client. If a generated type name matches an imported type, it is replaced with the import — the struct definition is omitted and all references use the imported path instead.

This is useful when consumers already have types (like `fastnum::UD128`, `solana_address::Address`, or `chrono::DateTime`) and want the generated client to reference those directly rather than re-generating wrapper structs.

```rust
let ctx = enabled
    .with_imports(/* import list elided */);
```

Before (without the import): the build system generates a `pub struct UD128 { ... }` in the client. After (with the import): the client emits `use fastnum::UD128;` and references `UD128` directly — no struct is generated. Aliased imports are also supported via `with_imports`.
### Module type attributes (type_attribute)

Apply `#[derive(...)]` or other attributes to all types within a module:

```rust
let ctx = enabled
    .type_attribute(/* … */)
    .type_attribute(/* … */);
```

Duplicate derive entries are automatically merged — overlapping calls produce a single `#[derive(Clone, Debug, PartialEq)]` on every type in `goon_types`.
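The merge behavior amounts to an order-preserving deduplication of derive lists — a self-contained sketch of the semantics, not proto_rs internals, with an invented `merge_derives` helper:

```rust
// Merge several comma-separated derive lists into one deduplicated
// #[derive(...)] attribute, keeping first-seen order.
fn merge_derives(lists: &[&str]) -> String {
    let mut seen: Vec<String> = Vec::new();
    for list in lists {
        for item in list.split(',').map(str::trim) {
            if !seen.iter().any(|s| s.as_str() == item) {
                seen.push(item.to_string());
            }
        }
    }
    format!("#[derive({})]", seen.join(", "))
}

fn main() {
    // Two overlapping type_attribute calls collapse into one derive list.
    let merged = merge_derives(&["Clone, Debug", "Debug, PartialEq"]);
    assert_eq!(merged, "#[derive(Clone, Debug, PartialEq)]");
}
```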
### Per-type and per-field attributes (add_client_attrs, remove_type_attribute)

Add or remove attributes on individual types, fields, or RPC methods:

```rust
let ctx = enabled
    // Add an attribute to a specific type
    .add_client_attrs(/* … */)
    // Add an attribute to a specific field
    .add_client_attrs(/* … */)
    // Add a module-level attribute
    .add_client_attrs(/* … */)
    // Remove a specific derive from a type (e.g., remove Clone from BuildRequest)
    .remove_type_attribute(/* … */);
```
### Type replacement (replace_type)

Replace types in the generated client — useful for substituting proto types with domain-specific types in struct fields or RPC method signatures:

```rust
let ctx = enabled
    .replace_type(/* … */);
```
### Custom statements (with_statements)

Inject arbitrary Rust statements into a specific module:

```rust
let ctx = enabled
    .with_statements(/* … */);
```
### Split module output (split_module)

For large codebases, split specific modules into separate files instead of bundling everything into a single output file:

```rust
let ctx = enabled
    .split_module(/* … */);
```

The `atomic_types` module is written to `src/client_atomic_types.rs` and excluded from the main `src/client.rs`. All other modules remain in the main output.
### Type handling in generated output

The build system automatically handles special Rust types when generating client code:

| Rust source type | Proto output | Rust client output |
|---|---|---|
| `AtomicBool` | `bool` | `bool` |
| `AtomicU8`, `AtomicU16`, `AtomicU32` | `uint32` | `u8`, `u16`, `u32` |
| `AtomicU64`, `AtomicUsize` | `uint64` | `u64` |
| `AtomicI8`, `AtomicI16`, `AtomicI32` | `int32` | `i8`, `i16`, `i32` |
| `AtomicI64`, `AtomicIsize` | `int64` | `i64` |
| `NonZeroU8`, `NonZeroU16`, `NonZeroU32` | `uint32` | `::core::num::NonZeroU8`, etc. |
| `NonZeroU64`, `NonZeroUsize` | `uint64` | `::core::num::NonZeroU64` |
| `NonZeroI8`, `NonZeroI16`, `NonZeroI32` | `int32` | `::core::num::NonZeroI8`, etc. |
| `NonZeroI64`, `NonZeroIsize` | `int64` | `::core::num::NonZeroI64` |
| `Mutex<T>`, `Arc<T>`, `Box<T>` | inner type | inner type (unwrapped) |
| `Vec<T>`, `VecDeque<T>` | `repeated T` | `Vec<T>` |
| `HashMap<K,V>`, `BTreeMap<K,V>` | `map<K,V>` | `HashMap<K,V>` |
| `Option<T>` | `optional T` | `Option<T>` |
Atomic types are unwrapped to their inner primitives (they are a runtime concern). NonZero types preserve their NonZero semantics in the Rust client since the non-zero constraint is meaningful for downstream consumers.
### Macro import tracking

The build system tracks which macros each module actually uses and emits only the necessary imports. Modules containing only structs/enums import `proto_message`; modules with only services import `proto_rpc`; modules with both import both. No `#[allow(unused_imports)]` suppression is needed.
### Custom proto definitions

`#[proto_dump]` emits standalone proto definitions. `inject_proto_import!` adds import hints to generated `.proto` files. Both are optional — the build-schema system resolves all imports automatically. They are only needed when using live `.proto` emission (`emit-proto-files` or `PROTO_EMIT_FILE=1`).
## Feature flags

| Feature | Default | Description |
|---|---|---|
| `tonic` | yes | Tonic gRPC integration: codecs, service/client generation |
| `stable` | no | Compile on stable Rust (boxes async futures) |
| `build-schemas` | no | Compile-time schema registry via `inventory` |
| `emit-proto-files` | no | Write `.proto` files during compilation |
| `chrono` | no | `DateTime<Utc>`, `TimeDelta` support |
| `fastnum` | no | `D128`, `D64`, `UD128` decimal support |
| `solana` | no | Solana SDK types (`Address`, `Instruction`, errors, etc.) |
| `solana_address_hash` | no | Solana address hasher support |
| `teloxide` | no | Telegram bot types |
| `ahash` | no | AHash hasher for collections |
| `arc_swap` | no | `ArcSwap<T>` wrapper |
| `cache_padded` | no | `CachePadded<T>` wrapper |
| `parking_lot` | no | `parking_lot::Mutex<T>`, `RwLock<T>` |
| `papaya` | no | Lock-free concurrent HashMap/HashSet |
| `block_razor` | no | Block Razor RPC integration |
| `jito` | no | Jito RPC integration |
| `bloxroute` | no | Bloxroute RPC integration |
| `next_block` | no | NextBlock RPC integration |
| `no-recursion-limit` | no | Disable decode recursion depth checking |
## Stable toolchain

The crate defaults to nightly for `impl Trait` in associated types, giving zero-cost futures in generated RPC services. Enable the `stable` feature to compile on stable Rust — this boxes async futures (one allocation per RPC call) but keeps the API identical:

```toml
[dependencies]
proto_rs = { version = "0.11", features = ["stable"] }
```
## Reverse encoding

The encoder writes in a single reverse pass (upb-style). Fields are emitted payload-first, then prefixed with lengths and tags. This avoids precomputing message sizes and produces deterministic output. The `RevWriter` trait powers this:

- `TAG == 0` encodes a root payload with no field key or length prefix
- `TAG != 0` prefixes the field key (and length for length-delimited payloads)
- Fields and repeated elements are emitted in reverse order
- `RevWriter::finish_tight()` returns the buffer without slack
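The payload-first idea can be demonstrated with a minimal back-to-front writer — a self-contained sketch of the technique, not the real `RevWriter` (only the `finish_tight` name is borrowed from the list above):

```rust
// Minimal reverse writer: bytes go into the tail of a fixed buffer, so a
// length-delimited field's payload is written first and its length + key are
// prefixed afterwards — the length comes from cursor movement, not a pre-pass.
struct RevWriter {
    buf: Vec<u8>,
    pos: usize, // write cursor, moves toward the front
}

impl RevWriter {
    fn new(cap: usize) -> Self {
        RevWriter { buf: vec![0; cap], pos: cap }
    }

    fn write_bytes(&mut self, bytes: &[u8]) {
        for &b in bytes.iter().rev() {
            self.pos -= 1;
            self.buf[self.pos] = b;
        }
    }

    fn write_varint(&mut self, mut v: u64) {
        // Build the varint forward, then prepend it as a whole.
        let mut tmp = Vec::new();
        while v >= 0x80 {
            tmp.push((v as u8 & 0x7f) | 0x80);
            v >>= 7;
        }
        tmp.push(v as u8);
        self.write_bytes(&tmp);
    }

    fn finish_tight(self) -> Vec<u8> {
        self.buf[self.pos..].to_vec() // drop the unused slack at the front
    }
}

fn main() {
    // Encode `string item = 2` ("ab") in reverse: payload, length, then key.
    let mut w = RevWriter::new(16);
    let before = w.pos;
    w.write_bytes(b"ab");
    let payload_len = (before - w.pos) as u64; // measured, never precomputed
    w.write_varint(payload_len);
    w.write_varint((2 << 3) | 2);
    // Identical bytes to a conventional forward encoder.
    assert_eq!(w.finish_tight(), vec![0x12, 0x02, 0x61, 0x62]);
}
```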
## Benchmarks

The Criterion harness under `benches/bench_runner` includes zero-copy vs clone comparisons and encode/decode micro-benchmarks against Prost.
## Testing
The test suite covers codec roundtrips, cross-library compatibility with Prost, RPC integration, validation, and every supported type.
## License
MIT OR Apache-2.0