# fdkey
FDKEY verification primitives (Rust). Gate AI-agent access to your tools / API behind LLM-only puzzles. Companion to the TypeScript and Python SDKs at https://github.com/fdkey/sdks.
## What this crate ships
The Rust MCP server ecosystem is still consolidating across multiple
community SDKs (rmcp, mcp-server-rs, tower-mcp, etc.) and there is
no single canonical Anthropic-blessed Rust MCP SDK to wrap. This crate
intentionally exposes primitives so you can plug FDKEY into whichever
MCP server library you use — or your plain HTTP service:
- `Verifier` — bundles `JwtVerifier` + `VpsClient`. The canonical entry point.
- `JwtVerifier` — Ed25519 JWT verification against the cached `/.well-known/fdkey.json`.
- `VpsClient` — `POST /v1/challenge` and `POST /v1/submit` to `api.fdkey.com`.
- `WellKnownClient` — `HashMap<kid, DecodingKey>` cached for 1 hour, refreshed on miss.
- `guard::{can_call, mark_verified, consume_policy}` — pure per-session policy logic, identical to the TypeScript and Python SDKs.
The wire shape (challenge / submit JSON, JWT claims) matches the other SDKs exactly — the FDKEY VPS doesn't know which language called it.
## Install
Get an API key at app.fdkey.com.
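Assuming the crate is published under the name `fdkey` (the version below is illustrative; check crates.io for the current release), add it to your manifest:

```toml
[dependencies]
fdkey = "0.1"   # version illustrative — check crates.io for the current release
```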
## Usage — verify a Bearer JWT in any HTTP service
```rust
use fdkey::{FdkeyConfig, Verifier};

// Sketch — constructor and claim field names are illustrative; see the crate docs.
async fn authorize(verifier: &Verifier, bearer_jwt: &str) -> Result<(), Box<dyn std::error::Error>> {
    // Ed25519 verification against the cached /.well-known/fdkey.json keys.
    let claims = verifier.verify_token(bearer_jwt).await?;
    println!("score={} tier={}", claims.score, claims.tier);
    Ok(())
}
```
## Usage — fetch + submit a challenge programmatically
```rust
use fdkey::{ChallengeMeta, Verifier, VpsClient};

// Sketch — method names are illustrative; the calls map to POST /v1/challenge
// and POST /v1/submit on api.fdkey.com.
async fn challenge_flow(vps: &VpsClient, verifier: &Verifier) -> Result<(), Box<dyn std::error::Error>> {
    let challenge: ChallengeMeta = vps.challenge().await?;
    let answer: String = todo!("have the agent solve the challenge");
    let result = vps.submit(&challenge, &answer).await?;
    let jwt = result.jwt.expect("submit succeeded");
    let claims = verifier.verify_token(&jwt).await?;
    // Persist verification state server-side; never return `jwt` to the agent.
    let _ = claims;
    Ok(())
}
```
## Per-session policy gating
The crate exposes the same three policy variants the TS and Python SDKs support — pick the one that fits each tool / route:
```rust
use fdkey::Policy;

Policy::EachCall       // verify on every invocation (irreversible actions)
Policy::OncePerSession // verify once per connection (signup-style flows)
Policy::EveryMinutes   // verification valid for N minutes after solve;
                       // does NOT extend on subsequent calls
```
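The `EveryMinutes` semantics can be sketched in plain std Rust — the window starts at solve time and is not extended by later calls. The `Session` type and function below are illustrative, not the crate's own:

```rust
use std::time::{Duration, Instant};

// Illustrative session state: when (if ever) the agent last solved a challenge.
struct Session {
    verified_at: Option<Instant>,
}

// EveryMinutes check: a call is allowed only while `now` falls inside the
// fixed window that opened at solve time. Calling does not refresh the window.
fn can_call_every_minutes(s: &Session, minutes: u64, now: Instant) -> bool {
    match s.verified_at {
        Some(solved) => now.duration_since(solved) < Duration::from_secs(minutes * 60),
        None => false,
    }
}
```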
Plug the per-session state machine the TS / Python SDKs use into your MCP server's tool-call dispatch:
```rust
use fdkey::{guard::{can_call, mark_verified}, Policy};

// Illustrative names — check the crate docs for exact signatures.
let mut session = fdkey::SessionState::default();
let policy = Policy::EachCall;

// On every tool call:
if can_call(&session, &policy) {
    // dispatch the tool handler
} else {
    // return the FDKEY challenge to the agent instead
}

// When the FDKEY submit step succeeds:
mark_verified(&mut session);
```
## Configuration reference
### FdkeyConfig
### Failure-mode defaults
on_vps_error: FailMode::Allow is the default — if the FDKEY scoring
service is unreachable, your dispatch should fall through to the
unprotected handler rather than rejecting traffic. We chose this so an
FDKEY outage doesn't brick integrator workflows (e.g. if we shut down
the service or api.fdkey.com is unreachable). FDKEY is verification,
not gating — your service should still serve traffic when ours is down.
Set on_vps_error: FailMode::Block if you'd rather reject unverified
callers during an outage. The crate exposes this as a config field;
the actual fail-open behavior is up to your dispatch implementation
(see "Per-session policy gating" above).
## Security notes — integrator obligations
Because this crate ships primitives rather than a single framework wrapper, three protections that the TypeScript and Python sibling SDKs enforce in code are YOUR responsibility in Rust. Skipping any of them leaves your service open to abuse the wire format already protects against.
### 1. NEVER return the JWT to the agent
The example in "Usage — fetch + submit a challenge programmatically" above does:
```rust
let jwt = result.jwt.expect("submit succeeded");
let claims = verifier.verify_token(&jwt).await?;
```
After that line, the JWT is a server-side verification artifact —
discard it. Persist { verified_at, score: claims.score, tier: claims.tier } in your session store; surface only that to the agent.
If you echo the JWT back in your HTTP response or in a tool result,
you've handed the agent a bearer token it can replay against any
other FDKEY-protected service within the JWT's lifetime
(~5 min default).
The TypeScript reference at
@fdkey/http implements
the same flow correctly — its session-mediated design is the
canonical pattern. Mirror it.
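The server-side record can be sketched like this. Field names follow the README's `{ verified_at, score, tier }`; the types and helper function are illustrative:

```rust
use std::time::SystemTime;

// What to persist after a successful verify — never the JWT itself.
struct VerificationRecord {
    verified_at: SystemTime,
    score: f64,   // type illustrative
    tier: String, // type illustrative
}

// Consume the JWT by value so it cannot be echoed back afterwards.
fn record_and_discard(_jwt: String, score: f64, tier: String) -> VerificationRecord {
    VerificationRecord { verified_at: SystemTime::now(), score, tier }
}
```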
### 2. Use UUIDs (or other non-reusable identifiers) for session keys
Don't key your session store on raw pointer addresses, Box::into_raw
casts, or any other identity that can be reused after a session is
dropped. CPython has the same problem (see the Python SDK's
_SessionKeyTracker for the parallel mitigation); in Rust the risk
is smaller (the borrow checker prevents most aliasing) but pointer
reuse is still a real foot-gun for raw-pointer-based session maps.
Generate a fresh UUID per session (uuid::Uuid::new_v4()) and store
the mapping in a structure that guarantees no two live sessions can
share the same key.
### 3. Bound your session store with TTL + LRU eviction
A naive HashMap<SessionId, SessionState> grows forever as agents
connect and drift away. On a long-lived multi-tenant server this is a
memory leak you'll only notice in production.
The TypeScript sibling ships InMemorySessionStore at
mcp-integration/sdks/http/src/session-store.ts with a 1-hour idle
TTL and a 10 000-entry hard cap. Port the pattern to Rust — sweep on
access, drop the oldest LRU entry on insert when at the cap, no
background timer needed. ~40 lines.
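A std-only port of that pattern might look like the following — 1-hour idle TTL, hard cap, sweep on access, evict the least-recently-accessed entry on insert when full. Names and the O(n) eviction scan are illustrative (fine at a 10 000-entry cap), not the crate's API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Bounded session store: idle-TTL sweep on every access, LRU eviction on
// insert at capacity, no background timer.
struct SessionStore<V> {
    map: HashMap<String, (Instant, V)>, // key -> (last_access, state)
    ttl: Duration,
    cap: usize,
}

impl<V> SessionStore<V> {
    fn new(ttl: Duration, cap: usize) -> Self {
        Self { map: HashMap::new(), ttl, cap }
    }

    fn get(&mut self, key: &str, now: Instant) -> Option<&V> {
        self.sweep(now);
        match self.map.get_mut(key) {
            Some(slot) => {
                slot.0 = now; // touch: refresh last-access time
                Some(&slot.1)
            }
            None => None,
        }
    }

    fn insert(&mut self, key: String, value: V, now: Instant) {
        self.sweep(now);
        if self.map.len() >= self.cap && !self.map.contains_key(&key) {
            // At capacity: drop the least-recently-accessed entry (O(n) scan).
            if let Some(oldest) = self
                .map
                .iter()
                .min_by_key(|(_, (t, _))| *t)
                .map(|(k, _)| k.clone())
            {
                self.map.remove(&oldest);
            }
        }
        self.map.insert(key, (now, value));
    }

    fn sweep(&mut self, now: Instant) {
        let ttl = self.ttl;
        self.map.retain(|_, (t, _)| now.duration_since(*t) < ttl);
    }
}
```

Passing `now` explicitly keeps the store deterministic and easy to test; production code would call `Instant::now()` at the call site.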
### 4. JWT `aud` is not validated by the SDK
The audience claim binds the JWT to the integrator's `vps_users.id`,
which the SDK doesn't know at verify time. The VPS already binds
aud to the API key that requested the challenge — defense in
depth — but in principle, a JWT issued for one FDKEY-protected
service could be replayed against a different one within the JWT
lifetime (~5 min default). Keep the JWT lifetime short on the VPS
side if your threat model includes cross-integrator replay.
## Links
- Marketing + docs: https://fdkey.com
- Dashboard (sign up + manage keys): https://app.fdkey.com
- Source: https://github.com/fdkey/sdks
- Issues: https://github.com/fdkey/sdks/issues
## License
MIT — see LICENSE.