Silk
A Merkle-CRDT graph engine for distributed, conflict-free knowledge graphs.
Silk is an embedded graph database with automatic conflict resolution. Built on Merkle-DAGs and CRDTs, it requires no leader, no consensus protocol, and no coordinator. Any two Silk instances that exchange sync messages are guaranteed to converge to the same graph state. Schema is enforced at write time via an ontology — not at query time.
Quick Start
Python
# Define your schema (illustrative ontology shape — see DESIGN.md for the exact format)
ontology = {
    "node_types": {"person": {"required": ["name"]}, "company": {"required": ["name"]}},
    "edge_types": {"WORKS_AT": {"source": "person", "target": "company"}},
}

# Create two independent stores (imagine different machines)
store_a = GraphStore("a", ontology)
store_b = GraphStore("b", ontology)

# Write to store A
store_a.add_node("alice", "person", {"name": "Alice"})
store_a.add_node("acme", "company", {"name": "Acme"})
store_a.add_edge("e1", "WORKS_AT", "alice", "acme")

# Write to store B (concurrently, no coordination)
store_b.add_node("bob", "person", {"name": "Bob"})

# Sync: A sends to B
payload = store_a.receive_sync_offer(store_b.generate_sync_offer())
store_b.merge_sync_payload(payload)

# Sync: B sends to A
payload = store_b.receive_sync_offer(store_a.generate_sync_offer())
store_a.merge_sync_payload(payload)

# Both stores now have Alice, Bob, Acme, and the WORKS_AT edge
assert store_a.get_node("bob") is not None
assert store_b.get_node("alice") is not None
assert store_a.get_edge("e1") is not None
assert store_b.get_edge("e1") is not None
What just happened
The code above created two independent graph stores, wrote data to each, and synced them — both now hold the same graph. No server, no coordinator, no conflict resolution code. Here's what it looks like:
graph TB
subgraph "Before Sync"
direction LR
subgraph "Store A"
A1["alice (person)"]
A2["acme (company)"]
A1 -->|WORKS_AT| A2
end
subgraph "Store B"
B1["bob (person)"]
end
end
subgraph "After Sync — both stores identical"
direction LR
subgraph "Store A′"
A3["alice (person)"]
A4["acme (company)"]
A5["bob (person)"]
A3 -->|WORKS_AT| A4
end
subgraph "Store B′"
B3["alice (person)"]
B4["acme (company)"]
B5["bob (person)"]
B3 -->|WORKS_AT| B4
end
end
Under the hood, every write becomes a content-addressed entry in a Merkle-DAG. Sync exchanges only the entries the other side is missing:
flowchart LR
W["add_node(...)"] --> E["Entry\n{hash, op, clock, author}"]
E --> O["OpLog\n(Merkle-DAG)"]
O --> G["Materialized\nGraph"]
O --> S["Sync Protocol"]
S <-->|"offer ⇄ payload"| P["Remote Peer"]
P --> O2["Peer OpLog"]
O2 --> G2["Peer Graph"]
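The content-addressing step can be sketched in plain Python. This is illustrative only — Silk hashes MessagePack-encoded entries with BLAKE3, while the sketch below uses the standard library's BLAKE2 and sorted JSON as stand-ins:

```python
import hashlib
import json

def entry_hash(op: dict) -> str:
    """Content address: hash a canonical encoding of the operation.

    Stand-in for Silk's BLAKE3-over-MessagePack hashing.
    """
    encoded = json.dumps(op, sort_keys=True).encode()
    return hashlib.blake2b(encoded, digest_size=16).hexdigest()

op = {"op": "add_node", "id": "alice", "type": "person", "clock": 1}

# The same operation always hashes to the same address...
assert entry_hash(op) == entry_hash(dict(op))

# ...and any change produces a different address, so peers can compare
# hash sets to find exactly which entries the other side is missing.
assert entry_hash(op) != entry_hash({**op, "clock": 2})
```

Because addresses are derived from content, duplicate entries deduplicate for free and a peer can verify any entry it receives by rehashing it.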
Rust
use silk::{GraphStore, Ontology}; // illustrative import paths

let ontology = Ontology::from_json(schema_json)?;
let mut store = GraphStore::new("a", ontology);
store.add_node("alice", "person", props)?; // props: a property map
store.add_edge("e1", "WORKS_AT", "alice", "acme")?;

// Sync with a peer
let offer = store.generate_sync_offer();
let payload = peer.receive_sync_offer(&offer)?;
store.merge_sync_payload(&payload)?;
Features
- Ontology-enforced schema — define node types, edge types, and their properties. Silk validates required properties and type constraints at write time. Unknown properties and subtypes are accepted (D-026: open properties) — the ontology defines the minimum, not the maximum.
- Content-addressed entries — every mutation is a BLAKE3-hashed entry in a Merkle-DAG. Entries are immutable. The DAG is the audit trail.
- Per-property last-writer-wins — two concurrent writes to different properties on the same node both succeed. No data loss from non-conflicting edits.
- Delta-state sync — Bloom filter optimization minimizes data transfer. Only entries the peer doesn't have are sent.
- Graph algorithms — BFS, shortest path, impact analysis, pattern matching, topological sort, cycle detection. Built into the engine, not bolted on.
- Persistent storage — backed by redb (embedded, transactional, pure Rust). In-memory mode also available.
- Real-time subscriptions — register callbacks that fire on every graph mutation (local or merged from sync).
- Observation log — append-only, TTL-pruned time-series store for metrics alongside the graph. Same redb backend.
- Zero runtime dependencies — no Postgres, no Redis, no network required. Silk is a library, not a service.
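The per-property last-writer-wins rule can be illustrated with a minimal merge function. This is a sketch of the concept, not Silk's implementation — Silk orders writes by Lamport clock with a deterministic tie-break, modeled here as tuple comparison on (clock, author):

```python
def lww_merge(a: dict, b: dict) -> dict:
    """Merge two property maps of {prop: (clock, author, value)}.

    Each property resolves independently: higher Lamport clock wins;
    ties break deterministically on author id so all peers agree.
    """
    merged = dict(a)
    for prop, entry in b.items():
        if prop not in merged or entry[:2] > merged[prop][:2]:
            merged[prop] = entry
    return merged

# Concurrent writes to *different* properties both survive the merge
a = {"name": (1, "store-a", "Alice"), "city": (2, "store-a", "Berlin")}
b = {"name": (1, "store-a", "Alice"), "role": (2, "store-b", "Engineer")}
m = lww_merge(a, b)
assert m["city"][2] == "Berlin" and m["role"][2] == "Engineer"

# A true conflict (same property) resolves identically on both sides
x = {"name": (3, "store-a", "Alicia")}
y = {"name": (3, "store-b", "Alice B")}
assert lww_merge(x, y) == lww_merge(y, x)  # converges either way
```

The key property is that merge order doesn't matter: whichever peer applies the entries, the resolved value is the same.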
When to Use Silk
Good fit:
- Local-first applications (offline-capable, sync when connected)
- Edge computing (devices that operate independently, sync periodically)
- Peer-to-peer systems (no central server, any node can sync with any other)
- Knowledge graphs with schema enforcement
- Multi-device sync (phone, laptop, server — all converge)
- Systems that need an audit trail (every change is a Merkle-DAG entry)
Not the right tool:
- High-throughput analytics — use DuckDB or ClickHouse
- SQL queries — use SQLite or Postgres
- Document storage — use MongoDB or CouchDB
- Blob storage — use S3
Schema Philosophy: Open Properties (D-026)
Silk's ontology defines the minimum, not the maximum. You declare node types, edge types, required properties, and type constraints. Silk enforces those. But your application can store any additional properties without changing the ontology.
store = GraphStore("a", ontology)

# Required property "name" is enforced
store.add_node("alice", "person", {"name": "Alice"})  # OK

# Unknown properties are accepted and stored as-is
store.add_node("bob", "person", {"name": "Bob", "age": 30, "team": "core"})
bob = store.get_node("bob")
assert bob["properties"]["age"] == 30  # stored and queryable
assert bob["properties"]["team"] == "core"

# Unknown subtypes are also accepted (subtype syntax illustrative)
store.add_node("eve", "person.contractor", {"name": "Eve"})
assert store.get_node("eve")["type"] == "person.contractor"
What stays enforced:
- Node types must be declared in the ontology
- Edge types must be declared (with source/target type constraints)
- Required properties must be present
- Known property types are validated (if name is declared as a string, it must be a string)
What's open:
- Extra properties on any node or edge (stored without type validation)
- Unknown subtypes (type-level required properties still enforced)
This means your application can evolve its data model without touching the ontology or recreating the store. Add fields, add subtypes, store metadata — Silk doesn't block you.
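A validator following this "floor, not ceiling" rule might look like the sketch below. It is illustrative, not Silk's validation code — it enforces declared types and required properties while letting every extra property pass through:

```python
def validate(node_type: str, props: dict, ontology: dict) -> None:
    """Enforce the ontology floor; extra properties pass untouched (D-026)."""
    spec = ontology["node_types"].get(node_type)
    if spec is None:
        raise ValueError(f"unknown node type '{node_type}'")
    # Required properties must be present
    for name in spec.get("required", []):
        if name not in props:
            raise ValueError(f"missing required property '{name}'")
    # Known property types are checked; unknown properties are not
    for name, expected in spec.get("types", {}).items():
        if name in props and not isinstance(props[name], expected):
            raise ValueError(f"property '{name}' must be {expected.__name__}")

ontology = {"node_types": {"person": {"required": ["name"], "types": {"name": str}}}}

validate("person", {"name": "Alice", "age": 30}, ontology)  # extra prop: fine
try:
    validate("person", {"age": 30}, ontology)  # missing "name"
except ValueError as e:
    assert "name" in str(e)
```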
Architecture
For the full architectural overview — research foundations (Merkle-CRDTs, Delta-state CRDTs, MAPE-K), design principles, and 26 design decisions — see DESIGN.md.
Write (add_node, add_edge, update_property)
│
▼
Entry { hash(BLAKE3), op, clock(Lamport), author, parents }
│
▼
OpLog (append-only Merkle-DAG, content-addressed)
│
├──► MaterializedGraph (live view: nodes, edges, properties)
│ └── Query: get_node, query_by_type, outgoing_edges, bfs, shortest_path
│
└──► Sync Protocol
├── generate_sync_offer() → Bloom filter of known hashes
├── receive_sync_offer() → Entries the peer is missing
└── merge_sync_payload() → Apply remote entries, re-materialize
Convergence guarantee: Two stores that have exchanged sync messages in both directions will have identical materialized graphs. This is a mathematical property of the Merkle-CRDT construction, not an implementation detail.
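The guarantee is easy to see in miniature. This toy model (plain Python, not Silk's code) keeps entries in hash-addressed maps, exchanges only what the other side is missing, and materializes state with a deterministic fold:

```python
def missing_entries(local: dict, peer_hashes: set) -> dict:
    """Delta sync: send only entries the peer doesn't already have.

    (Silk compresses the peer's known-hash set into a Bloom filter.)
    """
    return {h: e for h, e in local.items() if h not in peer_hashes}

def materialize(entries: dict) -> dict:
    """Deterministic fold: apply ops ordered by (clock, author)."""
    graph = {}
    for e in sorted(entries.values(), key=lambda e: (e["clock"], e["author"])):
        graph[e["node"]] = e["props"]
    return graph

a = {"h1": {"clock": 1, "author": "a", "node": "alice", "props": {"name": "Alice"}}}
b = {"h2": {"clock": 1, "author": "b", "node": "bob", "props": {"name": "Bob"}}}

# Exchange deltas in both directions
a.update(missing_entries(b, set(a)))
b.update(missing_entries(a, set(b)))

# Both sides now materialize the identical graph
assert materialize(a) == materialize(b)
```

Because the entry sets converge to the same union and the fold is deterministic, the materialized graphs cannot differ — that is the Merkle-CRDT argument in one paragraph.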
Benchmarks
Measured on Apple M4 Max (16 cores, 128 GB RAM), macOS 15.7, Rust 1.94.0, release build. Run cargo bench --no-default-features on your hardware. For the full analysis — what these numbers mean and why they matter — see WHY.md.
Core Operations
| Operation | Time | Throughput |
|---|---|---|
| Entry create (AddNode) | 449 ns | 2.2M ops/sec |
| Entry serialize (MessagePack) | 289 ns | 3.5M ops/sec |
| Entry deserialize | 957 ns | 1.0M ops/sec |
| BLAKE3 hash verify | 247 ns | 4.0M ops/sec |
Graph Write + Materialize
| Operation | 100 nodes | 1,000 nodes | 10,000 nodes |
|---|---|---|---|
| Add nodes (write + materialize) | 129 µs | 1.5 ms | 16.8 ms |
| Rebuild graph from entries | 20 µs | 278 µs | 2.7 ms |
Graph Algorithms
| Algorithm | 1,000 nodes | 10,000 nodes |
|---|---|---|
| BFS traversal | 564 ns | 580 ns |
| Shortest path | 706 ns | 717 ns |
| Impact analysis (reverse BFS) | 108 ns | 105 ns |
| Pattern match (2-type chain) | 555 µs | 8.1 ms |
Sync Protocol
| Scenario | Time |
|---|---|
| Sync offer (100 nodes) | 24 µs |
| Sync offer (1,000 nodes) | 282 µs |
| Sync offer (10,000 nodes) | 3.3 ms |
| Full transfer (100 nodes, zero overlap) | 111 µs |
| Full transfer (1,000 nodes, zero overlap) | 1.3 ms |
| Incremental sync (900/1000 shared, 10% delta) | 611 µs |
| Partition heal (500 divergent writes per side) | 833 µs |
Python Examples (sync scenarios)
| Scenario | Nodes | Sync time |
|---|---|---|
| Two offline peers converge | 2 x 500 | 5.1 ms |
| Three-peer partition heal | 3 x 200 | 6.6 ms |
| Concurrent property writes | 1 node | 0.06 ms |
| 10-peer ring convergence | 10 x 100 | 51.8 ms (3 rounds) |
Run the examples yourself: python examples/offline_first.py. See all four scenarios in examples/.
Design Decisions
Silk's architecture is driven by 26 explicit design decisions (D-001 through D-026), documented in full in DESIGN.md. Key choices:
| Decision | Choice | Why |
|---|---|---|
| Hash function | BLAKE3 | Fastest cryptographic hash, 128-bit security |
| Serialization | MessagePack | Compact binary, faster than JSON, schema-free |
| Storage | redb | Embedded, transactional, pure Rust, no C dependencies |
| Clock | Lamport | Sufficient for causality ordering without wall-clock sync |
| Conflict resolution | Per-property LWW | Non-conflicting concurrent writes both win |
| Sync | Delta-state + Bloom | Minimize transfer: only send what the peer lacks |
| Schema | Open properties (D-026) | Ontology is the floor, not the ceiling — unknown properties accepted |
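The Lamport-clock choice in the table above can be sketched as follows (illustrative; Silk's actual clock is stamped into each entry):

```python
class LamportClock:
    """Logical clock: orders causally related writes
    without requiring synchronized wall clocks."""

    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        """Local event (e.g. a write): advance and stamp."""
        self.time += 1
        return self.time

    def observe(self, remote_time: int) -> int:
        """Merge a remote entry: jump past its timestamp."""
        self.time = max(self.time, remote_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.tick()        # write on A
t2 = b.observe(t1)   # B merges A's entry
t3 = b.tick()        # later write on B
assert t1 < t2 < t3  # causal order preserved, no wall-clock sync needed
```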
Python API Reference
GraphStore
# Signatures below are illustrative — argument names may differ slightly

# Construction
store = GraphStore("store-id", ontology)       # new store
store = GraphStore.open("store.redb")          # existing store

# Mutations
store.add_node(node_id, node_type, properties)
store.add_edge(edge_id, edge_type, source_id, target_id)
store.update_property(node_id, name, value)

# Queries
store.get_node(node_id)                        # dict | None
store.get_edge(edge_id)                        # dict | None
store.all_nodes()                              # list[dict]
store.all_edges()                              # list[dict]
store.query_nodes_by_type(node_type)           # list[dict]
store.query_nodes_by_property(name, value)     # list[dict] (method name illustrative)
store.outgoing_edges(node_id)                  # list[dict]
store.incoming_edges(node_id)                  # list[dict]

# Graph algorithms
store.bfs(start_id)
store.shortest_path(source_id, target_id)
store.pattern_match(types, max_results=1000)

# Sync
offer = store.generate_sync_offer()            # bytes
payload = store.receive_sync_offer(offer)      # bytes
count = store.merge_sync_payload(payload)      # int (entries merged)
snapshot = store.export_snapshot()             # bytes (full state; method name illustrative)

# Subscriptions
sub = store.subscribe(callback)                # callback(event_dict)
ObservationLog
log = ObservationLog(...)                      # append-only, TTL-pruned metrics store
Return Value Reference
Methods like get_node() and get_edge() return plain dicts. The shapes below are illustrative — inspect the objects you get back for the exact field names:
Node (get_node(), all_nodes(), query_nodes_by_*):
    {"id": "alice", "type": "person", "properties": {"name": "Alice"}}
Edge (get_edge(), all_edges(), outgoing_edges(), incoming_edges()):
    {"id": "e1", "type": "WORKS_AT", "source": "alice", "target": "acme", "properties": {}}
Subscription callback event (subscribe(callback)):
    {"op": "add_node", "id": "alice", "author": "store-a", "clock": 7}
Error Handling
Silk uses Python's built-in exception types. Error messages are descriptive but must be matched as strings in v0.1 (custom exception classes planned for v0.2).
# Validation error — bad schema, unknown type, missing required property
try:
    store.add_node("x1", "spaceship", {})
except ValueError as e:
    print(e)  # "unknown node type 'spaceship'"

# I/O error — can't create or open store file
store = GraphStore("a", ontology, path="/no/such/dir/store.redb")  # raises IOError

# Sync error — corrupted or incompatible payload
store.merge_sync_payload(b"not a payload")  # raises ValueError
| Exception | When |
|---|---|
| ValueError | Invalid ontology, unknown node/edge type, missing required property, bad sync payload, invalid hash |
| IOError | Store file can't be created/opened, redb I/O failure |
| RuntimeError | Corrupted store (no genesis), snapshot with no entries |
Persistence
In-Memory vs Persistent
| Mode | Constructor | Durability | Use case |
|---|---|---|---|
| In-memory | GraphStore("id", ontology) | Lost on process exit | Tests, ephemeral processing, short-lived computations |
| Persistent | GraphStore("id", ontology, path="store.redb") | Durable (redb ACID) | Production, anything that must survive restarts |
Crash Recovery
Persistent stores use redb, which provides ACID transactions. Each write (add_node, add_edge, update_property) is committed to disk in its own transaction before the method returns.
If the process crashes:
- Completed writes are durable — they survived the crash
- In-flight writes are rolled back by redb's transaction recovery on next open
- No manual recovery needed — GraphStore.open(path) replays the entry log and rebuilds the materialized graph
# Persistent store — survives crashes
store = GraphStore("notes", ontology, path="notes.redb")
store.add_node("n1", "note", {"title": "First"})
# At this point, n1 is on disk. Kill -9 the process — it's safe.

# Reopen after crash
store = GraphStore.open("notes.redb")
assert store.get_node("n1") is not None  # still there
Scalability
Silk keeps the full graph in memory (OpLog + MaterializedGraph). Practical limits depend on available RAM.
| Graph size | Memory (approx) | Write throughput | Full sync |
|---|---|---|---|
| 1K nodes | ~5 MB | 670K nodes/sec | 1.3 ms |
| 10K nodes | ~50 MB | 595K nodes/sec | ~13 ms |
| 100K nodes | ~500 MB | ~500K nodes/sec | ~130 ms (est.) |
Query Performance at Scale
| Method | Complexity | Safe at 100K+ |
|---|---|---|
| get_node(id) | O(1) hash lookup | Yes |
| get_edge(id) | O(1) hash lookup | Yes |
| query_nodes_by_type(t) | O(n) type index scan | Yes |
| outgoing_edges(id) | O(degree) | Yes |
| bfs(start) | O(reachable subgraph) | Yes — visits only what's connected |
| shortest_path(a, b) | O(reachable subgraph) | Yes |
| all_nodes() | O(n) — loads all into Python list | Avoid for large graphs |
| pattern_match(types) | O(n · branching^depth), capped at max_results | Yes — default limit 1000 |
Recommendations
- < 100K nodes: Silk handles this comfortably on modern hardware (< 500 MB)
- 100K–1M nodes: Works, but monitor memory. Prefer targeted queries (get_node, query_nodes_by_type) over all_nodes()
- > 1M nodes: Consider sharding across multiple stores with application-level routing
Silk is designed for knowledge graphs (thousands to hundreds of thousands of richly-connected entities), not for big-data workloads (millions of rows with simple schemas). If you need the latter, use DuckDB or ClickHouse.
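For graphs past the single-store comfort zone, the application-level routing suggested above can be as simple as hashing each entity id to one of several stores. A sketch (the shard count and hash choice are up to your application):

```python
import hashlib

def shard_for(node_id: str, shard_count: int) -> int:
    """Stable routing: the same id maps to the same shard on every
    process and machine. (Python's built-in hash() is salted per
    process, so a cryptographic digest is used instead.)"""
    digest = hashlib.blake2b(node_id.encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % shard_count

# Route reads and writes to one of four stores
shards = 4
assert shard_for("alice", shards) == shard_for("alice", shards)  # deterministic
assert 0 <= shard_for("bob", shards) < shards
```

Each shard is then an ordinary Silk store; cross-shard edges require the application to resolve the remote end itself.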
Tutorial: Build a Distributed Note-Taking App
A complete walkthrough showing how to use Silk for a real project — a note-taking app where notes sync across devices without a server.
1. Define the Schema
# Illustrative ontology shape — see DESIGN.md for the exact format
ontology = {
    "node_types": {
        "notebook": {"required": ["title"]},
        "note": {"required": ["title", "body"]},
        "tag": {"required": ["name"]},
    },
    "edge_types": {
        "CONTAINS": {"source": "notebook", "target": "note"},
        "TAGGED": {"source": "note", "target": "tag"},
    },
}
2. Create a Store (Persistent)
# Each device gets its own store, backed by a local file
laptop = GraphStore("laptop", ontology, path="laptop.redb")
3. Add Data
# Create a notebook
laptop.add_node("work", "notebook", {"title": "Work"})

# Add notes
laptop.add_node("n1", "note", {"title": "Standup", "body": "Discuss sync demo"})
laptop.add_node("n2", "note", {"title": "Roadmap", "body": "Q3 priorities"})

# Organize: notebook contains notes
laptop.add_edge("c1", "CONTAINS", "work", "n1")
laptop.add_edge("c2", "CONTAINS", "work", "n2")

# Tag notes
laptop.add_node("urgent", "tag", {"name": "urgent"})
laptop.add_edge("t1", "TAGGED", "n1", "urgent")
4. Query the Graph
# Get all notes in a notebook
contains = laptop.outgoing_edges("work")
note_ids = [e["target"] for e in contains]
notes = [laptop.get_node(i) for i in note_ids]

# Find notes by tag — traverse TAGGED edges
tagged_edges = laptop.incoming_edges("urgent")
tagged_notes = [laptop.get_node(e["source"]) for e in tagged_edges]

# Use BFS to find all nodes connected to a notebook within 2 hops
nearby = laptop.bfs("work", max_depth=2)  # max_depth argument illustrative
5. Sync Between Devices
# Phone creates its own store
phone = GraphStore("phone", ontology, path="phone.redb")

# Phone adds a note while offline
phone.add_node("n3", "note", {"title": "Idea", "body": "Jotted down offline"})

# Later, when connected — sync both ways
payload = laptop.receive_sync_offer(phone.generate_sync_offer())
phone.merge_sync_payload(payload)
payload = phone.receive_sync_offer(laptop.generate_sync_offer())
laptop.merge_sync_payload(payload)

# Both devices now have all 3 notes
assert len(phone.query_nodes_by_type("note")) == len(laptop.query_nodes_by_type("note")) == 3
6. Handle Conflicts
# Both devices edit the same note at the same time
laptop.update_property("n1", "body", "Edited on the laptop")
phone.update_property("n1", "body", "Edited on the phone")
# Also, laptop adds a tag (non-conflicting change)
laptop.update_property("n1", "priority", "high")
# Sync — per-property LWW resolves the conflict
phone.merge_sync_payload(laptop.receive_sync_offer(phone.generate_sync_offer()))
laptop.merge_sync_payload(phone.receive_sync_offer(laptop.generate_sync_offer()))
# "body" goes to whichever write happened later (Lamport clock)
# "priority" is non-conflicting — preserved on both sides
7. Subscribe to Changes
def on_change(event):
    print("graph changed:", event)

sub = laptop.subscribe(on_change)
# ... any write or merge triggers the callback
This pattern — schema, local store, sync, conflict resolution — works for any domain: task managers, CRMs, inventory systems, collaborative editors, IoT dashboards.
Building from Source
# Rust tests (without Python bindings)
cargo test --no-default-features

# Python development build (assumes a maturin setup — see CONTRIBUTING.md)
maturin develop

# Python tests
pytest

# Benchmarks
cargo bench --no-default-features
Documentation
| Document | What it covers |
|---|---|
| README.md | Quick start, features, API reference, tutorial |
| WHY.md | Why Silk exists, what makes it different, benchmark analysis |
| DESIGN.md | Research foundations, 26 design decisions (D-001–D-026), architecture |
| PROTOCOL.md | Sync wire format specification — for implementing peers in other languages |
| CHANGELOG.md | Release history |
| SECURITY.md | Threat model, known limitations, vulnerability reporting |
| CONTRIBUTING.md | Development setup, PR guidelines |
| examples/ | Runnable Python scenarios (offline sync, partition heal, conflicts, ring topology) |
License
Licensed under the Functional Source License, Version 1.0, Apache 2.0 Change License (FSL-1.0-Apache-2.0).
What this means:
- Free to use, modify, and distribute for any purpose that doesn't compete with silk-graph
- After 2 years from each release, the code converts to Apache License 2.0 (fully permissive)
- Internal use, learning, research, and non-competing commercial use are unrestricted
See LICENSE.md for full terms.