Samod
samod is a library for building collaborative applications which work offline
and don't require servers (though servers certainly can be useful). This is
achieved by representing data as automerge
documents. samod is wire compatible with the automerge-repo JavaScript library.
What does all that mean?
samod helps you manage automerge "documents", which are hierarchical data
structures composed of maps, lists, text, and primitive values - a little
like JSON. Every change you make to a document is recorded and you can move
back and forth through the history of a document - it's a bit like Git for
JSON. samod takes care of storing changes for you and synchronizing them
with connected peers. The interesting part is that given this very detailed
history which we never discard, we can merge documents with changes which
were made concurrently. This means that we can build applications which
allow multiple users to edit the same document without having to have all
changes go through a server.
How it all works
The library is structured around a [Repo], which talks to a [Storage]
instance and which you connect to other peers using
[Repo::connect]. Once you have a [Repo] you can
create documents using [Repo::create], or look up existing documents using
[Repo::find]. In either case you will get back a [DocHandle] which you can
use to interact with the document.
Typically then, your workflow will look like this:
- Initialize a [Repo] at application startup, passing it a [RuntimeHandle]
  implementation and a [Storage] implementation
- Whenever you have connections available (maybe you are connecting to a
  sync server, maybe you are receiving peer-to-peer connections), call
  [Repo::connect] to drive the connection state
- Create [DocHandle]s using [Repo::create] and look up existing documents
  using [Repo::find]
- Modify documents using [DocHandle::with_document]
Let's walk through each of those steps.
Initializing a [Repo]
To initialize a [Repo] you call [Repo::builder()] to obtain a
[RepoBuilder] which you use to configure the repo before calling [RepoBuilder::load()]
to actually load the repository.
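For example, here is a minimal sketch of initializing a repo. The builder
method names are taken from this page, but the exact signatures are
assumptions, and the tokio runtime is assumed to be available:

```rust
// Minimal repo setup sketch. `Repo::build_tokio` and `RepoBuilder::load`
// are the names described above; treat the exact signatures as assumptions.
use samod::Repo;

#[tokio::main]
async fn main() {
    // By default this uses in-memory storage; call `.with_storage(...)`
    // on the builder to persist data instead.
    let repo = Repo::build_tokio().load().await;

    // `repo` is now ready for `create`, `find`, and `connect`.
    let _ = repo;
}
```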
The first argument to builder is an implementation of [RuntimeHandle].
Default implementations are provided for tokio and gio which can be
conveniently used via [Repo::build_tokio] and [Repo::build_gio]
respectively. The [RuntimeHandle] trait is straightforward to implement if
you want to use some other async runtime.
By default samod uses an in-memory storage implementation. This is great
for prototyping, but in most cases you will want to persist data somewhere.
For that you'll need an implementation of [Storage] to pass to
[RepoBuilder::with_storage].
It is possible to use [Storage] and [AnnouncePolicy] implementations which
do not produce Send futures. In this case you will also need a runtime which
can spawn non-Send futures. See the runtimes section for more
details.
Connecting to peers
Once you have a Repo you can connect it to peers using [Repo::connect].
This method returns a future which must be driven to completion to run the
connection. A convenient transport for in-process testing is a pair of
futures::channel::mpsc channels: create two repos and connect each end of
the channel pair to one of them.
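A sketch of that wiring, assuming [Repo::connect] accepts a stream of
incoming messages, a sink for outgoing messages, and a [ConnDirection]
(this signature is an assumption based on the names mentioned above):

```rust
// Connect two in-process repos over futures mpsc channels (sketch).
use futures::channel::mpsc;
use samod::{ConnDirection, Repo};

async fn connect_pair(alice: &Repo, bob: &Repo) {
    // Two unidirectional channels form the "wire" between the repos.
    let (alice_tx, bob_rx) = mpsc::unbounded::<Vec<u8>>();
    let (bob_tx, alice_rx) = mpsc::unbounded::<Vec<u8>>();

    // Each connect() call returns a future which drives that side of the
    // connection; both must be polled for sync traffic to flow.
    futures::join!(
        alice.connect(alice_rx, alice_tx, ConnDirection::Outgoing),
        bob.connect(bob_rx, bob_tx, ConnDirection::Incoming),
    );
}
```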
If you are using tokio and connecting to something like a TCP socket you
can use [Repo::connect_tokio_io], which reduces some boilerplate.
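For instance, a sketch of a TCP accept loop, assuming [Repo] is cheaply
cloneable and that connect_tokio_io takes the socket plus a [ConnDirection]
(both assumptions):

```rust
// Accept TCP connections and hand each socket to the repo (sketch).
use samod::{ConnDirection, Repo};
use tokio::net::TcpListener;

async fn listen(repo: Repo) -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (socket, _addr) = listener.accept().await?;
        let repo = repo.clone();
        tokio::spawn(async move {
            // Runs the sync protocol over the socket until it closes.
            let _ = repo.connect_tokio_io(socket, ConnDirection::Incoming).await;
        });
    }
}
```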
If you are connecting to a JavaScript sync server using WebSockets you can enable the
tungstenite feature and use [Repo::connect_websocket]. If you are accepting
WebSocket connections in an axum server you can use [Repo::accept_axum].
Managing Documents
Once you have a [Repo] you can use it to manage [DocHandle]s. A
[DocHandle] represents an [automerge] document which the [Repo]
is managing. "managing" here means a few things:
- Any changes made to the document using [DocHandle::with_document] will be
  persisted to storage and synchronized with connected peers (subject to the
  [AnnouncePolicy]).
- Any changes received from connected peers will be applied to the document
  and made visible to the application. You can listen for these changes
  using [DocHandle::changes].
To create a new document you use [Repo::create] which will return
once the document has been persisted to storage. To look up an existing
document you use [Repo::find]. This will first look in storage, then
if the document is not found in storage it will request the document
from all connected peers (again subject to the [AnnouncePolicy]). If
any peer has the document, the future returned by [Repo::find] will
resolve once we have synchronized with at least one such peer.
You can make changes to a document using [DocHandle::with_document].
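For example, assuming the closure passed to [DocHandle::with_document]
receives a mutable automerge document (an assumption; the transaction calls
themselves are the real [automerge] API):

```rust
// Mutate a document inside with_document (sketch).
use automerge::{transaction::Transactable, ROOT};

fn set_title(handle: &samod::DocHandle) {
    handle.with_document(|doc| {
        // All edits in one transaction are committed as a single change
        // in the document's history.
        let mut tx = doc.transaction();
        tx.put(ROOT, "title", "my document").unwrap();
        tx.commit();
    });
}
```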
Announce Policies
By default, samod will announce all the [DocHandle]s it is synchronizing
to all connected peers and will also send requests to any connected peers
when you call [Repo::find]. This is often not what you want. To customize
this logic you pass an implementation of [AnnouncePolicy] to
[RepoBuilder::with_announce_policy]. Note that [AnnouncePolicy] is implemented
for `Fn(&DocumentId) -> bool`, so you can just pass a closure if you want.
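For example, a sketch of a repo which never announces documents to peers
(the builder method names follow this page; exact signatures are
assumptions):

```rust
// A "quiet" repo: never announces documents proactively (sketch).
use samod::{DocumentId, Repo};

async fn quiet_repo() -> Repo {
    Repo::build_tokio()
        // The closure is the AnnouncePolicy: return false to never announce.
        .with_announce_policy(|_doc: &DocumentId| false)
        .load()
        .await
}
```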
Runtimes
[RuntimeHandle] is a trait which is intended to abstract over the various
async runtimes available in the Rust ecosystem. The most common runtime is tokio.
tokio is a work-stealing runtime, which means that the futures spawned on it
must be [Send] so that they can be moved between threads. This means that
[RuntimeHandle::spawn] requires [Send] futures, which in turn means that
the futures returned by the [Storage] and [AnnouncePolicy] traits must
also be [Send] so that they can be spawned onto the [RuntimeHandle].
In many cases though, you may have a runtime which doesn't require [Send]
futures and you may have storage and announce policy implementations which
cannot produce [Send] futures. This would often be the case in single
threaded runtimes, for example. In these cases you can instead implement
[LocalRuntimeHandle], which doesn't require [Send] futures, and then
implement the [LocalStorage] and [LocalAnnouncePolicy] traits for
your storage and announce policy implementations. You configure all these
things via the [RepoBuilder] struct. Once you've configured the storage
and announce policy implementations to use local variants you can then
create a local [Repo] using [RepoBuilder::load_local].
Concurrency
Typically samod will be managing many documents: one for each [DocHandle]
you retrieve via [Repo::create] or [Repo::find], but also one for each
document about which sync messages are received from remote peers (e.g.
a sync server would have no [DocHandle]s open but would still be running
many document tasks). By default document tasks are handled on the
async runtime provided to the [RepoBuilder], but this can be undesirable.
Document operations can be compute intensive and so responsiveness may
benefit from running them on a separate thread pool. This is the purpose
of the [RepoBuilder::with_concurrency] method, which allows you to
configure how document operations are processed. If you want to use the
threadpool approach you will need to enable the `threadpool` feature.
Why not just Automerge?
automerge is a low level library. It provides routines for manipulating
documents in memory and an abstract data sync protocol. It does not actually
hook this up to any kind of network or storage. Most of the work involved
in doing this plumbing is straightforward, but if every application does
it itself, we don't end up with interoperable applications. In particular
we don't end up with fungible sync servers. One of the core goals of this
library is to allow application authors to be agnostic as to where the
user synchronises data by implementing a generic network and storage layer
which all applications can use.
Example
Here's a more complete example of using samod to manage a todo list:
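This is a sketch: the samod calls follow the method names used on this page
and their exact signatures are assumptions, while the transaction calls are
the real [automerge] API. Whether [DocumentId] implements Display is also
an assumption.

```rust
// A minimal todo-list document managed by samod (sketch).
use automerge::{transaction::Transactable, ObjType, ROOT};
use samod::Repo;

#[tokio::main]
async fn main() {
    let repo = Repo::build_tokio().load().await;

    // create() resolves once the new document has been persisted.
    let handle = repo.create().await;

    handle.with_document(|doc| {
        let mut tx = doc.transaction();
        // A "todos" list whose items are maps with "title" and "done" fields.
        let todos = tx.put_object(ROOT, "todos", ObjType::List).unwrap();
        let item = tx.insert_object(&todos, 0, ObjType::Map).unwrap();
        tx.put(&item, "title", "buy milk").unwrap();
        tx.put(&item, "done", false).unwrap();
        tx.commit();
    });

    // Peers can load the same document with Repo::find(document_id).
    println!("created document {}", handle.document_id());
}
```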