Struct groupcache::Groupcache
pub struct Groupcache<Value: ValueBounds>(/* private fields */);
Contains most of the library API.

It is an Arc wrapper around GroupcacheInner, which implements the API, so that applications don't have to wrap groupcache in an Arc themselves in concurrent contexts, which are the target audience.
In order for groupcache peers to discover each other, the application author needs to hook in some service discovery mechanism:
- static IP addresses of hosts running groupcache
- consul
- kubernetes API server
- …
Integration of crate::ServiceDiscovery with groupcache is done via:
- Groupcache::set_peers - preferred for pull-based service discovery.

There are also:
- Groupcache::add_peer and Groupcache::remove_peer - preferred for push-based service discovery.

There is an example showing how to use the kubernetes API server for service discovery with groupcache - see here.
Implementations§
impl<Value: ValueBounds> Groupcache<Value>
pub fn builder(
    me: GroupcachePeer,
    loader: impl ValueLoader<Value = Value> + 'static,
) -> GroupcacheBuilder<Value>
In order to construct Groupcache, the application needs to provide:
- GroupcachePeer - necessary for routing,
- a ValueLoader implementation.
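The ValueLoader contract can be sketched with a simplified, synchronous stand-in (the real trait is async and its exact signature may differ; the `Loader` trait and `DatabaseLoader` here are hypothetical illustrations, not the crate's types):

```rust
use std::collections::HashMap;

// Simplified, synchronous stand-in for the crate's ValueLoader trait:
// it maps a cache miss for `key` to a freshly loaded value.
trait Loader {
    type Value;
    fn load(&self, key: &str) -> Result<Self::Value, String>;
}

// Hypothetical loader backed by an in-memory "database".
struct DatabaseLoader {
    rows: HashMap<String, String>,
}

impl Loader for DatabaseLoader {
    type Value = String;

    fn load(&self, key: &str) -> Result<String, String> {
        self.rows
            .get(key)
            .cloned()
            .ok_or_else(|| format!("no row for key {key}"))
    }
}

fn main() {
    let loader = DatabaseLoader {
        rows: HashMap::from([("user:1".to_string(), "alice".to_string())]),
    };
    assert_eq!(loader.load("user:1"), Ok("alice".to_string()));
    assert!(loader.load("user:2").is_err());
    println!("loader ok");
}
```

Groupcache calls the loader only on a cache miss for a key this peer owns; everything else is served from cache or forwarded.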
pub async fn get(&self, key: &str) -> Result<Value, GroupcacheError>
Provided a given key, groupcache attempts to figure out which peer owns the KV pair based on consistent hashing, and makes sure that only that peer handles loading values for this particular key - unless it's a hot value, which may be cached locally.

- If the KV pair is found in the local hot_cache, the cached value is returned.
- If the KV pair is owned by this peer and the value is cached in main_cache, it is returned.
- If the KV pair is owned by this peer and not cached, it is loaded via ValueLoader.
- If the KV pair is owned by a different peer, a gRPC request is made to that peer using the address provided in Groupcache::add_peer, and that peer is responsible for loading the value within the replicated set of processes. If the request to the peer fails, this peer tries to load the value locally via ValueLoader.
- If loading the value via ValueLoader fails, an error is returned.

Groupcache coordinates cache fills such that only one load in one process of the entire replicated set of processes populates the cache, then multiplexes the loaded value to all callers.

Caches can be customized via [Options].
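The owner-selection step can be sketched with a minimal consistent-hash ring built only from the standard library (`Ring` and its virtual-node layout are illustrative; the crate's actual hashing scheme may differ):

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

// Minimal consistent-hash ring: each peer is placed on the ring at
// several virtual points; a key is owned by the first peer clockwise
// from the key's hash.
struct Ring {
    points: BTreeMap<u64, String>,
}

impl Ring {
    fn new(peers: &[&str], vnodes: usize) -> Self {
        let mut points = BTreeMap::new();
        for peer in peers {
            for i in 0..vnodes {
                points.insert(hash_of(&(peer, i)), peer.to_string());
            }
        }
        Ring { points }
    }

    fn owner(&self, key: &str) -> &str {
        let h = hash_of(&key);
        self.points
            .range(h..)
            .next()
            .or_else(|| self.points.iter().next()) // wrap around the ring
            .map(|(_, peer)| peer.as_str())
            .expect("ring is not empty")
    }
}

fn main() {
    let ring = Ring::new(&["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"], 40);
    // Every replica computing the same ring agrees on the owner of a key,
    // so all processes route requests for "user:42" to the same peer.
    let owner = ring.owner("user:42");
    assert_eq!(owner, ring.owner("user:42"));
    println!("user:42 is owned by {owner}");
}
```

Because every node derives the same ring from the same peer set, no coordination is needed to agree on the owner of a key.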
pub async fn remove(&self, key: &str) -> Result<(), GroupcacheError>
The original groupcache library only provided Groupcache::get, but there are use-cases where KV pairs need to be updated, which is problematic to do in a distributed system based on consistent hashing.

This library does a simple thing:
- the remove method removes the KV pair from the main_cache of the owner of the KV pair,
- and removes the KV pair from the hot_cache of this node.

However, the removed KV pair may still be cached in the hot_cache of other nodes. To deal with this, the application can either:
- accept that there might be some stale values served from hot_cache for some time after the call to remove,
- tweak hot_cache in such a way that the staleness is acceptable for the application - for example, disable it entirely so that a value is cached only on the owner node of the KV pair. Note that this will likely increase the number of RPCs over the network since all requests will have to go to the owner.
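The staleness window described above can be illustrated with a toy two-node model (plain structs standing in for the real caches; this is not the crate's implementation):

```rust
use std::collections::HashMap;

// Toy model of one groupcache node: main_cache holds keys this node
// owns, hot_cache holds copies of hot keys owned by other nodes.
#[derive(Default)]
struct Node {
    main_cache: HashMap<String, String>,
    hot_cache: HashMap<String, String>,
}

fn main() {
    let mut owner = Node::default();
    let mut other = Node::default();

    // "user:1" is owned by `owner`; `other` holds a hot copy.
    owner.main_cache.insert("user:1".into(), "alice".into());
    other.hot_cache.insert("user:1".into(), "alice".into());

    // remove() clears the owner's main_cache and the calling node's
    // own hot_cache - but it does NOT reach other nodes' hot_caches.
    owner.main_cache.remove("user:1");
    owner.hot_cache.remove("user:1");

    assert!(owner.main_cache.get("user:1").is_none());
    // The stale copy survives on the other node until evicted or expired.
    assert!(other.hot_cache.get("user:1").is_some());
    println!("stale hot_cache entry remains on non-owner node");
}
```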
pub async fn set_peers(
    &self,
    peers: HashSet<GroupcachePeer>,
) -> Result<(), GroupcacheError>
service-discovery: once in a while the groupcache backend should refresh its view of groupcache nodes to make sure that groupcache routes traffic evenly to all healthy nodes.

This method can be used to notify groupcache about all peers in the cluster. Groupcache will figure out:
- which peers are new - and will try to open connections to them,
- which it already knew about - and will keep their connections open as is,
- which it knew about but are now missing - and will disconnect from them.

Instead of using this method directly in a loop, applications can implement crate::ServiceDiscovery and only provide the implementation that fetches the state of the cluster.

If it isn't possible to connect to some GroupcachePeers, this method will return an error listing all the peers it failed to connect to and won't add those peers to the routing table. It will, however, update its routing table with the peers that it successfully connected to.

Note that Groupcache::set_peers isn't broadcast to other peers; each groupcache peer needs to update its routing table via the same call. In other words, this only updates the local routing table, not the routing table of all nodes in the cluster.
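The peer diff described above can be sketched with plain HashSet operations (the `diff` function and `Peer` alias are hypothetical stand-ins, not the crate's internals):

```rust
use std::collections::HashSet;

// Stand-in for GroupcachePeer, just for illustration.
type Peer = &'static str;

// Sketch of the diff set_peers performs against the current routing
// table: connect to new peers, keep known ones, drop missing ones.
fn diff(
    current: &HashSet<Peer>,
    desired: &HashSet<Peer>,
) -> (Vec<Peer>, Vec<Peer>, Vec<Peer>) {
    let added = desired.difference(current).copied().collect();
    let kept = desired.intersection(current).copied().collect();
    let removed = current.difference(desired).copied().collect();
    (added, kept, removed)
}

fn main() {
    let current: HashSet<Peer> = ["10.0.0.1:8080", "10.0.0.2:8080"].into();
    let desired: HashSet<Peer> = ["10.0.0.2:8080", "10.0.0.3:8080"].into();

    let (added, kept, removed) = diff(&current, &desired);
    assert_eq!(added, vec!["10.0.0.3:8080"]);
    assert_eq!(kept, vec!["10.0.0.2:8080"]);
    assert_eq!(removed, vec!["10.0.0.1:8080"]);
    println!("connect {added:?}, keep {kept:?}, disconnect {removed:?}");
}
```

A pull-based discovery loop would compute `desired` from its source of truth (DNS, consul, the kubernetes API) and pass the whole set to set_peers each tick.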
pub async fn add_peer(&self, peer: GroupcachePeer) -> Result<(), GroupcacheError>
service-discovery: whenever the application notices that there is a new groupcache peer, it should notify groupcache so that the routing table/consistent hashing ring can be updated.

If it isn't possible to connect to the GroupcachePeer, this method will return an error and won't update the routing table. Upon success, some portion of requests will be forwarded to this peer.

Note that Groupcache::add_peer isn't broadcast to other peers; each groupcache peer needs to update its routing table via the same call. In other words, this only updates the local routing table, not the routing table of all nodes in the cluster.
pub async fn remove_peer(&self, peer: GroupcachePeer) -> Result<(), GroupcacheError>
service-discovery: whenever the application notices that a groupcache peer is no longer able to serve requests because:
- it is down,
- the server is not healthy,
- it has been moved to a different address by a container orchestrator,
- it isn't reachable,

it should notify groupcache so that the routing table/consistent hashing ring can be updated. Requests will no longer be forwarded to this peer.

Note that Groupcache::remove_peer isn't broadcast to other peers; each groupcache peer needs to update its routing table via the same call. In other words, this only updates the local routing table, not the routing table of all nodes in the cluster.
pub fn grpc_service(&self) -> GroupcacheServer<GroupcacheInner<Value>>
Retrieves the underlying groupcache gRPC server implementation.

The library doesn't start the gRPC server automatically; it is instead the responsibility of the application to do so. It is done this way to allow for customisations (tracing, metrics etc.), see examples.
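Assuming GroupcacheServer is a standard tonic service (as the signature suggests), starting it might look roughly like the following sketch; the `serve` helper and its generic bounds are illustrative assumptions, not taken from the crate's docs:

```rust
use tonic::transport::Server;

// Hypothetical sketch: serve the groupcache gRPC service with tonic.
// `cache` is assumed to be an already-built Groupcache<Value>.
async fn serve<Value: groupcache::ValueBounds>(
    cache: groupcache::Groupcache<Value>,
) -> Result<(), Box<dyn std::error::Error>> {
    let addr = cache.addr(); // this peer's advertised address
    Server::builder()
        .add_service(cache.grpc_service())
        .serve(addr)
        .await?;
    Ok(())
}
```

Because the application owns the server, it can layer in tonic middleware (tracing, metrics, auth) before calling serve.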
pub fn addr(&self) -> SocketAddr

Returns the address of this peer.
Trait Implementations§
impl<Value: Clone + ValueBounds> Clone for Groupcache<Value>
fn clone(&self) -> Groupcache<Value>
fn clone_from(&mut self, source: &Self)
Auto Trait Implementations§
impl<Value> !RefUnwindSafe for Groupcache<Value>
impl<Value> Send for Groupcache<Value>
impl<Value> Sync for Groupcache<Value>
impl<Value> Unpin for Groupcache<Value>
impl<Value> !UnwindSafe for Groupcache<Value>
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.