kube-rs
Rust client for Kubernetes with reinterpretations of the Reflector and Informer abstractions from the go client.
This client aims to cater to the common controller/operator case, but lets you pull in dependencies like k8s-openapi for accurate struct representations.
Usage
See the examples directory for how to watch resources in a simple way.
See controller-rs for a full example with actix.
Reflector
One of the main abstractions exposed from kube::api is Reflector<P, U>. This is a cache of a resource that is meant to "reflect the resource state in etcd".
It handles the api mechanics for watching kube resources and tracking resourceVersions, and applies the watch events it receives to build and maintain an internal map.
To use it, you just feed in P as a Spec struct and U as a Status struct, which can be as complete or incomplete as you like. Here, using the complete structs via k8s-openapi:
```rust
// client is a configured kube client; the namespace is chosen for the example
let api = Api::v1Pod(client).within("default");
let rf: Reflector<PodSpec, PodStatus> = Reflector::new(api)
    .timeout(10) // low timeout so poll() returns regularly
    .init()?;
```
Then you can poll() the reflector, and read() to get the current cached state:
```rust
rf.poll()?; // watches + updates state

// read the cached state and use it:
rf.read()?.into_iter().for_each(|pod| {
    println!("Found pod: {:?}", pod);
});
```
The reflector itself is responsible for acquiring the write lock and updating the state, as long as you call poll() periodically.
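One common pattern is to drive the polling from a background thread, so that the main thread can keep reading fresh state. This is a sketch only, and assumes the reflector can be cloned cheaply and shared across threads (as done in controller-rs):

```rust
// sketch: poll in a background thread so rf.read() stays fresh elsewhere
let rf2 = rf.clone();
std::thread::spawn(move || loop {
    if let Err(e) = rf2.poll() {
        // hypothetical error handling; log and keep polling
        eprintln!("poll error: {:?}", e);
    }
});
```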
Informer
The other main abstraction from kube::api is Informer<P, U>. This is a struct with the internal behaviour for watching kube resources, but it maintains only a queue of WatchEvent elements along with the last seen resourceVersion.
You tell it what the type parameters correspond to: P should be a Spec struct, and U should be a Status struct. Again, these can be as complete or incomplete as you like. For instance, using the complete structs from k8s-openapi:
```rust
let api = Api::v1Pod(client);
let inf: Informer<PodSpec, PodStatus> = Informer::new(api)
    .init()?;
```
The main feature of Informer<P, U> is that after calling .poll() you handle the events and decide what to do with them yourself:
```rust
inf.poll()?; // watches + queues events

while let Some(event) = inf.pop() {
    handle_event(event)?; // dispatch to a handler of your choosing
}
```
How you handle them is up to you: you could build your own state, call a kube client, or simply print the events. Here's a sketch of how such a handler could look:
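(A minimal sketch; the WatchEvent variants and the failure-based error type are assumed from kube's api module of this era, and the match arm bodies are purely illustrative.)

```rust
use kube::api::WatchEvent;
use k8s_openapi::api::core::v1::{PodSpec, PodStatus};

// a handler that just prints what happened to each pod
fn handle_event(ev: WatchEvent<PodSpec, PodStatus>) -> Result<(), failure::Error> {
    match ev {
        WatchEvent::Added(o) => println!("Added pod: {}", o.metadata.name),
        WatchEvent::Modified(o) => println!("Modified pod: {}", o.metadata.name),
        WatchEvent::Deleted(o) => println!("Deleted pod: {}", o.metadata.name),
        WatchEvent::Error(e) => println!("Error event: {:?}", e),
    }
    Ok(())
}
```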
The node_informer example shows how to make api calls from within event handlers.
Examples
Examples that show a few common flows. These all have logging of this library set up to trace:

```sh
# watch pod events in kube-system
cargo run --example pod_informer
# watch for broken nodes
cargo run --example node_informer
```
or for the reflectors:

```sh
cargo run --example pod_reflector
cargo run --example node_reflector
```
For one based on a CRD, you need to create the CRD first.
Then you can kubectl apply -f crd-baz.yaml -n kube-system, kubectl delete -f crd-baz.yaml -n kube-system, or kubectl edit foos baz -n kube-system to verify that the events are being picked up.
Timing
All watch calls have a default timeout of 10 seconds (and kube always waits that long regardless of activity). If you would like to hammer the API less, you can call .poll() less often; events will collect on the kube side, provided you don't wait so long that you get a Gone error. You can configure the timeout with .timeout(n) on the Informer or Reflector.
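For example, a sketch reusing the Informer setup from above (60 is an arbitrary value chosen for illustration):

```rust
// keep each watch call open for 60 seconds instead of the 10 second default
let inf: Informer<PodSpec, PodStatus> = Informer::new(api)
    .timeout(60)
    .init()?;
```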
License
Apache 2.0 licensed. See LICENSE for details.