kube-rs
Rust client for Kubernetes in the style of a more generic client-go. It makes certain assumptions about the kubernetes api to allow writing generic abstractions, and as such contains rust reinterpretations of `Reflector` and `Informer` to allow writing kubernetes controllers/watchers/operators easily.

You can operate entirely without openapi definitions if you are operating on a CustomResource, but if you require the full definitions of native objects, it is easier to compile with the `openapi` feature to get accurate struct representations via k8s-openapi.
NB: This library is currently undergoing a lot of changes with async/await stabilizing. Please check the CHANGELOG when upgrading.
Installation
To use the openapi generated types:
```toml
[dependencies]
kube = { version = "0.27.0", features = ["openapi"] }
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }
```
otherwise:
```toml
[dependencies]
kube = "0.27.0"
```
The latter is fine in a CRD-only use case.
Usage
See the examples directory for how to watch over resources in a simplistic way. **NB:** Running the examples relies on the non-default `--features=openapi` feature flag.
See version-rs for a super light (~100 lines) actix, prometheus, and deployment api setup.

See controller-rs for a full actix example, with circleci and kube yaml.

NB: the actix examples with futures currently rely on git/alpha dependencies.
Api
It's currently recommended to compile with the "openapi" feature if you want an easy experience with accurate native object representations:
```rust
use kube::api::{Api, DeleteParams, PatchParams};
use serde_json::json;

let pods = Api::v1Pod(client).within("default");

let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.containers);

let patch = json!({"spec": { "activeDeadlineSeconds": 5 }});
let patched = pods.patch("blog", &PatchParams::default(), serde_json::to_vec(&patch)?).await?;
assert_eq!(patched.spec.active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;
```
See the pod_openapi or crd_openapi examples for more uses.
Informer
The main abstraction from kube::runtime is `Informer<K>`. This is a struct with the internal behaviour for watching kube resources; it maintains only a queue of `WatchEvent` elements along with the last seen `resourceVersion`.

You tell it what `KubeObject`-implementing type you want to use. You can use `Object<P, U>` to get an automatic implementation, e.g. `Object<PodSpec, PodStatus>`.
The spec and status structs can be as complete or incomplete as you like. For instance, using the complete structs from k8s-openapi:
```rust
use k8s_openapi::api::core::v1::{PodSpec, PodStatus};

type Pod = Object<PodSpec, PodStatus>;
let api = Api::v1Pod(client);
let inf = Informer::new(api);
```
The main feature of `Informer<K>` is being able to subscribe to events while having a streaming `.poll()` open:

```rust
let mut pods = inf.poll().await?.boxed(); // starts a watch and returns a stream

while let Some(event) = pods.try_next().await? {
    handle_event(event).await;
}
```
How you handle them is up to you: you could build your own state, you could call a kube client, or you could simply print events. Here's a sketch of how such a handler would look:

```rust
async fn handle_event(ev: WatchEvent<Pod>) {
    match ev {
        WatchEvent::Added(o) => println!("New pod: {}", o.metadata.name),
        WatchEvent::Modified(o) => println!("Modified pod: {}", o.metadata.name),
        WatchEvent::Deleted(o) => println!("Deleted pod: {}", o.metadata.name),
        WatchEvent::Error(e) => println!("Error event: {:?}", e),
    }
}
```
The node_informer example shows how to use api calls from within event handlers.
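To make the bookkeeping above concrete, here is a small std-only sketch (the names are illustrative, not the kube-rs API) of the informer's core idea: a queue of watch events plus the last seen resourceVersion, which lets a restarted watch resume where it left off instead of re-listing everything:

```rust
use std::collections::VecDeque;

// Illustrative model of what an informer tracks internally.
#[derive(Debug, Clone, PartialEq)]
enum WatchEvent {
    Added(String),    // object name
    Modified(String),
    Deleted(String),
}

struct EventQueue {
    events: VecDeque<WatchEvent>,
    resource_version: String,
}

impl EventQueue {
    fn new() -> Self {
        EventQueue { events: VecDeque::new(), resource_version: "0".to_string() }
    }

    // Every received event advances the resourceVersion bookmark.
    fn observe(&mut self, ev: WatchEvent, version: &str) {
        self.events.push_back(ev);
        self.resource_version = version.to_string();
    }

    // Consumers drain events in arrival order.
    fn pop(&mut self) -> Option<WatchEvent> {
        self.events.pop_front()
    }
}

fn main() {
    let mut q = EventQueue::new();
    q.observe(WatchEvent::Added("blog".to_string()), "12340");
    q.observe(WatchEvent::Modified("blog".to_string()), "12341");
    assert_eq!(q.pop(), Some(WatchEvent::Added("blog".to_string())));
    assert_eq!(q.resource_version, "12341");
    println!("last seen resourceVersion: {}", q.resource_version);
}
```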
Reflector
The other big abstraction exposed from kube::runtime is `Reflector<K>`. This is a cache of a resource that's meant to "reflect the resource state in etcd".
It handles the api mechanics for watching kube resources, tracking resourceVersions, and using watch events; it builds and maintains an internal map.
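As a rough illustration of that internal map (a std-only sketch, not the kube-rs types), a reflector-style cache folds each watch event into a map keyed by object name:

```rust
use std::collections::HashMap;

// Illustrative watch event carrying (name, serialized object).
#[derive(Debug)]
enum WatchEvent {
    Added(String, String),
    Modified(String, String),
    Deleted(String),
}

// Fold a single event into the cache: upsert on Added/Modified, drop on Deleted.
fn apply(cache: &mut HashMap<String, String>, ev: WatchEvent) {
    match ev {
        WatchEvent::Added(name, obj) | WatchEvent::Modified(name, obj) => {
            cache.insert(name, obj);
        }
        WatchEvent::Deleted(name) => {
            cache.remove(&name);
        }
    }
}

fn main() {
    let mut cache = HashMap::new();
    apply(&mut cache, WatchEvent::Added("blog".to_string(), "spec v1".to_string()));
    apply(&mut cache, WatchEvent::Modified("blog".to_string(), "spec v2".to_string()));
    assert_eq!(cache.get("blog"), Some(&"spec v2".to_string()));
    apply(&mut cache, WatchEvent::Deleted("blog".to_string()));
    assert!(cache.is_empty());
    println!("cache empty: {}", cache.is_empty());
}
```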
To use it, you just feed in `P` as a `Spec` struct and `U` as a `Status` struct, which can be as complete or incomplete as you like. Here, using the complete structs via k8s-openapi's `PodSpec`:
```rust
let api = Api::v1Pod(client).within(&namespace);
let rf = Reflector::new(api).timeout(10).init().await?;
```
then you should `poll()` the reflector, and use `state()` to get the current cached state:

```rust
rf.poll().await?; // watches + updates state

// Clone state and do something with it
rf.state().await.into_iter().for_each(|pod| {
    println!("Found pod {} with containers {:?}",
        pod.metadata.name,
        pod.spec.containers.iter().map(|c| c.name.clone()).collect::<Vec<_>>(),
    );
});
```
Note that `poll` holds the future for 290s by default, but you can (and should) call `.state()` from another async context (see the reflector examples for how to spawn an async task to do this).
If you need the details of just a single object, you can use the more efficient `Reflector::get` and `Reflector::get_within`.
Examples
Examples that show a few common flows. These all have logging of this library set up to `trace`. Note that most of the examples require the non-default `openapi` feature to be enabled in order to compile.
```sh
cargo run --example pod_informer # watch pod events
cargo run --example event_informer # watch event events
cargo run --example node_informer # watch for broken nodes
```
or for the reflectors:

```sh
cargo run --example pod_reflector
```
for one based on a CRD, you need to create the CRD first:

```sh
kubectl apply -f examples/foo.yaml
cargo run --example crd_reflector
```
then you can `kubectl apply -f crd-baz.yaml -n default`, or `kubectl delete -f crd-baz.yaml -n default`, or `kubectl edit foos baz -n default` to verify that the events are being picked up.
For straight API use examples, try:

```sh
NAMESPACE=dev cargo run --example crd_api
```
Raw Api
You can elide the large `k8s-openapi` dependency if you are only working with Informers/Reflectors, or if you are happy to supply partial or complete definitions of the native objects you are working with:
```rust
use serde_json::json;

let foos = RawApi::customResource("foos")
    .version("v1")
    .group("clux.dev")
    .within("default");

type Foo = Object<FooSpec, FooStatus>;
let rf: Reflector<Foo> = Reflector::raw(client.clone(), foos.clone()).init().await?;

let fdata = json!({
    "apiVersion": "clux.dev/v1",
    "kind": "Foo",
    "metadata": { "name": "baz" },
    "spec": { "name": "baz", "info": "old baz" },
});
let req = foos.create(&PostParams::default(), serde_json::to_vec(&fdata)?)?;
let o = client.request::<Foo>(req).await?;

let fbaz = client.request::<Foo>(foos.get("baz")?).await?;
assert_eq!(fbaz.spec.info, "old baz");
```
If you supply a partial definition of native objects then you can save on reflector memory usage.
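As a hypothetical illustration of such a partial definition: since serde ignores unknown JSON fields by default, a spec struct only needs the fields you actually read, and every cached object stays correspondingly small. `MinimalPodSpec` below is made up for this sketch, not a k8s-openapi type:

```rust
// Hypothetical partial spec: keep only the fields you read.
#[derive(Clone, Debug, PartialEq)]
struct MinimalPodSpec {
    containers: Vec<String>, // just the container names
}

fn main() {
    // A cached object built from this struct carries only one Vec,
    // instead of the dozens of fields on the full PodSpec.
    let spec = MinimalPodSpec { containers: vec!["nginx".to_string()] };
    assert_eq!(spec.containers, vec!["nginx".to_string()]);
    println!("{:?}", spec);
}
```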
The node_informer and crd_reflector examples use this at the moment (although node_informer is cheating by supplying k8s-openapi structs manually anyway). The crd_api example also shows how to do it for CRDs.
Rustls
Kube has basic support for rustls as a replacement for the `openssl` dependency. To use this, turn off default features, and enable `rustls-tls`:

```sh
cargo run --example pod_informer --no-default-features --features "rustls-tls,openapi"
```

or in `Cargo.toml`:
```toml
[dependencies]
kube = { version = "0.27.0", default-features = false, features = ["openapi", "rustls-tls"] }
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }
```
This will pull in the variant of `reqwest` that also uses its `rustls-tls` feature.
License
Apache 2.0 licensed. See LICENSE for details.