# Tembo Operator
## Requirements
## Rust Linting
Run linting with `cargo fmt` and `clippy`.
Clippy:
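For example (the repo's `justfile` may wrap this with extra flags):

```sh
cargo clippy
```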
`cargo fmt`:
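For example:

```sh
cargo fmt
```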
## Running Locally
To build and run the operator locally:
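A minimal sketch of the usual cargo workflow:

```sh
cargo build
cargo run
```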
- Or, you can run with auto reloading of your local changes, as sketched below.
- First, install `cargo-watch`
- Then, run with auto reload
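For example (`cargo-watch` is installed from crates.io; `-x run` re-runs `cargo run` whenever files change):

```sh
cargo install cargo-watch
cargo watch -x run
```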
## Install on an existing cluster
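If you are installing via a Helm chart, it might look like the sketch below; the chart path, release name, and namespace are assumptions, so check the repository for the actual values:

```sh
helm install tembo-operator ./charts/tembo-operator \
  --namespace tembo-system --create-namespace
```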
## Integration testing
To automatically set up a local cluster for functional testing, use this script.
This will start a local kind cluster, annotate the `default` namespace for testing, and install the CRD definition.
Or, you can follow the steps below.
- Connect to a cluster that is safe to run the tests against
- Set your kubecontext to any namespace, and label it to indicate it is safe to run tests against this cluster (do not do this against non-test clusters):

```sh
NAMESPACE=<namespace> just
```
- Start or install the controller you want to test (see the following sections); do this in a separate shell from where you will run the tests:

```sh
export DATA_PLANE_BASEDOMAIN=localhost
cargo run
```
- Run the integration tests:
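A typical invocation, assuming the integration tests are marked `#[ignore]` as is common in kube-rs projects (adjust to this repo's actual test setup):

```sh
cargo test -- --ignored
```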
Note: for the integration tests to work, you will need to be logged in to `plat-dev` via the CLI under the "PowerUserAccess" role found here: https://d-9067aa6f32.awsapps.com/start (click "Command line or programmatic access").
- The integration tests assume the operator is already installed on, or running against, the cluster.
### Other testing notes
- Include the `--nocapture` flag to show print statements during test runs:
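For example:

```sh
cargo test -- --nocapture
```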
## Cluster
As an example, install kind. Once installed, follow these instructions to create a kind cluster connected to a local image registry.
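A basic cluster (without the local-registry wiring, which follows kind's own documented setup script) can be created with:

```sh
kind create cluster
```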
## CRD
Apply the CRD from the cached file, or pipe it from `crdgen` (best if changing it):
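A sketch of both options, assuming the cached CRD lives under `yaml/` and the repo provides a `crdgen` binary (the file and binary names are assumptions):

```sh
# Apply the cached CRD definition
kubectl apply -f yaml/crd.yaml
# Or regenerate and apply it after changing the spec
cargo run --bin crdgen | kubectl apply -f -
```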
## OpenTelemetry (TBD)
Set up an OpenTelemetry Collector in your cluster. Tempo / opentelemetry-operator / grafana agent should all work out of the box. If your collector does not support gRPC OTLP, you need to change the exporter in `main.rs`.
## Running
### Locally
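Run the controller against your current kube context, as in the local-run instructions above:

```sh
cargo run
```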
- Or, with optional telemetry (change as per requirements):

```sh
OPENTELEMETRY_ENDPOINT_URL=https://0.0.0.0:55680 RUST_LOG=info,kube=trace,controller=debug cargo run
```
### In-cluster
Compile the controller with:
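Presumably a standard release build:

```sh
cargo build --release
```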
Build an image with:
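The image name below is a placeholder; use whatever tag your registry expects:

```sh
docker build -t localhost:5001/tembo-operator:local .
```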
Push the image to your local registry with:
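Matching the placeholder tag above (kind's documented local-registry setup listens on `localhost:5001`):

```sh
docker push localhost:5001/tembo-operator:local
```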
Edit the deployment's image tag appropriately, then run:
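Assuming the deployment manifest lives under `yaml/` (the path is an assumption):

```sh
kubectl apply -f yaml/deployment.yaml
```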
NB: the namespace is assumed to be `default`. If you need a different namespace, you can replace `default` with whatever you want in the YAML and set the namespace in your current-context to get all the commands here to work.
## Usage
In either of the run scenarios, your app is listening on port `8080`, and it will observe events.
Try some of:
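For example (the sample manifest is the one used later in this README; resource names may differ):

```sh
kubectl apply -f yaml/sample-coredb.yaml
kubectl get coredb -o yaml
kubectl delete coredb <name>
```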
The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or in the `.status` object output of `kubectl get coredb -o yaml`.
## Webapp output
The sample web server exposes some example metrics and debug information you can inspect with `curl`.
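For example, against the port-8080 listener mentioned above:

```sh
curl 0.0.0.0:8080/metrics
```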
```
# HELP cdb_controller_reconcile_duration_seconds The duration of reconcile to complete in seconds
# TYPE cdb_controller_reconcile_duration_seconds histogram
# HELP cdb_controller_reconciliation_errors_total reconciliation errors
# TYPE cdb_controller_reconciliation_errors_total counter
# HELP cdb_controller_reconciliations_total reconciliations
# TYPE cdb_controller_reconciliations_total counter
```
The metrics will be auto-scraped if you have a standard `PodMonitor` for `prometheus.io/scrape`.
## Development
Updating the CRD:
- Edit the `CoreDBSpec` struct as needed.
- Regenerate the CRD:

```sh
just generate-crd
```
## Connecting to Postgres locally
Start a Tembo instance:

```sh
kubectl apply -f yaml/sample-coredb.yaml
```
Get the connection password and save it as an environment variable.
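A sketch, assuming the operator stores the password in a connection Secret named after the instance (run `kubectl get secrets` to find the real name):

```sh
# Secret name is an assumption; adjust to what `kubectl get secrets` shows
export PGPASSWORD=$(kubectl get secret sample-coredb-connection \
  -o jsonpath='{.data.password}' | base64 -d)
```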
Add the following line to `/etc/hosts`:
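The exact hostname depends on the instance name and `DATA_PLANE_BASEDOMAIN` (set to `localhost` earlier); assuming the sample instance, it would look like:

```
# Hostname is an assumption based on the sample instance name
127.0.0.1 sample-coredb.localhost
```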
Connect to the running Postgres instance:
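A sketch using `psql`, reusing the hostname and `PGPASSWORD` from the previous steps (the user and port are assumptions):

```sh
psql postgres://postgres:$PGPASSWORD@sample-coredb.localhost:5432
```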