# lmrc-k3s

Part of the LMRC Stack - an Infrastructure-as-Code toolkit for building production-ready Rust applications.

A Rust library for managing K3s Kubernetes clusters via SSH.
## Features
- Simple API: Easy-to-use interface for cluster management
- Declarative Reconciliation: Define desired cluster state and automatically add/remove workers
- SSH-based: Manages remote nodes via SSH connections using the `ssh-manager` crate
- Async/Await: Built on Tokio for efficient async operations
- Flexible Configuration: Support for custom K3s versions, tokens, and disabled components
- Private Network Support: Handles both public and private IP configurations for cloud deployments
- Cluster Status Tracking: Query real-time cluster status from kubectl
- Graceful Node Removal: Automatically drain nodes before removal
- Idempotent Operations: Safe to run multiple times, won't reinstall if already present
- Well Documented: Comprehensive API documentation with examples
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
lmrc-k3s = "0.1"
tokio = { version = "1.0", features = ["full"] }
```
## Quick Start

```rust
use lmrc_k3s::K3sManager;

// Reconstructed example; IPs and token are placeholders, and exact
// signatures follow the API Overview below.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let manager = K3sManager::builder()
        .token("my-cluster-token")
        .build()?;

    // Install K3s on the master node (no private IP, no forced reinstall)
    manager.install_master("203.0.113.10", None, false).await?;

    Ok(())
}
```
## Usage Examples
### Basic Cluster Setup

```rust
use lmrc_k3s::K3sManager;

// Reconstructed example; IPs and token are placeholders.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let manager = K3sManager::builder()
        .token("my-cluster-token")
        .build()?;

    // One master and two workers
    manager.install_master("203.0.113.10", None, false).await?;
    manager.join_worker("203.0.113.11", "203.0.113.10", None).await?;
    manager.join_worker("203.0.113.12", "203.0.113.10", None).await?;

    Ok(())
}
```
### Custom K3s Version and Disabled Components

```rust
// Component names are illustrative (standard K3s components)
let manager = K3sManager::builder()
    .version("v1.28.5+k3s1")
    .token("my-cluster-token")
    .disable(vec!["traefik", "servicelb"])
    .build()?;
```
### Private Network Setup (e.g., Hetzner Cloud)

```rust
// Install master with private IP for internal communication
manager.install_master("203.0.113.10", Some("10.0.0.2"), false).await?;

// Join worker with private IP
manager.join_worker("203.0.113.11", "203.0.113.10", Some("10.0.0.3")).await?;
```
### Check Cluster Status

```rust
// Check if K3s is installed
if manager.is_installed("203.0.113.10").await? {
    println!("K3s is already installed on the master");
}
```
### Uninstall K3s

```rust
// Uninstall from master
manager.uninstall("203.0.113.10", true).await?;

// Uninstall from worker
manager.uninstall("203.0.113.11", false).await?;
```
### Declarative Cluster Reconciliation

The library supports declarative cluster management: define your desired state and let lmrc-k3s handle the rest.

```rust
use lmrc_k3s::{DesiredClusterConfig, K3sManager};

// Define desired state (field names and IPs are illustrative)
let mut desired = DesiredClusterConfig {
    master: "10.0.1.10".to_string(),
    workers: vec![
        "10.0.1.11".to_string(),
        "10.0.1.12".to_string(),
        "10.0.1.13".to_string(),
    ],
};

// Reconcile - will add/remove workers to match desired state
manager.reconcile(&desired).await?;

// Later, scale up by adding to desired state
desired.workers.push("10.0.1.14".to_string());
manager.reconcile(&desired).await?; // Adds worker .14

// Scale down by removing from desired state
desired.workers.retain(|w| w != "10.0.1.13");
manager.reconcile(&desired).await?; // Removes worker .13 gracefully
```
Benefits:
- Idempotent - Safe to run multiple times
- Automatic diffing - Only makes necessary changes
- Graceful removal - Drains nodes before deletion
- State-aware - Won't reinstall if already present
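The "automatic diffing" step can be sketched in plain Rust: given the current and desired worker sets, compute which nodes to join and which to remove. The function below is illustrative only and is not part of the crate's API.

```rust
use std::collections::HashSet;

// Illustrative sketch of the diff a reconciler computes; the crate's
// actual plan type and algorithm may differ.
fn diff_workers(current: &[&str], desired: &[&str]) -> (Vec<String>, Vec<String>) {
    let cur: HashSet<&str> = current.iter().copied().collect();
    let des: HashSet<&str> = desired.iter().copied().collect();
    // Workers in the desired state but not yet in the cluster get joined...
    let to_add = desired
        .iter()
        .filter(|w| !cur.contains(*w))
        .map(|w| w.to_string())
        .collect();
    // ...and workers no longer desired get drained and removed.
    let to_remove = current
        .iter()
        .filter(|w| !des.contains(*w))
        .map(|w| w.to_string())
        .collect();
    (to_add, to_remove)
}

fn main() {
    let (add, remove) = diff_workers(
        &["10.0.1.11", "10.0.1.12", "10.0.1.13"],
        &["10.0.1.11", "10.0.1.12", "10.0.1.14"],
    );
    println!("join: {add:?}, remove: {remove:?}");
}
```

Because the diff only touches nodes that differ between the two sets, running it twice against the same desired state is a no-op, which is what makes reconciliation idempotent.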
## Examples

The repository includes several complete examples:

- `basic_setup.rs` - Simple cluster setup with one master and two workers
- `private_network.rs` - Cloud deployment with private networking
- `cluster_status.rs` - Check cluster status and node information
- `declarative_reconciliation.rs` - Declarative cluster management with automatic reconciliation

Run examples with:

```bash
cargo run --example basic_setup
```
## SSH Configuration

This crate uses the `ssh-manager` crate for SSH connections. Ensure you have:

- SSH access configured to your target nodes
- SSH keys set up (typically in `~/.ssh/`)
- Root access on target nodes (K3s installation requires root)
### Authentication Methods

Public Key Authentication (Default):

```rust
// Uses ~/.ssh/id_rsa by default
let manager = K3sManager::builder()
    .token("my-cluster-token")
    .build()?;

// Or specify a custom key path
let manager = K3sManager::builder()
    .token("my-cluster-token")
    .ssh_key_path("~/.ssh/my_custom_key")
    .build()?;
```
Password Authentication:

```rust
let manager = K3sManager::builder()
    .token("my-cluster-token")
    .ssh_username("admin")
    .ssh_password("secure-password")
    .build()?;
```
Custom SSH Username:

```rust
let manager = K3sManager::builder()
    .token("my-cluster-token")
    .ssh_username("ubuntu") // Default is "root"
    .build()?;
```
## API Overview

### K3sManager
The main struct for managing K3s clusters.
#### Methods

Cluster Setup:

- `builder()` - Create a new builder for `K3sManager`
- `install_master(server_ip, private_ip, force)` - Install K3s on a master node
- `join_worker(worker_ip, master_ip, worker_private_ip)` - Join a worker to the cluster
- `download_kubeconfig(master_ip, output_path)` - Download kubeconfig from the master
- `uninstall(server_ip, is_master)` - Uninstall K3s from a node
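For intuition, installing a master ultimately means running K3s's documented `get.k3s.io` installer on the remote host over SSH. The helper below is a hedged sketch of what that shell command can look like; it is not part of the crate's API, and the crate may construct the command differently.

```rust
// Illustrative only: builds a K3s server install command in the shape of
// the documented get.k3s.io installer. The crate's actual command may differ.
fn master_install_cmd(version: &str, token: &str, disable: &[&str]) -> String {
    // Each disabled component becomes a trailing `--disable <name>` flag
    let disable_flags: String = disable
        .iter()
        .map(|c| format!(" --disable {c}"))
        .collect();
    format!(
        "curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION={version} K3S_TOKEN={token} sh -s - server{disable_flags}"
    )
}

fn main() {
    println!("{}", master_install_cmd("v1.28.5+k3s1", "my-cluster-token", &["traefik"]));
}
```

Workers run the same installer in agent mode pointed at the master's URL, which is why `join_worker` needs both the worker and master IPs.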
Declarative Management:
- `reconcile(desired)` - Reconcile the cluster to match the desired configuration
- `plan_reconciliation(desired)` - Create a reconciliation plan without applying it
- `apply_reconciliation(desired, plan)` - Apply a reconciliation plan
- `remove_worker(master_ip, worker_ip)` - Gracefully remove a worker node
Status & Monitoring:
- `is_installed(server_ip)` - Check if K3s is installed on a node
- `get_nodes(master_ip)` - Get cluster node status
- `get_node_info_list(master_ip)` - Get a detailed node information list
- `get_cluster_state()` - Get the current cluster state with node information
- `refresh_cluster_state(master_ip)` - Refresh the cluster state from the master node
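Since status is queried from kubectl, a node comes back as a whitespace-separated row (`NAME STATUS ROLES AGE VERSION`). The parser below is a hypothetical sketch of how such a row can be split into fields; the crate's real node-info type is not shown here and may carry more data.

```rust
// Hypothetical sketch: split one row of `kubectl get nodes` output
// into its first three columns (name, status, roles).
fn parse_node_row(row: &str) -> Option<(String, String, String)> {
    let mut cols = row.split_whitespace();
    let name = cols.next()?.to_string();
    let status = cols.next()?.to_string();
    let roles = cols.next()?.to_string();
    Some((name, status, roles))
}

fn main() {
    let row = "k3s-master   Ready   control-plane,master   5d   v1.28.5+k3s1";
    if let Some((name, status, roles)) = parse_node_row(row) {
        println!("{name} is {status} ({roles})");
    }
}
```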
### K3sManagerBuilder

Builder pattern for creating `K3sManager` instances.

#### Methods
Cluster Configuration:
- `version(version)` - Set the K3s version (default: "v1.28.5+k3s1")
- `token(token)` - Set the cluster token (required)
- `disable(components)` - Set components to disable
SSH Configuration:
- `ssh_username(username)` - Set the SSH username (default: "root")
- `ssh_key_path(path)` - Set the SSH private key path (default: "~/.ssh/id_rsa")
- `ssh_password(password)` - Set the SSH password for password-based authentication
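Note that the default key path contains a `~`, which the OS does not expand; an SSH client has to resolve it against `$HOME` before reading the key file. A minimal sketch of that expansion, assuming the crate or `ssh-manager` does something equivalent internally:

```rust
use std::env;

// Illustrative helper: expand a leading "~/" against $HOME before opening
// the key file. Assumption only; the crate may handle this internally.
fn expand_tilde(path: &str) -> String {
    match (path.strip_prefix("~/"), env::var("HOME")) {
        (Some(rest), Ok(home)) => format!("{home}/{rest}"),
        // Absolute paths (or a missing $HOME) pass through unchanged
        _ => path.to_string(),
    }
}

fn main() {
    println!("{}", expand_tilde("~/.ssh/id_rsa"));
}
```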
Build:
- `build()` - Build the `K3sManager` instance
## Requirements
- Rust 1.70 or later
- SSH access to target nodes
- Root privileges on target nodes
- Target nodes running a Linux distribution supported by K3s
## License
Part of the LMRC Stack project. Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Acknowledgments
- Built on top of the ssh-manager crate for SSH functionality
- Uses K3s - Lightweight Kubernetes distribution