Leeca Proxmox VE SDK (leeca_proxmox 0.3.0)

A modern, safe, and async‑first SDK for interacting with Proxmox Virtual Environment servers.


✨ Features

  • 🔒 Secure by default
    TLS 1.3, optional certificate validation, token‑based authentication.

  • ⚙️ Configurable validation
    Password strength, DNS resolution, reserved usernames – all opt‑in, off by default.

  • 🧱 Clean architecture
    Domain‑driven design with value objects, clear separation of concerns.

  • ⚡ Async/await
    Built on Tokio for high concurrency.

  • 🧾 Error handling
    Detailed, type‑safe errors with backtraces.

🚀 Getting Started

Prerequisites

  • Rust
  • Cargo
  • Tokio runtime

Installation

Add the dependency to your Cargo.toml:

cargo add leeca_proxmox

Or edit Cargo.toml manually:

[dependencies]
leeca_proxmox = "0.3"
tokio = { version = "1", features = ["full"] }

📖 Usage

Basic authentication example:

use leeca_proxmox::{ProxmoxClient, ProxmoxResult};

#[tokio::main]
async fn main() -> ProxmoxResult<()> {
    let mut client = ProxmoxClient::builder()
        .host("192.168.1.100")
        .port(8006)
        .credentials("leeca", "password", "pam")
        .secure(true)                     // HTTPS (default)
        .accept_invalid_certs(false)      // reject invalid certificates (default)
        .build()
        .await?;

    client.login().await?;
    println!("Authenticated! Ticket: {}", client.auth_token().unwrap().as_str());

    Ok(())
}

See the authentication example for a complete demonstration.

Enabling extra validation

By default, only basic format checks are performed. To enable additional checks:

let client = ProxmoxClient::builder()
    .host("...")
    .credentials("user", "pass", "pam")
    .enable_password_strength(3)          // require zxcvbn score ≥ 3
    .enable_dns_resolution()               // verify hostname resolves
    .block_reserved_usernames()            // reject root, admin, etc.
    .build()
    .await?;

Session Persistence

You can save the current authentication session to a file and reload it later, avoiding the need to log in again as long as the tokens are still valid.

let mut client = ProxmoxClient::builder()
    .host("192.168.1.100")
    .port(8006)
    .credentials("leeca", "password", "pam")
    .secure(true)
    .accept_invalid_certs(false)
    .build()
    .await?;

client.login().await?;

// Save session to a file
client.save_session_to_file("proxmox-session.json").await?;

// Later, create a new client and load the session
let mut new_client = ProxmoxClient::builder()
    .host("192.168.1.100")
    .port(8006)
    .credentials("dummy", "dummy", "pam") // credentials are still required but won't be used
    .secure(true)
    .accept_invalid_certs(false)
    .with_session(std::fs::File::open("proxmox-session.json")?)
    .await?
    .build()
    .await?;

// The new client is already authenticated
assert!(new_client.is_authenticated().await);

The session data contains the ticket and CSRF token with their creation timestamps. It is serialized as JSON. You should store it securely (e.g., encrypted at rest) because it grants access to the Proxmox API.
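On Unix systems, one simple way to follow this advice is to restrict the session file to owner-only access before the session JSON is written into it. The sketch below is a minimal, std-only illustration; the helper name is hypothetical and not part of the SDK:

```rust
use std::fs;
use std::io;
#[cfg(unix)]
use std::os::unix::fs::PermissionsExt;

/// Illustrative helper (not part of the SDK): create the session file
/// readable and writable by the owner only (0o600) before the session
/// JSON is written into it.
fn create_private_file(path: &str) -> io::Result<fs::File> {
    let file = fs::File::create(path)?;
    #[cfg(unix)]
    {
        let mut perms = file.metadata()?.permissions();
        perms.set_mode(0o600); // owner read/write only
        fs::set_permissions(path, perms)?;
    }
    Ok(file)
}

fn main() -> io::Result<()> {
    let path = "proxmox-session.json"; // same file name as in the example above
    let _file = create_private_file(path)?;
    #[cfg(unix)]
    {
        let mode = fs::metadata(path)?.permissions().mode();
        assert_eq!(mode & 0o777, 0o600); // nobody else can read the ticket
    }
    fs::remove_file(path)?;
    Ok(())
}
```

Creating the file with tight permissions first, then handing it to `save_session_to_file`, avoids a window where the ticket is world-readable.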

See the session_persistence example for a complete demonstration.

Discovering Cluster Resources

Once authenticated, you can retrieve a unified list of all resources in the cluster – including VMs, containers, storage, and nodes – using the cluster_resources() method. This is particularly useful for discovering which nodes contain specific VMs before performing node‑level operations.

let resources = client.cluster_resources().await?;
for resource in resources {
    match resource {
        ClusterResource::Qemu(vm) => {
            println!(
                "VM {} (ID: {}) on node {} is {}",
                vm.common.name.as_deref().unwrap_or("(unnamed)"),
                vm.vmid,
                vm.common.node,
                vm.common.status
            );
        }
        ClusterResource::Lxc(ct) => {
            println!(
                "Container {} (ID: {}) on node {} is {}",
                ct.common.name.as_deref().unwrap_or("(unnamed)"),
                ct.vmid,
                ct.common.node,
                ct.common.status
            );
        }
        ClusterResource::Storage(st) => {
            println!(
                "Storage '{}' on node {} ({} type) is {}",
                st.storage, st.common.node, st.storage_type, st.common.status
            );
        }
        ClusterResource::Node(node) => {
            println!(
                "Node {} is {} (load: {:?})",
                node.common.node, node.common.status, node.loadavg
            );
        }
    }
}

The method returns a Vec<ClusterResource> where each variant contains both common fields (like node, id, name, status) and type‑specific fields (e.g., vmid for VMs, storage for storage). This lets you programmatically inspect your Proxmox infrastructure without hard‑coding node names.
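For instance, once you have pulled (vmid, node) pairs out of the Qemu and Lxc variants, you can index them by vmid to find which node hosts a given VM. The sketch below uses only the standard library and hypothetical placement data standing in for the collected pairs:

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical (vmid, node) pairs, as they might be collected from
    // the ClusterResource::Qemu and ClusterResource::Lxc variants.
    let placements = [(100u32, "pve1"), (101, "pve2"), (200, "pve1")];

    // Index by vmid so later node-scoped calls (vm_status, start_vm, ...)
    // can look the node up instead of hard-coding it.
    let vm_to_node: HashMap<u32, &str> = placements.into_iter().collect();

    assert_eq!(vm_to_node.get(&100), Some(&"pve1"));
    println!("VM 100 runs on node {}", vm_to_node[&100]);
}
```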

See the cluster_resources example for a complete demonstration.

Node Management

Once authenticated, you can inspect the nodes in your cluster:

// List all nodes
let nodes = client.nodes().await?;
for node in nodes {
    println!("Node: {} (status: {})", node.node, node.status);
}

// Get detailed status of a specific node
let status = client.node_status("pve1").await?;
println!(
    "CPU: {:.2}%, IO Delay: {:.2}%",
    status.cpu * 100.0,
    status.wait.unwrap_or(0.0) * 100.0
);

// Get DNS configuration
let dns = client.node_dns("pve1").await?;
println!("DNS servers: {:?}", dns.servers);

See the node_management example for a complete demonstration.

VM Management

After authentication, you can manage QEMU virtual machines on any node:

// List all VMs on a node
let vms = client.vms("pve1").await?;
for vm in vms {
    println!("{} ({}): {}", vm.name, vm.vmid, vm.status);
}

// Get detailed status
let status = client.vm_status("pve1", 100).await?;
println!("CPU: {:.2}%", status.cpu.unwrap_or(0.0) * 100.0);

// Start a VM
let task = client.start_vm("pve1", 100).await?;
println!("Task ID: {}", task);

// Create a new VM
let params = CreateVmParams {
    vmid: 200,
    name: "my-vm".to_string(),
    memory: Some(4096),
    cores: Some(2),
    ..Default::default()
};
let task = client.create_vm("pve1", &params).await?;

See the vm_operations example for a complete demonstration.

See the examples directory for more.

🛠️ Development

# Install development dependencies
cargo install cargo-llvm-cov cargo-audit

# Run tests
cargo test --all-features

# Check code coverage
cargo llvm-cov --all-features --lcov --output-path lcov.info

# Run security audit
cargo audit

# Run linters
cargo clippy --all-targets --all-features
cargo fmt --all -- --check

📊 Project Status

See our CHANGELOG for version history and ROADMAP for future plans.

🛡️ Security

See our Security Policy for reporting vulnerabilities.

📄 License

Licensed under Apache License 2.0 – see the LICENSE file for details.

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

⚖️ Code of Conduct

Please read and follow our Code of Conduct.

📈 Versioning

This project follows Semantic Versioning.

⚠️ Note: APIs may change before 1.0.0.

🙏 Acknowledgments

  • Proxmox VE team for their excellent API documentation.
  • Rust community for the tools and crates.
  • All contributors.

Built with ❤️ by 4rkh4m and the Rust community.

⭐ Star · 🐛 Report Bug · ✨ Request Feature · 🛡️ Security Report