pub struct Plan {
pub base: f64,
pub with_merge: f64,
pub with_headroom: f64,
pub buffer_total: f64,
pub total_cluster: f64,
pub per_node: f64,
pub disk_per_node: f64,
pub target_utilization: f64,
pub nodes: u32,
pub primaries: u32,
pub replicas: u32,
pub shard_size_gb: f64,
pub overhead_merge: f64,
pub headroom: f64,
pub buffer_per_node_gb: Option<f64>,
}
Represents the computed capacity plan for an Elasticsearch cluster.
All values are expressed in gigabytes (GB, base-10). This struct is returned by the capacity calculation function and provides both cluster-level and per-node estimates.
Fields
base: f64
Total data size for all primary and replica shards combined.
Formula: primaries * shard_size_gb * (1 + replicas)
with_merge: f64
Base size plus Lucene merge overhead.
Formula: base * (1 + overhead_merge)
with_headroom: f64
Size after applying headroom for watermarks and ingestion bursts.
Formula: with_merge * (1 + headroom)
buffer_total: f64
Total relocation/rebalancing buffer for all nodes combined.
Formula: buffer_per_node_gb * nodes
total_cluster: f64
Total cluster disk requirement, including overhead, headroom, and buffer.
Formula: with_headroom + buffer_total
per_node: f64
Recommended data size per node, averaged across the cluster.
Formula: total_cluster / nodes
disk_per_node: f64
Recommended physical disk size per node to stay below the target utilization.
Formula: per_node / target_utilization
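The derived fields above form a single chain, each formula feeding the next. A minimal numeric sketch (every input value here is a made-up example, not a default of this crate) traces the chain end to end:

```rust
// Worked example of the formula chain with assumed inputs:
// 3 primaries, 1 replica, 50 GB shards, 3 nodes, 20% merge overhead,
// 30% headroom, 0.75 target utilization, buffer defaulting to shard size.
fn disk_per_node_example() -> f64 {
    let (primaries, replicas, nodes) = (3.0_f64, 1.0_f64, 3.0_f64);
    let shard_size_gb = 50.0;
    let (overhead_merge, headroom, target_utilization) = (0.2, 0.3, 0.75);
    let buffer_per_node_gb = shard_size_gb; // None case: fall back to shard size

    let base = primaries * shard_size_gb * (1.0 + replicas); // 300.0
    let with_merge = base * (1.0 + overhead_merge);          // 360.0
    let with_headroom = with_merge * (1.0 + headroom);       // 468.0
    let buffer_total = buffer_per_node_gb * nodes;           // 150.0
    let total_cluster = with_headroom + buffer_total;        // 618.0
    let per_node = total_cluster / nodes;                    // 206.0
    per_node / target_utilization                            // ≈ 274.67 GB of disk per node
}

fn main() {
    println!("disk per node: {:.2} GB", disk_per_node_example());
}
```

Note how the buffer is added after the multiplicative overheads, so it is not inflated by the merge or headroom factors.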
target_utilization: f64
Target maximum disk utilization ratio (e.g. 0.75 = 75%).
nodes: u32
Number of data nodes in the cluster.
primaries: u32
Total number of primary shards.
replicas: u32
Number of replica shards per primary.
shard_size_gb: f64
Average shard size in GB (base-10).
overhead_merge: f64
Merge overhead fraction (e.g. 0.2 = 20%).
headroom: f64
Headroom fraction (e.g. 0.3 = 30%).
buffer_per_node_gb: Option<f64>
Optional relocation buffer per node in GB (defaults to shard_size_gb if None).
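Because buffer_per_node_gb is optional, a caller computing buffer_total must apply the documented fallback itself. A minimal sketch using Option::unwrap_or (the helper name resolve_buffer is an assumption, not part of this crate's API):

```rust
// Hypothetical helper (not part of the crate): applies the documented
// default, using the average shard size when no buffer was configured.
fn resolve_buffer(buffer_per_node_gb: Option<f64>, shard_size_gb: f64) -> f64 {
    buffer_per_node_gb.unwrap_or(shard_size_gb)
}

fn main() {
    // An explicit buffer is used as-is; None falls back to shard_size_gb.
    assert_eq!(resolve_buffer(Some(25.0), 50.0), 25.0);
    assert_eq!(resolve_buffer(None, 50.0), 50.0);
}
```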