# kubectl-view-allocations

`kubectl` plugin that lists allocations for resources (cpu, memory, gpu,...) as defined in the manifests of nodes and running pods. It doesn't list usage like `kubectl top`. It can group results by namespace, node and pod, and filter them by resource name.
## Columns displayed

- `Requested`: quantity of resources requested by the containers in the pods' manifests, summed per group (pod, namespace, node where the containers run), with the percentage of the group's allocatable resources that the request represents.
- `Limit`: maximum quantity of resources (limit) requestable by the containers in the pods' manifests, summed per group as above, with the percentage of the group's allocatable resources that the limit represents.
- `Allocatable`: allocatable resources defined (or detected) on nodes.
- `Free`: `Allocatable - max(Limit, Requested)`.
- `Utilization`: quantity of resources (cpu & memory only) used, as reported by the Metrics API. Disabled by default; metrics-server is optional and should be set up in the cluster.
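The `Free` formula can be illustrated with a minimal sketch (plain `f64` quantities here; this is not the plugin's actual code, which handles units like `500m` or `1Gi`):

```rust
// Sketch of the `Free` column: Allocatable - max(Limit, Requested).
fn free(allocatable: f64, requested: f64, limit: f64) -> f64 {
    allocatable - requested.max(limit)
}

fn main() {
    // Matches the gpu example below: 14.0 allocatable, 10.0 requested,
    // 10.0 limit across the cluster leaves 4.0 free.
    println!("{}", free(14.0, 10.0, 10.0)); // 4
    // A node with everything requested has nothing free.
    println!("{}", free(2.0, 2.0, 2.0)); // 0
}
```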
## Install

### Via binary

Download from the github releases, or use the install script.

### Via krew (kubectl plugin manager)

### Via cargo
### As lib in Cargo.toml

If you want to embed some functions or structs of the plugin into another rust program:

```toml
[dependencies]
kubectl-view-allocations = { version = "0.14", default-features = false }

[features]
default = ["k8s-openapi/v1_20"]
```
## Usage

### Show help

```txt
kubectl-view-allocations -h
kubectl-view-allocations 0.13.0
https://github.com/davidB/kubectl-view-allocations
kubectl plugin to list allocations (cpu, memory, gpu,... X requested, limit, allocatable,...)

USAGE:
    kubectl-view-allocations [FLAGS] [OPTIONS]

FLAGS:
    -h, --help           Prints help information
    -z, --show-zero      Show lines with zero requested and zero limit and zero allocatable
    -u, --utilization    Retrieve utilization (for cpu and memory), require to have metrics-server
                         https://github.com/kubernetes-sigs/metrics-server
    -V, --version        Prints version information

OPTIONS:
        --context <context>                The name of the kubeconfig context to use
    -g, --group-by <group-by>...           Group information hierarchically (default: -g resource -g node -g pod)
                                           [possible values: resource, node, pod, namespace]
    -n, --namespace <namespace>            Show only pods from this namespace
    -o, --output <output>                  Output format [default: table] [possible values: table, csv]
    -r, --resource-name <resource-name>... Filter resources shown by name(s), by default all resources are listed
```
### Show gpu allocation

```txt
> kubectl-view-allocations -r gpu

 Resource                  Requested       Limit  Allocatable  Free
  nvidia.com/gpu          (71%) 10.0  (71%) 10.0         14.0   4.0
  ├─ node-gpu1              (0%) __     (0%) __           2.0   2.0
  ├─ node-gpu2              (0%) __     (0%) __           2.0   2.0
  ├─ node-gpu3            (100%) 2.0  (100%) 2.0          2.0    __
  │  └─ fah-gpu-cpu-d29sc        2.0         2.0           __    __
  ├─ node-gpu4            (100%) 2.0  (100%) 2.0          2.0    __
  │  └─ fah-gpu-cpu-hkg59        2.0         2.0           __    __
  ├─ node-gpu5            (100%) 2.0  (100%) 2.0          2.0    __
  │  └─ fah-gpu-cpu-nw9fc        2.0         2.0           __    __
  ├─ node-gpu6            (100%) 2.0  (100%) 2.0          2.0    __
  │  └─ fah-gpu-cpu-gtwsf        2.0         2.0           __    __
  └─ node-gpu7            (100%) 2.0  (100%) 2.0          2.0    __
     └─ fah-gpu-cpu-x7zfb        2.0         2.0           __    __
```
### Overview only

```txt
> kubectl-view-allocations
```

_(example output omitted: same table layout as the gpu example above, for all resources)_
### Show utilization

- Utilization information is retrieved from metrics-server (which should be set up on your cluster).
- Only cpu and memory utilization are reported.

```txt
> kubectl-view-allocations -u
```

_(example output omitted: same table layout as the gpu example above, with an additional Utilization column)_
### Group by namespaces

```txt
> kubectl-view-allocations -g namespace
```

_(example output omitted: same table layout as the gpu example above, grouped by namespace)_
### Show as csv

In this case, values are expanded as floats (with 2 decimals). It can be combined with the "--group-by" option.
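A downstream script can then consume that output. A minimal sketch, assuming a header row matching the table columns (`Resource,Requested,Limit,Allocatable,Free`) and hypothetical sample values:

```rust
// Parse hypothetical CSV output from `kubectl-view-allocations -o csv`.
// The column layout is assumed; no quoting or escaping is handled here.
fn main() {
    let csv = "Resource,Requested,Limit,Allocatable,Free\n\
               cpu,2.50,4.00,16.00,12.00\n\
               memory,1.25,2.00,8.00,6.00";

    // Skip the header, then read the Free column of each row.
    for line in csv.lines().skip(1) {
        let fields: Vec<&str> = line.split(',').collect();
        let free: f64 = fields[4].parse().unwrap();
        println!("{} free: {}", fields[0], free);
    }
    // prints:
    // cpu free: 12
    // memory free: 6
}
```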
## Alternatives & Similar tools

- See the discussion [Need simple kubectl command to see cluster resource usage · Issue #17512 · kubernetes/kubernetes](https://github.com/kubernetes/kubernetes/issues/17512)
- For CPU & Memory only:
  - robscott/kube-capacity: a simple CLI that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster.
  - hjacobs/kube-resource-report: report Kubernetes cluster and pod resource requests vs usage and generate static HTML.
  - etopeter/kubectl-view-utilization: kubectl plugin to show cluster CPU and Memory requests utilization.
- For CPU & Memory utilization only:
  - `kubectl top pods`
  - LeastAuthority/kubetop: a top(1)-like tool for Kubernetes.