What happened:
kube-state-metrics pod goes into CrashLoopBackOff on an Oracle Kubernetes Engine (OKE) cluster running Kubernetes v1.34.2.
The container starts successfully and initializes without errors, but exits shortly after with exit code 2. Logs show normal startup messages and successful API server communication, but no explicit fatal error before termination.
What you expected to happen:
kube-state-metrics should start successfully and continue running, exposing metrics as expected without crashing.
How to reproduce it (as minimally and precisely as possible):
Deploy kube-state-metrics using standard manifests (no custom args)
kubectl apply -f https://github.com/kubernetes/kube-state-metrics/releases/download/v2.17.0/kube-state-metrics.yaml
Observe pod status
kubectl get pods -n kube-system -w
Check logs (pod name is a placeholder)
kubectl logs -n kube-system <kube-state-metrics-pod>
Describe pod
kubectl describe pod -n kube-system <kube-state-metrics-pod>
Observed behavior:
Pod starts → Running briefly → crashes → enters CrashLoopBackOff
Container exits with code 2
Anything else we need to know?:
The same manifest works fine in other environments (non-OKE clusters).
No resource limits are set, and the exit code is 2 rather than 137, so the crash is not OOM-related.
RBAC is correctly configured and verified.
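One way to check the RBAC claim, sketched here under the assumption that the release manifest's default ServiceAccount (`kube-state-metrics` in `kube-system`) is in use:

```shell
# Impersonate the kube-state-metrics ServiceAccount and check the
# list permissions it needs; each command prints "yes" or "no".
# (ServiceAccount name/namespace are assumptions from the default manifest.)
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-system:kube-state-metrics
kubectl auth can-i list nodes \
  --as=system:serviceaccount:kube-system:kube-state-metrics
```

If either command prints "no", the ClusterRoleBinding from the manifest did not apply cleanly.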
Logs show:
Successful startup
In-cluster config usage
Successful communication with API server
Example log snippet:
I0427 ... Starting kube-state-metrics
I0427 ... Using all namespaces
I0427 ... Tested communication with server
No clear error is logged before exit.
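Since the live logs end without a fatal message, the termination record and the previous container's logs are the next place to look. A diagnostic sketch, assuming the default `app.kubernetes.io/name=kube-state-metrics` label from the release manifest:

```shell
# Find the pod by its standard label (label is an assumption from the manifest)
POD=$(kubectl -n kube-system get pods \
  -l app.kubernetes.io/name=kube-state-metrics \
  -o jsonpath='{.items[0].metadata.name}')

# Termination details recorded by the kubelet: exit code, reason, signal
kubectl -n kube-system get pod "$POD" \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Logs from the crashed (previous) container instance, which may
# contain a final line the current container's logs do not
kubectl -n kube-system logs "$POD" --previous
```

The `lastState.terminated` reason distinguishes an OOMKilled or Error exit from a liveness-probe kill, which `kubectl describe pod` events would also show.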
Environment:
kube-state-metrics version: v2.17.0
Kubernetes version (use kubectl version):
Client Version: v1.30.14
Server Version: v1.34.2
Cloud provider or hardware configuration: Oracle Cloud Infrastructure (OKE)
Other info:
No custom flags used
No resource constraints defined
Issue reproducible consistently on this cluster
Exit code observed: 2