TLDR: My debugging order is describe pod → logs -p → get events → exec. Most issues are in the events or previous logs.
Essential kubectl commands for debugging.
My debugging workflow
When something’s broken, I check in this order:
1. `kubectl describe pod/my-pod` - shows events, restart count, and why it failed
2. `kubectl logs my-pod -p` - previous container logs (the `-p` is crucial)
3. `kubectl get events --sort-by='.lastTimestamp'` - cluster-wide events
4. `kubectl exec -it my-pod -- sh` - only if I need to poke around inside
The most common issues I find: OOMKilled (check events), image pull failures (check events), config errors (check logs), and permission issues (check logs).
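To confirm an OOMKill without scanning the full describe output, you can read the previous container's termination reason directly with JSONPath. This is a sketch assuming a single-container pod; `my-pod` is a placeholder name:

```shell
# Print why the previous container instance terminated
# (e.g. OOMKilled, Error); empty if it never restarted.
kubectl get pod my-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```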
exec into containers
Get a shell inside a running container:
kubectl exec -it my-pod -- bash
If bash isn’t available, try sh:
kubectl exec -it my-pod -- sh
For multi-container pods, specify which one:
kubectl exec -it my-pod -c my-container -- bash
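If you don't know the container names, you can list them first. `my-pod` is a placeholder:

```shell
# Print the names of all containers defined in the pod spec
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'
```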
Run a single command without a shell:
kubectl exec my-pod -- ls /app
kubectl exec my-pod -- cat /etc/config/settings.yaml
port-forward
Forward a local port to a pod or deployment:
kubectl port-forward deployment/my-app 8080:8080
Forward to a service:
kubectl port-forward svc/my-service 8080:80
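Note that forwarding to a service tunnels to a single pod behind it rather than load-balancing across replicas, which is fine for debugging. A quick smoke test once the forward is up might look like this (the endpoint path is a placeholder):

```shell
kubectl port-forward svc/my-service 8080:80 &  # run the tunnel in the background
sleep 2                                        # give it a moment to bind
curl -s http://localhost:8080/                 # hits one pod behind my-service
kill %1                                        # stop the forward
```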
events
See what’s happening in the cluster:
kubectl get events --sort-by='.lastTimestamp'
Find OOMKilled pods:
kubectl get events --field-selector reason=OOMKilling
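When the cluster-wide stream is noisy, events can also be filtered to a single object. `my-pod` is a placeholder name:

```shell
# Only events whose involved object is the named pod
kubectl get events --field-selector involvedObject.name=my-pod
```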
Watch in real time:
kubectl get events --watch
Common event reasons: OOMKilling, FailedScheduling, FailedMount, Unhealthy, BackOff.
See also: viewing pod logs and monitoring with watch and top.
Tips
- Use `--` to separate kubectl args from the command in exec
- Alpine-based images usually only have `sh`, not `bash`
- Events are cleaned up after about an hour, so check them soon after something goes wrong
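The `--` separator matters most when the command you're running takes flags of its own; without it, kubectl tries to parse those flags itself. A small illustration, with `my-pod` as a placeholder:

```shell
# Everything after -- is passed to the container untouched,
# so -la goes to ls instead of being parsed by kubectl.
kubectl exec my-pod -- ls -la /app
```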
Further reading
- kubectl cheat sheet - official quick reference