I once deployed a config change that crashed pods on startup. The rollout was halfway through before I noticed - half the replicas were healthy, half were in CrashLoopBackOff. kubectl rollout undo had us back to the previous version in seconds. That’s when I learnt to always watch rollouts and know how to revert quickly.
My deployment workflow
At work, the recommended approach is GitOps (ArgoCD), so most changes go through git. But for quick debugging or emergencies:
- Check current state:
kubectl get deploy/my-app -o yaml
- Make the change (scale, restart, etc.)
- Watch it roll out:
kubectl rollout status deploy/my-app
- If it breaks:
kubectl rollout undo deploy/my-app
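By default, undo goes back to the previous revision, but you can target an older one. A quick sketch, assuming the deployment is still called my-app:

kubectl rollout history deploy/my-app                 # list recorded revisions
kubectl rollout history deploy/my-app --revision=2    # inspect what a specific revision contained
kubectl rollout undo deploy/my-app --to-revision=2    # roll back to that revision explicitly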
That rollback has saved me more than once - it's the fastest way back to a known-good state when a change goes sideways.
Restart a deployment
Rolling restart (graceful):
kubectl rollout restart deploy/my-app
Check the status:
kubectl rollout status deploy/my-app
Undo if something goes wrong:
kubectl rollout undo deploy/my-app
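When I want the terminal to block until the restart actually finishes, I chain the two. A small sketch, assuming my-app and a bash-like shell:

# restart, then wait for the rollout to complete (give up after 5 minutes)
kubectl rollout restart deploy/my-app && kubectl rollout status deploy/my-app --timeout=5m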
Scale up or down
kubectl scale deployment my-app --replicas=3
Scale to zero (quick way to stop everything):
kubectl scale deployment my-app --replicas=0
Scale multiple deployments:
kubectl scale deployment/app1 deployment/app2 deployment/app3 --replicas=2
Or use labels:
kubectl scale deployment -l app=backend --replicas=0
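One gotcha with scaling to zero: it's easy to forget what the replica count was before you did it. I note it down first - a sketch, assuming the deployment is my-app:

# record the current replica count before touching anything
kubectl get deploy my-app -o jsonpath='{.spec.replicas}'
# stop everything
kubectl scale deployment my-app --replicas=0
# later, restore the count you noted down
kubectl scale deployment my-app --replicas=3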
When to use which
- rollout restart: Graceful rolling update, pods are replaced one by one
- scale to 0 then back up: Kills everything immediately; use it when pods are stuck or you need a clean slate (see the sketch below)
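The clean-slate version looks something like this - a sketch that assumes my-app's pods carry an app=my-app label (check yours):

kubectl scale deployment my-app --replicas=0
# wait until every old pod is actually gone before bringing things back
kubectl wait --for=delete pod -l app=my-app --timeout=60s
kubectl scale deployment my-app --replicas=3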
See what’s running
kubectl get pods -o wide
The -o wide flag shows which node each pod is running on.
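During a rollout I keep a watch running in a second terminal so I can see old pods terminate and new ones come up. Assuming the pods carry an app=my-app label:

kubectl get pods -l app=my-app -o wide --watch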
When troubleshooting deployments, see kubectl debugging commands.
Further reading
- Kubernetes deployment strategies - rolling update, recreate, and more