I was trying to schedule pods on specific nodes. Here’s the quick reference.

Real use cases

Common use cases for taints include:

  • GPU nodes - gpu=true:NoSchedule so only ML workloads land there
  • Spot/preemptible nodes - cloud.google.com/gke-spot=true:NoSchedule for batch jobs that can handle interruption
  • High-memory nodes - workload=data-processing:NoSchedule for memory-intensive jobs

The pattern is: taint nodes with special characteristics, then add tolerations to pods that should use them.
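For example, the GPU case from the list above comes down to a single taint on the node (the node name here is hypothetical):

```shell
# Keep general workloads off the GPU node; only pods that
# tolerate gpu=true:NoSchedule can be scheduled there.
kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
```

The ML pods then carry a matching toleration, as shown in the Tolerations section.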

Node selectors

The simplest way to pick nodes by label:

```yaml
spec:
  nodeSelector:
    workload: backend
```
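This only matches if the label actually exists on a node. Assuming a node named my-node (hypothetical), you'd set it with:

```shell
# Add the workload=backend label so the nodeSelector above can match
kubectl label nodes my-node workload=backend
```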

Tolerations

Allow pods to run on tainted nodes:

```yaml
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "backend"
      effect: "NoSchedule"
```

Tolerate all taints (use carefully):

```yaml
tolerations:
  - operator: "Exists"
```

Check node taints

```shell
kubectl describe node my-node | grep Taints
```

Or:

```shell
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```

Add a taint

```shell
kubectl taint nodes my-node dedicated=backend:NoSchedule
```

Remove a taint

```shell
kubectl taint nodes my-node dedicated=backend:NoSchedule-
```

The - at the end removes it. The key and effect have to match the existing taint; the value is optional when deleting.
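If you don't remember the exact value or effect, you can also remove by key alone:

```shell
# Removes every taint on my-node with the key "dedicated",
# regardless of value or effect
kubectl taint nodes my-node dedicated-
```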

Common effects

  • NoSchedule - don’t schedule new pods without a matching toleration; existing pods stay
  • PreferNoSchedule - the scheduler tries to avoid the node but may still use it
  • NoExecute - don’t schedule new pods, and evict pods already running there that don’t tolerate the taint
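NoExecute tolerations can also set tolerationSeconds, which lets a pod stay on the node for that long after the taint appears before being evicted. Kubernetes uses this itself with the built-in node.kubernetes.io/not-ready taint:

```yaml
tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300  # evicted 5 minutes after the node goes NotReady
```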

Common mistakes

  • Forgetting the toleration - The pod sits in Pending; check kubectl describe pod for “node(s) had taint” messages
  • Wrong effect - The toleration’s effect must match the taint’s effect exactly (an empty effect matches any effect for that key)
  • Toleration without nodeSelector - A toleration only allows the pod onto tainted nodes; it doesn’t pin it there, so it can still land on regular nodes. You usually want both.
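The fix for that last one is to pair the two, so the pod both tolerates the dedicated nodes and is pinned to them (label and taint values follow the examples above; this assumes the node carries a dedicated=backend label in addition to the taint):

```yaml
spec:
  nodeSelector:
    dedicated: backend       # pin to the labeled nodes
  tolerations:
    - key: "dedicated"       # allow scheduling despite the taint
      operator: "Equal"
      value: "backend"
      effect: "NoSchedule"
```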

Further reading