```shell
kubectl config set-context prod-payment \
  --cluster=prod-us-east \
  --user=prod-admin \
  --namespace=payment
```

kubectl creates a new context entry named `prod-payment` in your kubeconfig. It does not switch to it yet (for that, you need `kubectl config use-context`).

## Use Case 2: The "Quick Fix" (Modifying the Current Context)

This is where the magic happens for daily operations. Let's say you are currently in the `frontend` namespace, but you need to run a database migration in the `db-migration` namespace. You don't want to create a permanent new context.
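For that quick fix, `kubectl config set-context` accepts a `--current` flag that retargets the active context instead of creating a new entry. A minimal sketch, reusing the `db-migration` namespace from the scenario above:

```shell
# Point the *active* context at the db-migration namespace.
# --current means: modify whatever context is selected right now,
# rather than creating a new kubeconfig entry.
kubectl config set-context --current --namespace=db-migration

# Every subsequent command now defaults to that namespace:
kubectl get pods    # same as: kubectl get pods -n db-migration
```

When the migration is done, remember that this change is sticky: it stays in your kubeconfig until you change it back.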
Now, when you run `kubectl config use-context prod-payment`, your terminal turns into a warning siren. Did you just modify your current context with the wrong namespace and forget what the original was? Don't panic: the context still stores the original cluster and user information, so you can reset just the namespace with `kubectl config set-context`.
Add this to your `~/.zshrc` or `~/.bashrc`:
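The snippet itself did not survive formatting, so here is a minimal stand-in under my own assumptions (the helper name `kube_prompt` is invented; only the `kubectl config` subcommands it calls are standard) that puts the active context and namespace in your prompt:

```shell
# Show [context:namespace] before every prompt so a prod context
# is impossible to miss. kube_prompt is a hypothetical helper name.
kube_prompt() {
  local ctx ns
  ctx=$(kubectl config current-context 2>/dev/null) || return 0
  ns=$(kubectl config view --minify --output 'jsonpath={..namespace}' 2>/dev/null)
  printf '[%s:%s]' "$ctx" "${ns:-default}"
}

# bash: rebuild PS1 before each prompt; zsh users would hook precmd instead.
PROMPT_COMMAND='PS1="$(kube_prompt)\$ "'
```

With no kubeconfig present the helper prints nothing, so it degrades gracefully on machines without cluster access.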
Master this command. Alias it. Love it.
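As a sketch of what "alias it" could look like (the names `kns` and `kctx` are my own shorthand, not a standard):

```shell
# kns <namespace>  -- pin the current context to a namespace
alias kns='kubectl config set-context --current --namespace'

# kctx <context>   -- switch contexts
alias kctx='kubectl config use-context'

# Usage: kns db-migration
#        kctx prod-payment
```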
```shell
# Unset the namespace override
kubectl config set-context --current --namespace=
```

That empty string removes the namespace pinning, reverting to the default namespace defined in the original context (usually `default`).

Why does this matter so much? Because context mistakes are silent. You run `kubectl get pods`. Everything looks healthy. You scale a deployment. You check the logs. Only then do you realize: you just blew up the staging environment while trying to debug production. Or worse, you deleted a critical ConfigMap from the wrong bank of servers.

A fintech engineer once spent three hours debugging why a new pod wasn't appearing. He ran `kubectl get pods` repeatedly. Nothing. He restarted the deployment. Nothing. He yelled at the cloud provider. Finally, he ran `kubectl config get-contexts`. He was in cluster-2 (staging) but his mind was in cluster-1 (production). The pod was running perfectly. He was just looking at the wrong wall.
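The moral generalizes into a pre-flight habit. Before anything destructive, a quick sanity check built from standard `kubectl config` subcommands might look like:

```shell
# Where will kubectl aim? Run this before anything destructive.
kubectl config current-context                  # name of the active context
kubectl config get-contexts                     # '*' marks the active row
kubectl config view --minify \
  --output 'jsonpath={..namespace}'; echo       # pinned namespace (empty = default)
```

Three seconds of typing versus three hours of staring at the wrong wall.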