How to gracefully remove a node from Kubernetes?

I want to increase or decrease the number of machines in order to increase or decrease the number of nodes in my Kubernetes cluster. When I add a machine, I can register it with Kubernetes successfully, and a new node is created as expected. However, it is not clear to me how to gracefully shut down a machine later. A good workflow would be:

  • Mark the node associated with the machine I am about to shut down as unschedulable;
  • Start the pod(s) running on that node on other node(s);
  • Gracefully terminate the pod(s) running on that node;
  • Delete the node.
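For reference, the workflow above maps roughly onto kubectl subcommands as follows (a sketch; `<node-name>` is a placeholder for the actual node name):

```shell
kubectl cordon <node-name>                      # 1. mark the node unschedulable
kubectl drain <node-name> --ignore-daemonsets   # 2+3. evict the pods running on it
kubectl delete node <node-name>                 # 4. remove the node object
```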

If I understood correctly, even kubectl drain (discussion) does not do what I expect, since it does not start the pods elsewhere before removing them (it relies on a replication controller to restart the pods afterwards, which can lead to downtime). Did I miss something?
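One common way to limit that downtime is a PodDisruptionBudget: drain uses the eviction API, which refuses to evict a pod when doing so would violate the budget, so evictions proceed only as replacement pods become ready elsewhere. A minimal sketch, assuming a multi-replica Deployment whose pods carry the label app=web (the name and label here are illustrative, not from the question):

```shell
# Keep at least 2 "web" replicas available at all times during a drain.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
EOF
```

With this in place, `kubectl drain` will pause (and retry) rather than evict past the budget, at the cost of a slower drain.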

What is the proper way to shut down the machine?

2 answers

List the nodes and get the <node-name> that you want to drain or remove from the cluster:

 kubectl get nodes 

1) First, drain the node

 kubectl drain <node-name> 

You may need to ignore daemon sets and delete local data on the machine:

 kubectl drain <node-name> --ignore-daemonsets --delete-local-data 
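To verify the drain worked, you can list what is still scheduled on that node; after a successful drain only daemon-set pods should remain (a sketch, with `<node-name>` as a placeholder):

```shell
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=<node-name> -o wide
```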

2) Edit the instance group for the nodes (only if you use kops)

 kops edit ig nodes 

Decrease the MIN and MAX size by 1, then just save the file (nothing else needs to be done).
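If you want to check the current sizes before editing, kops can print the instance-group spec non-interactively; the fields to change are spec.minSize and spec.maxSize:

```shell
kops get ig nodes -o yaml
```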

You may still see some pods on the drained node that belong to daemon sets, such as the network plugin, fluentd for logs, kubedns/coredns, etc.

3) Finally, delete the node

 kubectl delete node <node-name> 

4) Finally, update the cluster state that kops stores in S3:

 kops update cluster --yes 

Raphael, kubectl drain works as you describe. There is some downtime, just as if the machine had crashed.

Can you describe your setup? How many replicas do you have, and are you provisioned in such a way that you cannot handle the downtime of a single replica?

