Preventing the Kubernetes scheduler from running all pods on the same node of the cluster

I have a Kubernetes cluster with 4 nodes and one master. I am trying to run 5 nginx pods spread across all nodes. Currently the scheduler sometimes puts all the pods on one machine and sometimes on another.

What happens if that node goes down while all my pods are running on it? We need to avoid this.

How can I make the scheduler spread the pods across the nodes in a round-robin fashion, so that if any node goes down, at least one node still has an nginx pod running?

Is this possible? If so, how can we achieve it?

+5
3 answers

The scheduler should spread your pods across nodes if your containers specify resource requests for the amount of memory and CPU they need. See http://kubernetes.io/docs/user-guide/compute-resources/
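For example, here is a minimal sketch of a Deployment whose container declares resource requests so the scheduler can balance pods by available node capacity. The request values (100m CPU, 128Mi memory) are illustrative assumptions, not recommendations; size them to your workload.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: 100m       # illustrative value; adjust to your workload
            memory: 128Mi   # illustrative value; adjust to your workload

With requests set, nodes that already host pods have less free capacity, so new pods tend to land on emptier nodes, though this alone does not guarantee an even spread.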

+2

I think the inter-pod anti-affinity feature will help you. Inter-pod anti-affinity lets you constrain which nodes your pod is eligible to be scheduled on, based on the labels of pods already running on a node. Here is an example.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx-service
  name: nginx-service
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-service
  template:
    metadata:
      labels:
        run: nginx-service
        service-type: nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: service-type
                  operator: In
                  values:
                  - nginx
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-service
        image: nginx:latest

Note that I am using preferredDuringSchedulingIgnoredDuringExecution because you have more pods than nodes.

For more information, refer to the inter-pod affinity and anti-affinity (beta feature) section of the following link: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

0

Use podAntiAffinity

Ref: Kubernetes in Action, Chapter 16: Advanced scheduling

podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution can be used to prevent two pods with the same label from being scheduled onto the same hostname. If you prefer a more relaxed constraint, use preferredDuringSchedulingIgnoredDuringExecution.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # hard requirement: do not schedule an "nginx" pod onto a node that already runs one
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname   # anti-affinity scope is the host
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx:latest

Kubelet --max-pods

You can specify the maximum number of pods per node in the kubelet configuration, so that in a node(s)-down scenario the remaining nodes are not saturated by the pods rescheduled from the failed node.
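A minimal sketch, assuming the kubelet reads a KubeletConfiguration file (the file path and the cap of 20 pods are illustrative assumptions; on older clusters the same limit is typically passed as the kubelet --max-pods flag):

# /var/lib/kubelet/config.yaml  (path is an assumption; it depends on how your kubelet is started)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 20   # illustrative cap; pick a value each node can actually host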

0
