Network setup for Kubernetes

I am reading the Kubernetes guide “Getting Started from Scratch” and have reached the daunting Networking section, where it says:

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
* all containers can communicate with all other containers without NAT
* all nodes can communicate with all containers (and vice-versa) without NAT
* the IP that a container sees itself as is the same IP that others see it as

My first source of confusion: how is this different from the “standard” Docker model? How does Docker differ with respect to these three Kubernetes requirements?

The rest of the article summarizes how GCE achieves these requirements:

For the Google Compute Engine cluster configuration scripts, advanced routing is used to assign each VM a subnet (the default is /24, i.e. 254 IP addresses). Any traffic bound for that subnet is routed directly to the VM by the GCE network fabric. This is in addition to the “main” IP address assigned to the VM, which is NAT'ed for outbound internet access. A Linux bridge (called cbr0) is configured to exist on that subnet, and is passed to Docker's --bridge flag.
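As far as I can tell, the per-node setup described there would look roughly like this (the bridge name is from the quote; the subnet and exact flag spellings are my own guesses and vary by Docker version):

    # create the node's bridge on its assigned /24 (subnet made up for illustration)
    sudo brctl addbr cbr0
    sudo ip addr add 10.244.1.1/24 dev cbr0
    sudo ip link set cbr0 up
    # start the Docker daemon on that bridge instead of letting it create docker0
    sudo dockerd --bridge=cbr0 --ip-masq=false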

My question here is: which of the three requirements above does the quoted GCE paragraph address? More importantly, how does it achieve that requirement (or requirements)? I think I just don't understand how one subnet per VM achieves container-to-container communication, node-to-container communication, and a consistent IP address for each container.


And as a bonus/stretch question: why doesn't Marathon suffer from the same networking problems as Kubernetes does here?

1 answer

The standard Docker network configuration picks a container subnet for you from a set of defaults. As long as it doesn't conflict with any interface on your host, Docker is happy with it.

Docker then inserts an iptables MASQUERADE rule, which lets containers talk to the outside world via the host's default interface.
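Roughly, that rule looks like the following (172.17.0.0/16 is just the common default bridge subnet; check iptables on your own host to see the exact rule Docker installed):

    # NAT rule of the kind Docker installs for the bridge subnet
    sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    # inspect what is actually installed
    sudo iptables -t nat -L POSTROUTING -n -v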

The three Kubernetes requirements are violated because each host picks its container subnet based only on the addresses in use locally on that host, and because all outbound container traffic is NAT'ed through that MASQUERADE rule.

Consider the following Docker setup across three hosts (slightly contrived to highlight the problems):

    Host 1:
        eth0:        10.1.2.3
        docker0:     172.17.42.1/16
        container-A: 172.17.42.2

    Host 2:
        eth0:        10.1.2.4
        docker0:     172.17.42.1/16
        container-B: 172.17.42.2

    Host 3:
        eth0:        172.17.42.2
        docker0:     172.18.42.1

Say container-B wants to reach an HTTP service on port 80 of container-A. You can get Docker to publish container-A's port 80 on some port of Host 1. Then container-B can make a request to 10.1.2.3 on that published port. The request will arrive on container-A's port 80, but it will appear to come from some random port on 10.1.2.4 because of the NAT on the way out of Host 2. That breaks both the "all containers can communicate without NAT" requirement and the "the IP a container sees itself as is the same IP that others see" requirement. Try to reach container-A's service directly from Host 2 and you break the "all nodes can communicate with all containers without NAT" requirement as well.
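To make that concrete, here is a rough sketch of the scenario (the image name and the published host port are arbitrary, chosen only for illustration):

    # on Host 1: publish container-A's port 80 on a host port
    docker run -d --name container-A -p 8080:80 nginx

    # from container-B on Host 2: go via Host 1's eth0 address and the published port
    curl http://10.1.2.3:8080/
    # container-A sees the client address as 10.1.2.4 (Host 2's eth0),
    # not container-B's own 172.17.42.2, because of the MASQUERADE on the way out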

Now, if any of these containers wants to talk to Host 3, it is out of luck, because Host 3's eth0 address (172.17.42.2) falls inside the docker0 subnet on Hosts 1 and 2, so that traffic never leaves the local bridge (which is simply a general argument for being careful with automatically assigned docker0 subnets).
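If all you need is to avoid that particular collision, you can pin the bridge subnet yourself instead of letting Docker pick one (the value below is arbitrary):

    # choose the docker0 subnet explicitly so it cannot shadow a real network
    sudo dockerd --bip=172.30.0.1/24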

The Kubernetes approach on GCE / AWS / Flannel / ... is to assign each host VM a subnet carved out of a flat private network, so that no container subnet overlaps with the VM addresses or with any other container subnet. This lets containers and VMs communicate without NAT.
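On GCE, for example, that amounts to one cloud route per node, pointing that node's container subnet straight at the node VM; something along these lines (the route name, subnet range, instance name, and zone are all made up):

    # route traffic for node-1's container subnet directly to the node-1 VM
    gcloud compute routes create node-1-containers \
        --destination-range=10.244.1.0/24 \
        --next-hop-instance=node-1 \
        --next-hop-instance-zone=us-central1-a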

