How to fix weave-net CrashLoopBackOff for second node?

I have 2 VM nodes. Both can reach each other either by host name (via /etc/hosts) or by IP address. One of them was set up with kubeadm as a master, the other as a worker node. Following the instructions ( http://kubernetes.io/docs/getting-started-guides/kubeadm/ ), I added weave-net. The list of pods looks like this:

    vagrant@vm-master:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
    kube-system   etcd-vm-master                          1/1       Running            0          3m
    kube-system   kube-apiserver-vm-master                1/1       Running            0          5m
    kube-system   kube-controller-manager-vm-master       1/1       Running            0          4m
    kube-system   kube-discovery-982812725-x2j8y          1/1       Running            0          4m
    kube-system   kube-dns-2247936740-5pu0l               3/3       Running            0          4m
    kube-system   kube-proxy-amd64-ail86                  1/1       Running            0          4m
    kube-system   kube-proxy-amd64-oxxnc                  1/1       Running            0          2m
    kube-system   kube-scheduler-vm-master                1/1       Running            0          4m
    kube-system   kubernetes-dashboard-1655269645-0swts   1/1       Running            0          4m
    kube-system   weave-net-7euqt                         2/2       Running            0          4m
    kube-system   weave-net-baao6                         1/2       CrashLoopBackOff   2          2m

CrashLoopBackOff appears on every worker node. I spent some time playing with the network interfaces, but the network itself seems to be in order. I found a similar question, where the answer advised looking at the logs, but there was no follow-up. So here are the logs:

    vagrant@vm-master:~$ kubectl logs weave-net-baao6 -c weave --namespace=kube-system
    2016-10-05 10:48:01.350290 I | error contacting APIServer: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: getsockopt: connection refused; trying with blank env vars
    2016-10-05 10:48:01.351122 I | error contacting APIServer: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
    Failed to get peers
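To confirm it really is a connectivity problem, the cluster IP can be probed directly from the failing node. A minimal check, assuming curl is installed (-k skips certificate verification, since only the TCP connection matters here):

    # Expect the same failure as in the weave log above if the node
    # cannot reach the API server at its cluster IP.
    curl -k https://100.64.0.1:443/api/v1/nodes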

What am I doing wrong? Where do I go from here?

+6
3 answers

I also ran into the same problem. Weave seems to want to connect to the Kubernetes cluster IP address, which is virtual. Just run this to find the cluster IP: kubectl get svc . It should give you something like this:

    $ kubectl get svc
    NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   100.64.0.1   <none>        443/TCP   2d

Weave takes this IP address and tries to connect to it, but the worker nodes know nothing about it. A simple route solves this problem. On all worker nodes, run:

 route add 100.64.0.1 gw <your real master IP> 
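If you do not want to look up the addresses by hand, both steps can be combined. A minimal sketch, assuming kubectl is configured on the node you run it on, and with 192.168.33.10 standing in for your real master IP:

    # Read the virtual cluster IP of the kubernetes service and route it
    # through the master node.
    MASTER_IP=192.168.33.10
    CLUSTER_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
    sudo route add $CLUSTER_IP gw $MASTER_IP

Note that a route added this way does not survive a reboot; make it persistent via your distribution's network configuration if needed.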
+5

This also happens with a single-node installation. I tried several things, such as re-applying the configuration and restarting, but the most stable approach at the moment is to tear the cluster down completely (as described in the docs) and bring it up again.

I use these scripts to restart the cluster:

down.sh

    #!/bin/bash
    # Stop the kubelet, remove all containers, unmount the kubelet's
    # tmpfs mounts, and wipe all cluster state.
    systemctl stop kubelet;
    docker rm -f -v $(docker ps -q);
    find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
    rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;

up.sh

    #!/bin/bash
    # Start the kubelet, initialise a fresh cluster, and install the
    # Weave Net add-on.
    systemctl start kubelet
    kubeadm init
    # kubectl taint nodes --all dedicated- # single node!
    kubectl create -f https://git.io/weave-kube
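A possible way to run them, assuming both scripts sit in the current directory and are executable (run on the master as root):

    # Tear the cluster down completely, then bring it back up.
    sudo ./down.sh && sudo ./up.sh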

edit: I would also give other pod networks, such as Calico, a try, if this is a weave-related problem.

+2

The most common reasons for this are:

- the presence of a firewall (for example, firewalld on CentOS) — see the sketch after this list
- network misconfiguration (for example, the default NAT interface on VirtualBox)
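If firewalld is what blocks the weave-net pods, opening Weave Net's peer-to-peer ports between the nodes should help. A minimal sketch for CentOS (Weave Net uses TCP 6783 plus UDP 6783-6784; adjust the firewalld zone to your setup):

    # Open Weave Net's control and data ports in firewalld.
    sudo firewall-cmd --permanent --add-port=6783/tcp
    sudo firewall-cmd --permanent --add-port=6783-6784/udp
    # Apply the permanent rules to the running configuration.
    sudo firewall-cmd --reload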

Currently, kubeadm is still alpha, and this is one of the problems that many alpha testers have already reported. We are looking into it and documenting the most common problems; that documentation will be ready closer to the beta release.

A working VirtualBox + Vagrant + Ansible reference implementation exists for Ubuntu and CentOS, which provides solutions for the firewall, SELinux, and VirtualBox NAT problems.
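For the VirtualBox NAT case specifically, a common workaround is to make kubeadm advertise the master's private (host-only) address instead of the NAT interface it picks by default. A sketch, assuming 192.168.33.10 is your master's private IP (the flag below is the one this kubeadm alpha used; later releases renamed it to --apiserver-advertise-address):

    # Advertise the private interface so workers do not try to reach the
    # API server through the unreachable NAT address (10.0.2.15).
    kubeadm init --api-advertise-addresses=192.168.33.10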

+2
