Kubernetes: unable to mount volumes for pod, timeout expired waiting for volumes to attach/mount

I am trying to mount an NFS volume for my pods, but without success.

I have a server that exports an NFS share. When I mount it from another running server,

    sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

works fine.

Another thing worth mentioning: if I remove the volume from the deployment, the pod runs fine. I can exec into it and telnet to 10.0.0.4 on ports 111 and 2049 successfully, so it really does not look like a communication problem.
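Roughly, those checks from inside the container looked like this (a sketch; <pod-name> is a placeholder and it assumes telnet is available in the image):

    kubectl exec -it <pod-name> -- /bin/sh
    # inside the container:
    telnet 10.0.0.4 111     # rpcbind / portmapper
    telnet 10.0.0.4 2049    # nfsd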

and:

    showmount -e 10.0.0.4
    Export list for 10.0.0.4:
    /export/drive 10.0.0.0/16
    /export       10.0.0.0/16
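For context, the corresponding /etc/exports on the server would look roughly like this (the export options are assumed for illustration only; they are not part of the question):

    # /etc/exports on 10.0.0.4 (options assumed, shown only to illustrate the export list above)
    /export        10.0.0.0/16(rw,sync,no_subtree_check)
    /export/drive  10.0.0.0/16(rw,sync,no_subtree_check)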

So I can assume there is no network or configuration problem between the server and the client (I am on AWS, and the server I tested from is in the same security group as the k8s minions).

PS: the NFS server is a plain Ubuntu machine with a 50 GB drive.

Kubernetes v1.3.4

So I start by creating my PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.4
        path: "/export"

And my PVC:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nfs-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 50Gi

This is how kubectl describe reports them:

    Name:            nfs
    Labels:          <none>
    Status:          Bound
    Claim:           default/nfs-claim
    Reclaim Policy:  Retain
    Access Modes:    RWX
    Capacity:        50Gi
    Message:
    Source:
        Type:      NFS (an NFS mount that lasts the lifetime of a pod)
        Server:    10.0.0.4
        Path:      /export
        ReadOnly:  false
    No events.

and

    Name:          nfs-claim
    Namespace:     default
    Status:        Bound
    Volume:        nfs
    Labels:        <none>
    Capacity:      0
    Access Modes:
    No events.

The pod deployment:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: mypod
      labels:
        name: mypod
    spec:
      replicas: 1
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
        type: RollingUpdate
      template:
        metadata:
          name: mypod
          labels:
            # Important: these labels need to match the selector above, the api server enforces this constraint
            name: mypod
        spec:
          containers:
            - name: abcd
              image: irrelevant to the question
              ports:
                - containerPort: 80
              env:
                - name: hello
                  value: world
              volumeMounts:
                - mountPath: "/mnt"
                  name: nfs
          volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs-claim

When I deploy the pod, I get the following:

    Volumes:
      nfs:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  nfs-claim
        ReadOnly:   false
      default-token-6pd57:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-6pd57
    QoS Tier:  BestEffort
    Events:
      FirstSeen  LastSeen  Count  From                                                SubobjectPath  Type     Reason       Message
      ---------  --------  -----  ----                                                -------------  ----     ------       -------
      13m        13m       1      {default-scheduler }                                               Normal   Scheduled    Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal
      11m        7s        6      {kubelet ip-10-0-0-157.us-west-2.compute.internal}                 Warning  FailedMount  Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
      11m        7s        6      {kubelet ip-10-0-0-157.us-west-2.compute.internal}                 Warning  FailedSync   Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]

I have tried everything I know and everything I could come up with. What am I missing here, or what is wrong?

1 answer

I tested Kubernetes versions 1.3.4 and 1.3.5, and the NFS mount did not work for me either. Later I switched back to 1.2.5, and that version gave me more detailed information in kubectl describe pod .... It turned out that the hyperkube image was missing nfs-common. After I added nfs-common to all container instances based on the hyperkube image, on both the master and worker nodes, the NFS share started working normally (the mount succeeded). That is the issue here: I verified it in practice and it solved my problem.
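A minimal sketch of that fix on a node, assuming a Debian/Ubuntu-based hyperkube image or host (these are the standard apt commands, not copied verbatim from my setup):

    # On every master and worker node (or baked into the hyperkube-based image)
    apt-get update
    apt-get install -y nfs-common

    # Sanity check that the NFS mount helper is now available
    /sbin/mount.nfs -V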
