Is there a way to add a custom entry to kube-dns?

I'll describe the problem in very specific terms, but I think it's better to be concrete than abstract ...

Let's say there is a MongoDB replica set installed outside the kubernetes cluster, but on the same network. The IP addresses of all members of the replica set are resolved via /etc/hosts on the application servers and DB servers.
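For illustration, the entries on those servers might look like this (the hostnames and addresses are made-up examples):

```
# /etc/hosts on the application and DB servers (example values)
192.168.10.100 repl1.mongo.local
192.168.10.101 repl2.mongo.local
192.168.10.102 repl3.mongo.local
```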

In the experiment/transition phase, I need to access these MongoDB servers from kubernetes pods. However, kubernetes does not seem to allow adding custom entries to /etc/hosts in pods/containers.

The MongoDB replica set already works with a large data set; creating a new replica set inside the cluster is not an option.

Because I use GKE, changing any resources in the kube-dns namespace should probably be avoided. Configuring or replacing kube-dns to suit my needs is the last thing I want to try.

Is there a way to resolve the IP addresses of custom hostnames in a kubernetes cluster?

This is just an idea, but it would be useful if kube2sky could read some configmap entries and use them as DNS records, e.g. repl1.mongo.local: 192.168.10.100 .

EDIT: I referenced this question from https://github.com/kubernetes/kubernetes/issues/12337

4 answers

UPDATE: 2017-07-03 Kubernetes 1.7 now supports adding entries to a Pod's /etc/hosts with HostAliases .
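For instance, a minimal sketch of a Pod using HostAliases (the pod name, image, IPs, and hostnames here are made-up examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  # Entries below are appended to /etc/hosts in every container of the pod.
  hostAliases:
  - ip: "192.168.10.100"
    hostnames:
    - "repl1.mongo.local"
  - ip: "192.168.10.101"
    hostnames:
    - "repl2.mongo.local"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

The kubelet manages these entries, so there is no race with the system rewriting the file, unlike the script-based workaround below.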


The solution is not really about kube-dns, but about /etc/hosts. Anyway, the following trick seems to work so far ...

EDIT: Changing /etc/hosts may have a race condition with the kubernetes system, so let the script retry.

1) Create a ConfigMap

 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: db-hosts
 data:
   hosts: |
     10.0.0.1 db1
     10.0.0.2 db2

2) Add a script called ensure_hosts.sh .

 #!/bin/sh
 while true
 do
   grep db1 /etc/hosts > /dev/null || cat /mnt/hosts.append/hosts >> /etc/hosts
   sleep 5
 done

Do not forget chmod a+x ensure_hosts.sh .

3) Add a wrapper script start.sh to your image

 #!/bin/sh
 $(dirname "$(realpath "$0")")/ensure_hosts.sh &
 exec your-app args...

Do not forget chmod a+x start.sh

4) Mount the ConfigMap as a volume and run start.sh

 apiVersion: extensions/v1beta1
 kind: Deployment
 ...
 spec:
   template:
     ...
     spec:
       volumes:
       - name: hosts-volume
         configMap:
           name: db-hosts
       ...
       containers:
       - command:
         - ./start.sh
         ...
         volumeMounts:
         - name: hosts-volume
           mountPath: /mnt/hosts.append
         ...

For the record, here is an alternative solution for those who did not check the referenced github issue.

You can define an "external" Service in Kubernetes by not specifying a selector or ClusterIP. You also have to define a corresponding Endpoints object pointing to the external IP address.

From the Kubernetes documentation:

  {
     "kind": "Service",
     "apiVersion": "v1",
     "metadata": {
         "name": "my-service"
     },
     "spec": {
         "ports": [
             {
                 "protocol": "TCP",
                 "port": 80,
                 "targetPort": 9376
             }
         ]
     }
 }
 {
     "kind": "Endpoints",
     "apiVersion": "v1",
     "metadata": {
         "name": "my-service"
     },
     "subsets": [
         {
             "addresses": [
                 {"ip": "1.2.3.4"}
             ],
             "ports": [
                 {"port": 9376}
             ]
         }
     ]
 }

In this case, you can point your application inside the containers to my-service:9376 , and the traffic should be forwarded to 1.2.3.4:9376 .

Limitations:

  • The DNS name used must consist only of letters, numbers, and dashes. You cannot use multi-level names ( something.like.this ). This means you may have to modify your application to point just at your-service , not yourservice.domain.tld .
  • You can only point to a specific IP address, not a DNS name. For that, you can define a DNS alias with an ExternalName-type Service instead.
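As a sketch, such an ExternalName Service could look like this (the Service name and external hostname here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-repl1
spec:
  # Creates a DNS CNAME record: mongo-repl1 -> repl1.mongo.example.com
  type: ExternalName
  externalName: repl1.mongo.example.com
```

Pods can then resolve mongo-repl1 , and DNS returns a CNAME pointing at the external hostname; no Endpoints object is needed in this case.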

An ExternalName-type Service is required to access hosts or IPs outside of kubernetes.

The following worked for me.

 {
     "kind": "Service",
     "apiVersion": "v1",
     "metadata": {
         "name": "tiny-server-5",
         "namespace": "default"
     },
     "spec": {
         "type": "ExternalName",
         "externalName": "192.168.1.15",
         "ports": [
             {"port": 80}
         ]
     }
 }

Using a ConfigMap seems like a better way to configure DNS, but it's a bit heavyweight when you are just adding a few entries (in my opinion). So instead I add the entries to /etc/hosts with a shell script executed by the docker CMD .

e.g.:

Dockerfile

 ...(ignore)
 COPY run.sh /tmp/run.sh
 CMD bash /tmp/run.sh

run.sh

 #!/bin/bash
 # /etc/hosts expects the IP address first, then the hostname
 echo 192.168.10.100 repl1.mongo.local >> /etc/hosts
 # some other commands...

Note that if you have MORE THAN ONE container in the pod, you have to add the script to each container, because kubernetes starts the containers in a random order and /etc/hosts may be overridden by another container (one that starts later).

