Configure the kubectl command to access a remote Kubernetes cluster on Azure

I have a Kubernetes cluster running on Azure. How can I access the cluster from my local kubectl command? I followed the instructions linked here, but there is no kubeconfig file on the Kubernetes master node. In addition, kubectl config view shows:

 apiVersion: v1
 clusters: []
 contexts: []
 current-context: ""
 kind: Config
 preferences: {}
 users: []
+21
azure kubernetes
7 answers

I found a way to access a remote Kubernetes cluster without ssh'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as shown below:

 apiVersion: v1
 clusters:
 - cluster:
     server: http://<master-ip>:<port>
   name: test
 contexts:
 - context:
     cluster: test
     user: test
   name: test

Then set the context by doing:

 kubectl config use-context test 

After that, you will be able to interact with the cluster.

Note: to add the certificate and key, see the following link: http://kubernetes.io/docs/user-guide/kubeconfig-file/
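If the API server requires client certificates rather than plain HTTP, the same config can also be filled in from the command line instead of editing the YAML by hand. A minimal sketch, assuming the certificate, key and CA files have already been copied somewhere local (the /path/to/... locations are placeholders):

 # the names "test"/"test-user" mirror the config above; the /path/to/... files are placeholders
 kubectl config set-cluster test --server=https://<master-ip>:<port> --certificate-authority=/path/to/ca.crt
 kubectl config set-credentials test-user --client-certificate=/path/to/client.crt --client-key=/path/to/client.key
 kubectl config set-context test --cluster=test --user=test-user
 kubectl config use-context test

Adding --embed-certs=true to the first two commands copies the certificate contents into the config file itself, so it stays valid even if the files are later moved.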

Alternatively, you can also try the following commands:

 kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
 kubectl config use-context test-cluster
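Be aware that use-context needs a context entry, not just a cluster entry, and that newer kubectl releases have dropped the --api-version flag, so on a current client the sequence would look more like this sketch (user credentials omitted, as for an insecure setup):

 kubectl config set-cluster test-cluster --server=http://<master-ip>:<port>
 kubectl config set-context test-cluster --cluster=test-cluster
 kubectl config use-context test-cluster
 kubectl get nodes   # quick sanity check that the context actually reaches the API server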
+45

You can also specify the path to the kubeconfig file by passing the --kubeconfig flag.

For example, copy ~/.kube/config from the remote Kubernetes host to your local project at ~/myproject/.kube/config. Then, from ~/myproject, you can list the pods on the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.

Note that when copying values from a remote Kubernetes server, a plain kubectl config view will not be enough, because it does not show the secrets stored in the config file. Instead, do something like cat ~/.kube/config, or use scp to get the full contents of the file.
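For example, a sketch of the copy-and-use workflow (the azureuser login and the paths are placeholders for whatever your setup uses):

 # pull the full kubeconfig, including embedded secrets, from the remote master
 scp azureuser@<master-ip>:~/.kube/config ~/myproject/.kube/config
 # point a single command at it ...
 kubectl get pods --kubeconfig ./.kube/config
 # ... or the whole shell session
 export KUBECONFIG=~/myproject/.kube/config
 kubectl get pods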

See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/

+7

How did you create your cluster? To access the cluster remotely you need a kubeconfig file (it sounds like you don't have one), and the installation scripts generate a local kubeconfig as part of the cluster deployment process (since otherwise the cluster you just deployed would be unusable). If someone else deployed the cluster, you should follow the instructions on the page you linked to in order to get a copy of the client credentials needed to connect to the cluster.
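As an aside: if the cluster was created through Azure's managed Kubernetes service rather than from installation scripts, the Azure CLI can generate the kubeconfig for you. A sketch, assuming an AKS cluster and the az CLI installed; the names are placeholders:

 # merges credentials for the cluster into ~/.kube/config
 az aks get-credentials --resource-group <resource-group> --name <cluster-name>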

0

The Azure setup exposes only SSH ports externally. The generated SSH configuration can be found at ./output/kube_xxxxxxxxxx_ssh_conf. What I did was make the API server reachable on my machine by adding an SSH port tunnel. Open the file above and, in the "Host *" section, add one more line as shown below:

LocalForward 8080 127.0.0.1:8080

This maps port 8080 on my local machine (where kubectl looks for its default context) to port 8080 on the remote machine, where the master listens for API calls. Once you open an SSH session to kube-00, as the regular documentation shows, you can make calls from your local kubectl without any additional configuration.
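The same tunnel can also be opened ad hoc, without editing the generated ssh config file. A sketch, assuming the master host is named kube-00 in that file:

 # forward local port 8080 to the API server port on the master
 ssh -F ./output/kube_xxxxxxxxxx_ssh_conf -L 8080:127.0.0.1:8080 kube-00
 # in another shell, kubectl's default http://localhost:8080 now reaches the cluster
 kubectl get pods -s http://localhost:8080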

0

I had to configure kubectl on a different client machine from the one I originally used to create the kops cluster. I'm not sure whether this applies to Azure, but it works for an AWS cluster created with kops:

kops/kubectl - how do I import state created on another server?
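The gist of that approach, sketched with placeholder names and assuming an S3 state store, is to let kops regenerate the kubeconfig on the new machine:

 # point kops at the shared state store, then rebuild ~/.kube/config for the cluster
 export KOPS_STATE_STORE=s3://<your-state-store-bucket>
 kops export kubecfg <cluster-name>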

0

Find the .kube directory on your k8s machine.
On Linux/Unix it will be in /root/.kube; on Windows it will be in C:\Users\<username>\.kube.
Copy the config file from the .kube folder of the k8s cluster to the .kube folder on your local machine.
Also copy the client certificate (/etc/cfc/conf/kubecfg.crt) and the client key (/etc/cfc/conf/kubecfg.key) to the .kube folder of your local machine.
Then edit the config file in the .kube folder of your local machine and update the paths to kubecfg.crt and kubecfg.key so they point at your local copies:
/etc/cfc/conf/kubecfg.crt → C:\Users\<username>\.kube\kubecfg.crt
/etc/cfc/conf/kubecfg.key → C:\Users\<username>\.kube\kubecfg.key
You should now be able to interact with the cluster. Run kubectl get pods and you will see the pods in the k8s cluster.
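Instead of editing the paths by hand, the same result can be had by letting kubectl rewrite the user entry and embed the certificates. A sketch using the local paths above; <username> and the user entry name "admin" are placeholders for whatever your config uses:

 # --embed-certs copies the certificate contents into the config file,
 # so the .crt/.key files no longer need to sit at a fixed path
 kubectl config set-credentials admin --client-certificate=C:\Users\<username>\.kube\kubecfg.crt --client-key=C:\Users\<username>\.kube\kubecfg.key --embed-certs=true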

0

For clusters created manually on cloud provider VMs, you can just grab the kubeconfig from ~/.kube/config on the master. However, for managed services such as GKE, you have to rely on gcloud to generate a kubeconfig at runtime with the correct token.
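For GKE, for example, the kubeconfig entry is generated with (placeholder names):

 gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>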

In general, you can create a service account, which gives you a kubeconfig with a token generated for you. Something similar is available in Azure as well.
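A rough sketch of the service-account route with plain kubectl; the account name, namespace and role binding below are made up for illustration, and on Kubernetes 1.24+ you would use kubectl create token instead of reading an auto-created secret:

 # create a service account and give it read access (adjust the role to your needs)
 kubectl create serviceaccount remote-user -n kube-system
 kubectl create clusterrolebinding remote-user-view --clusterrole=view --serviceaccount=kube-system:remote-user
 # read the token from the secret Kubernetes creates for the account (pre-1.24 clusters)
 SECRET=$(kubectl get serviceaccount remote-user -n kube-system -o jsonpath='{.secrets[0].name}')
 TOKEN=$(kubectl get secret "$SECRET" -n kube-system -o jsonpath='{.data.token}' | base64 --decode)
 # wire the token into a local kubeconfig entry
 kubectl config set-credentials remote-user --token="$TOKEN"
 kubectl config set-context remote --cluster=<cluster> --user=remote-user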

0
