Kubernetes Ingress (GCE) continues to return error 502

I am trying to configure an Ingress on GCE Kubernetes, but when I visit the IP address and path combination defined in the Ingress, I keep getting the following 502 error:

[Screenshot: Ingress 502 server error page]


Here's what I get when I run kubectl describe ing --namespace dpl-staging:

    Name:             dpl-identity
    Namespace:        dpl-staging
    Address:          35.186.221.153
    Default backend:  default-http-backend:80 (10.0.8.5:8080)
    TLS:
      dpl-identity terminates
    Rules:
      Host  Path             Backends
      ----  ----             --------
      *     /api/identity/*  dpl-identity:4000 (<none>)
    Annotations:
      https-forwarding-rule:  k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
      https-target-proxy:     k8s-tps-dpl-staging-dpl-identity--5fc40252fadea594
      url-map:                k8s-um-dpl-staging-dpl-identity--5fc40252fadea594
      backends:               {"k8s-be-31962--5fc40252fadea594":"HEALTHY","k8s-be-32396--5fc40252fadea594":"UNHEALTHY"}
    Events:
      FirstSeen  LastSeen  Count  From                       SubObjectPath  Type    Reason   Message
      ---------  --------  -----  ----                       -------------  ------  -------  -------
      15m        15m       1      {loadbalancer-controller}                 Normal  ADD      dpl-staging/dpl-identity
      15m        15m       1      {loadbalancer-controller}                 Normal  CREATE   ip: 35.186.221.153
      15m        6m        4      {loadbalancer-controller}                 Normal  Service  no user specified default backend, using system default

I think the problem is dpl-identity:4000 (<none>). Should I see the dpl-identity service's IP address instead of <none>?

Here is the service description from kubectl describe svc --namespace dpl-staging:

    Name:              dpl-identity
    Namespace:         dpl-staging
    Labels:            app=dpl-identity
    Selector:          app=dpl-identity
    Type:              NodePort
    IP:                10.3.254.194
    Port:              http  4000/TCP
    NodePort:          http  32396/TCP
    Endpoints:         10.0.2.29:8000,10.0.2.30:8000
    Session Affinity:  None
    No events.

Also, here is the output of kubectl describe ep -n dpl-staging dpl-identity:

    Name:       dpl-identity
    Namespace:  dpl-staging
    Labels:     app=dpl-identity
    Subsets:
      Addresses:          10.0.2.29,10.0.2.30
      NotReadyAddresses:  <none>
      Ports:
        Name  Port  Protocol
        ----  ----  --------
        http  8000  TCP
    No events.

Here is my deployment.yaml file:

    apiVersion: v1
    kind: Secret
    metadata:
      namespace: dpl-staging
      name: dpl-identity
    type: Opaque
    data:
      tls.key: <base64 key>
      tls.crt: <base64 crt>
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: dpl-staging
      name: dpl-identity
      labels:
        app: dpl-identity
    spec:
      type: NodePort
      ports:
      - port: 4000
        targetPort: 8000
        protocol: TCP
        name: http
      selector:
        app: dpl-identity
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      namespace: dpl-staging
      name: dpl-identity
      labels:
        app: dpl-identity
      annotations:
        kubernetes.io/ingress.allow-http: "false"
    spec:
      tls:
      - secretName: dpl-identity
      rules:
      - http:
          paths:
          - path: /api/identity/*
            backend:
              serviceName: dpl-identity
              servicePort: 4000
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      namespace: dpl-staging
      name: dpl-identity
      labels:
        app: dpl-identity
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: dpl-identity
        spec:
          containers:
          - image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
            name: dpl-identity
            ports:
            - containerPort: 8000
              name: http
            volumeMounts:
            - name: dpl-identity
              mountPath: /data
          volumes:
          - name: dpl-identity
            secret:
              secretName: dpl-identity
+15
google-compute-engine google-container-engine kubernetes google-cloud-platform google-kubernetes-engine
6 answers

Your backend k8s-be-32396--5fc40252fadea594 is shown as "UNHEALTHY".

Ingress will not forward traffic while a backend is UNHEALTHY; that is what produces the 502 error you are seeing.

It is marked UNHEALTHY because it is not passing its health check. Check the health check settings for k8s-be-32396--5fc40252fadea594 to see whether they are appropriate for your pod; they may be polling a URI or a port that does not return a 200 response. These settings can be found under Compute Engine > Health checks.
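For example, the same information can be inspected from the command line (a sketch; it assumes the gcloud CLI is pointed at the right project, and reuses the backend name from the Ingress annotations above):

    # List the health checks the GLBC controller created (older versions use
    # legacy HTTP health checks; newer ones appear under "health-checks")
    gcloud compute http-health-checks list
    gcloud compute health-checks list

    # Ask the load balancer directly how it sees the suspect backend
    gcloud compute backend-services get-health k8s-be-32396--5fc40252fadea594 --global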

If they are correct, then there are many steps between your browser and the container that could be handling traffic incorrectly. You could try kubectl exec -it PODID -- bash (or ash if you are on Alpine) and then curl localhost to see whether the container responds as expected. If it does, and the health checks are also configured correctly, that narrows the problem down to your Service; you could then try changing the Service from type NodePort to LoadBalancer and check whether hitting the Service IP directly from your browser works.
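A rough sketch of those checks (the label selector and port 8000 come from the manifests in the question; the pod name is a placeholder):

    # Find a pod backing the service
    kubectl get pods -n dpl-staging -l app=dpl-identity

    # Open a shell inside it (use "ash" instead of "bash" on Alpine-based images)
    kubectl exec -it <pod-name> -n dpl-staging -- bash

    # From inside the container, confirm the app answers on its own port
    curl -v http://localhost:8000/api/identity/

    # Optionally expose the service directly to test it without the ingress
    kubectl patch svc dpl-identity -n dpl-staging -p '{"spec": {"type": "LoadBalancer"}}'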

+24

I had the same problem, and it persisted after I added a livenessProbe as well as a readinessProbe. It turned out to be related to basic auth: I had added basic auth to the livenessProbe and readinessProbe, but the GCE HTTP(S) load balancer has no configuration option for that.

There seem to be a few other gotchas as well; for example, setting the container port to 8080 and the service port to 80 did not work with the GKE ingress controller (although I could not clearly pin down what the problem was). In general, there is very little visibility, and running your own ingress controller is the better option in that regard.
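To make that concrete, this is roughly the combination the answer reports as not working (a sketch with a hypothetical name; the point is that the Service port and the containerPort differ):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app            # hypothetical
    spec:
      type: NodePort
      ports:
      - port: 80              # service port
        targetPort: 8080      # the pod's containerPort

Presumably, keeping port and targetPort aligned (both 8080, for instance) avoids this particular gotcha.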

I chose Traefik for my project; it worked out of the box, and I would like to enable Let's Encrypt integration. The only change I had to make to the Traefik manifests was to configure the Service object to restrict access to the UI from outside the cluster and to expose my application through an external load balancer (GCE TCP LB). Also, Traefik is more Kubernetes-native. I tried Heptio Contour, but something did not work out of the box (I will give it another go when the next version comes out).
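As an illustration of that Service tweak, a hedged sketch (the names, labels, and the 80/8080 ports are assumptions based on Traefik 1.x defaults, not taken from the answer):

    # Entrypoint exposed through an external (GCE TCP) load balancer
    apiVersion: v1
    kind: Service
    metadata:
      name: traefik-ingress          # hypothetical
    spec:
      type: LoadBalancer
      selector:
        app: traefik                 # assumed pod label
      ports:
      - name: web
        port: 80
        targetPort: 80
    ---
    # Dashboard/UI kept cluster-internal only
    apiVersion: v1
    kind: Service
    metadata:
      name: traefik-dashboard        # hypothetical
    spec:
      type: ClusterIP
      selector:
        app: traefik
      ports:
      - name: dashboard
        port: 8080
        targetPort: 8080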

+1

The problem really is the health check, and it seemed "random" in my case, where I used name-based virtual hosting to reverse-proxy requests from the ingress, by domain, to two separate backend services. Both were secured using Let's Encrypt and kube-lego. My solution was to standardize the path used for the health checks of all the services sharing the ingress, and to declare a readinessProbe and livenessProbe in my deployment.yml file.

I ran into this problem on a Google Cloud cluster with node version 1.7.8 and found this issue, which was very similar to what I experienced: https://github.com/jetstack/kube-lego/issues/27

I am using gce and kube-lego, and my backend service health checks were on / while kube-lego's were on /healthz. It seems that differing health-check paths with the gce ingress class may be the cause, so it may be worth updating the backend services to use the same /healthz pattern so they all match (or, as one commenter on the GitHub issue did, updating kube-lego to use /).
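For illustration, a minimal sketch of declaring both probes on a shared /healthz path in the deployment from the question (the path, the timings, and the assumption that the app serves 200 on /healthz are mine, not the asker's):

    containers:
    - name: dpl-identity
      image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
      ports:
      - containerPort: 8000
        name: http
      readinessProbe:
        httpGet:
          path: /healthz      # assumed: the app must answer 200 here
          port: 8000
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8000
        initialDelaySeconds: 15
        periodSeconds: 20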

0

I had the same problem. It turned out that I had to wait a few minutes before hitting the ingress to check whether the service was working. If someone is going down the same path and has followed all the steps such as readinessProbe and livenessProbe, just make sure your ingress points to a service of type NodePort, and wait a few minutes until the yellow warning icon turns green. Also, check the logs in StackDriver to get a better idea of what is going on. My readinessProbe and livenessProbe are on /login, for the gce class, so I do not think they have to be on /healthz.
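While waiting, one way to watch the backend state flip without the console (a sketch using the names from the question):

    # The "backends" annotation mirrors the load balancer's view of each backend;
    # re-run this until the UNHEALTHY entry turns HEALTHY.
    kubectl describe ing dpl-identity -n dpl-staging | grep backends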

0

The Limitations section of the Kubernetes documentation reads:

All Kubernetes services must serve a 200 page on "/", or whatever custom value you specified through GLBC's --health-check-path argument.

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#limitations

0

I solved the problem as follows:

  1. Remove the service from the ingress definition
  2. Deploy the ingress: kubectl apply -f ingress.yaml
  3. Add the service back to the ingress definition
  4. Deploy the ingress again

Essentially, I followed Roy's advice and tried to turn it off and on again.
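The same off-and-on toggle as a rough sketch, assuming the Ingress lives in ingress.yaml as in the question:

    # Steps 1-2: remove (or comment out) the backend path for the service
    # in ingress.yaml, then apply it:
    kubectl apply -f ingress.yaml

    # Steps 3-4: restore the backend path in ingress.yaml and apply it again:
    kubectl apply -f ingress.yaml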

0
