I have a Meteor application deployed with Kubernetes on Google Cloud, with Nginx acting as the SSL termination layer. Everything is working fine.
However, if two different clients connect to two different SSL containers, updates can take up to 10 seconds to appear in the other client's application, which makes the site look as if it were down; the updates do eventually arrive via polling. I have confirmed that all clients are connected over WebSockets, but since updates are not distributed immediately, I suspect Nginx is not configured to talk to the Meteor application correctly.
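For context, a 10-second delay matches Meteor's poll-and-diff fallback: when a Meteor server is not tailing the MongoDB oplog, it re-runs its queries on a roughly 10-second cycle, so writes made through one container only reach clients connected to another container on the next poll. A sketch of enabling oplog tailing via environment variables in the Meteor deployment (the container name, image, and Mongo URLs are hypothetical):

```yaml
# Hypothetical excerpt from the Meteor pod spec. MONGO_OPLOG_URL lets each
# container observe writes made by the others immediately, instead of
# picking them up on the ~10 s poll-and-diff cycle.
containers:
  - name: frontend
    image: my-meteor-image        # hypothetical image name
    env:
      - name: MONGO_URL
        value: mongodb://mongo:27017/meteor
      - name: MONGO_OPLOG_URL
        # requires MongoDB to run as a replica set; "local" is the oplog database
        value: mongodb://mongo:27017/local
```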
Here is my SSL / Nginx service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-ssl
  labels:
    name: frontend-ssl
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    name: frontend-ssl
  type: LoadBalancer
  loadBalancerIP: 123.456.123.456
  sessionAffinity: ClientIP
And here is the Meteor service:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    name: flow-frontend
  type: LoadBalancer
  loadBalancerIP: 123.456.123.456
  sessionAffinity: ClientIP
For SSL termination I use the nginx-ssl-proxy setup suggested in the Kubernetes examples, forked with WebSocket support added: https://github.com/markoshust/nginx-ssl-proxy
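For reference, a minimal sketch of the Nginx location block such a fork typically needs for WebSocket-aware proxying; without the `Upgrade`/`Connection` headers Nginx downgrades the connection to plain HTTP and Meteor falls back to polling (the upstream name and timeout value here are illustrative, not my exact config):

```nginx
location / {
    proxy_pass http://frontend:3000;        # the Meteor service
    proxy_http_version 1.1;                 # required for WebSocket upgrades
    proxy_set_header Upgrade $http_upgrade; # pass the upgrade handshake through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_read_timeout 86400s;              # keep long-lived sockets open
}
```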
Tags: ssl, proxy, nginx, meteor, kubernetes
Mark Shust