Cookie-based load balancing for WebSockets?

Our situation is that we are writing an online application that uses Node.js on the server side with a WebSocket listener. We have two different parts: one serves pages and uses Node.js with Express + EJS, the other is a completely separate application that only includes the socket.io library for WebSockets. This question is about the scalability of the WebSocket part.
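
For context, the standalone WebSocket part described here could look roughly like the sketch below. This is a minimal illustration using a current socket.io API; the port and event name are invented for the example and are not from the question:

 // Minimal sketch of a standalone socket.io listener (assumes socket.io v4;
 // port 8080 and the "message" event are illustrative only).
 const { Server } = require("socket.io");

 const io = new Server(8080);

 io.on("connection", (socket) => {
   // Each connected client gets its own socket object.
   socket.on("message", (data) => {
     // Echo back to the sender as a placeholder for real application logic.
     socket.emit("message", data);
   });
 });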

One solution we found was to use Redis and share the socket information between the servers, but because of our architecture that would also require sharing a load of other information, which would create huge overhead on the servers.
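
For reference, the standard way to let socket.io servers see each other's events today is the Redis adapter. The sketch below is only illustrative: it uses the current @socket.io/redis-adapter and redis packages, which may differ from what was available when this was asked, and it only propagates emits and rooms, not the arbitrary application state whose overhead is the concern raised above:

 // Sketch: wiring socket.io to Redis so broadcasts reach clients on other servers.
 // Assumes the "redis" and "@socket.io/redis-adapter" packages; the Redis URL is illustrative.
 const { Server } = require("socket.io");
 const { createClient } = require("redis");
 const { createAdapter } = require("@socket.io/redis-adapter");

 const io = new Server(8080);
 const pubClient = createClient({ url: "redis://127.0.0.1:6379" });
 const subClient = pubClient.duplicate();

 Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
   io.adapter(createAdapter(pubClient, subClient));
   // From here on, io.emit(...) reaches clients connected to any server.
 });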

After this introduction, my question is: is it possible to use cookie-based load balancing for WebSockets? For example, every connection from a user with the cookie server=server1 would always be forwarded to server1, every connection with the cookie server=server2 would be forwarded to server2, and a connection without such a cookie would go to the least loaded server.
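
To make the idea concrete, the routing rule being asked about can be expressed as a small function. The sketch below is purely illustrative (the cookie name server, the backend names and the load numbers are invented); a real balancer would apply this logic before or during the HTTP upgrade:

 // Sketch: pick a backend from a "server" cookie, else fall back to the least loaded one.
 // "backends" maps backend name -> current connection count (illustrative values).
 const backends = { server1: 120, server2: 95 };

 function pickBackend(cookieHeader) {
   // cookieHeader is the raw Cookie header, e.g. "sid=abc; server=server1"
   const match = /(?:^|;\s*)server=([^;]+)/.exec(cookieHeader || "");
   const wanted = match && match[1];
   if (wanted && wanted in backends) {
     return wanted; // sticky: honour the cookie
   }
   // No (valid) cookie: choose the backend with the fewest connections.
   return Object.keys(backends).reduce((a, b) =>
     backends[a] <= backends[b] ? a : b
   );
 }

 // Example: pickBackend("server=server1") === "server1"
 //          pickBackend("") === "server2"  (95 < 120 with the made-up numbers)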

UPDATE: As one of the "answers" suggests - yes, I know this exists; I just didn't remember that it is called a sticky session. But the question is: will this work with WebSockets? Are there any possible complications?

1 answer

We had a similar problem in our Node.js stack. We have two servers using WebSockets that work for normal use cases, but occasionally the load balancer would bounce a connection between the two servers, which caused problems. (We have backend session code that should have fixed it, but it didn't handle this properly.)

We tried enabling Sticky Sessions on the Barracuda load balancer in front of these servers, but found that it blocked WebSocket traffic because of the way it works. I don't know exactly why, since there is little information available online, but it appears that the balancer strips the headers from the HTTP request, grabs the cookie, and forwards the request to the correct server. Since a WebSocket connection starts as HTTP and then upgrades, the load balancer did not notice the difference in the connection and tried to do the same HTTP processing. This caused the WebSocket connection to fail, disconnecting the user.
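
The HTTP-then-upgrade behaviour mentioned above can be observed directly in Node.js. The sketch below (built-in http module, port invented) shows that a WebSocket connection arrives as an ordinary HTTP request carrying Connection: Upgrade and Upgrade: websocket headers, which is exactly where a balancer that only understands plain HTTP can mishandle it:

 // Sketch: observe the WebSocket handshake headers with Node's built-in http module.
 const http = require("http");

 const server = http.createServer((req, res) => {
   // Ordinary HTTP requests land here.
   res.end("plain HTTP\n");
 });

 server.on("upgrade", (req, socket) => {
   // Fired when a client asks to switch protocols; for WebSockets the request
   // carries "Connection: Upgrade" and "Upgrade: websocket" headers.
   console.log("upgrade requested:", req.headers.upgrade, req.headers.connection);
   socket.end(); // a real server would complete the WebSocket handshake here
 });

 server.listen(8080);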

What we have in place now works very well. We still use Barracuda load balancers in front of our backend servers, but without sticky sessions enabled on them. On our backend servers, in front of our application servers, we run HAProxy, which properly supports WebSockets and can provide sticky sessions in a roundabout way.


Request Flow List

  • An incoming client request hits the primary Barracuda load balancer
  • The load balancer forwards the request to any of the active backend servers
  • HAProxy receives the request and checks for the "sticky cookie"
  • Based on that cookie, HAProxy forwards the request to the correct backend application server

Request Flow Diagram

                        /--> Barracuda 1 -->\     /--> Host 1 -->\     /--> App 1
 WebSocket Request ----->                    -->                  -->
                        \--> Barracuda 2 -->/     \--> Host 2 -->/     \--> App 1

Where multiple arrows merge back into a single line, it means the request could have flowed through either of the points it came from.


HAProxy Configuration Information

 backend app_1
     cookie ha_app_1 insert
     server host1 10.0.0.101:80011 weight 1 maxconn 1024 cookie host_1 check
     server host2 10.0.0.102:80011 weight 1 maxconn 1024 cookie host_2 check

In the above configuration:

  • cookie ha_app_1 insert names the cookie (ha_app_1) that HAProxy inserts
  • cookie host_1 or cookie host_2 on each server line sets that server's cookie value, and check enables health checks on the server
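
For completeness, a backend like the one above also needs a frontend section pointing at it, and long-lived WebSocket connections usually need a generous tunnel timeout. The sketch below is hypothetical and not part of the original answer: the section names, bind address and timeout values are made up, and it assumes a reasonably recent HAProxy where timeout tunnel is available.

 # Hypothetical surrounding configuration for the app_1 backend above
 defaults
     mode http
     timeout connect 5s
     timeout client  30s
     timeout server  30s
     timeout tunnel  1h      # keeps upgraded (WebSocket) connections open

 frontend www
     bind *:80
     default_backend app_1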
