Asynchronous application server vs. several blocking application servers

tl;dr: Many Rails application instances, or one Vert.x / Play! application?

I have been having conversations with other members of my team about the pros and cons of using an asynchronous application server such as Play! Framework (built on Netty) versus spinning up multiple instances of a Rails application server.

I know that Netty is asynchronous / non-blocking, which means that during a database call, a network call, or any similar asynchronous operation, the event loop thread can switch from the request that is waiting to another request that is ready to be processed. This keeps the processors busy instead of sitting idle while blocked.
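To make that concrete, here is a minimal sketch of the non-blocking model (my own illustration using Vert.x, which we are considering anyway; the port and the 100 ms timer standing in for an asynchronous database call are made up for the example):

```java
// Sketch only: one event loop thread serves many requests concurrently.
import io.vertx.core.Vertx;

public class NonBlockingSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        vertx.createHttpServer()
            .requestHandler(req -> {
                // Simulated asynchronous I/O (stand-in for a DB or network call):
                // we register a callback and return immediately, so the event
                // loop thread is free to pick up other requests for these 100 ms.
                vertx.setTimer(100, timerId ->
                    req.response().end("done after simulated async I/O\n"));
            })
            .listen(8080);
    }
}
```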

I argue in favor of using something like Play! Framework or Vert.x, which does not block, because of scalability. My team members argue that you can get the same benefit by running multiple instances of the Rails application, which out of the box is single-threaded and does not have true concurrency the way JVM applications do. They say they can simply run enough Rails instances to match the throughput of a single Play! application (or of many Play! applications), and that when a Rails instance blocks, the OS will switch to another Rails process. In the end, they say, the processors end up doing the same work and we get the same performance.
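For contrast with the sketch above, here is an equally minimal illustration of the blocking, one-request-at-a-time model my team is describing. It uses the plain JDK HttpServer with a single worker thread as a stand-in for one single-threaded Rails process (the port and the 100 ms sleep are assumptions for the example, not our real code); running many Rails instances amounts to running many copies of this:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class BlockingSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        // One worker thread, like one single-threaded application process.
        server.setExecutor(Executors.newSingleThreadExecutor());
        server.createContext("/", exchange -> {
            try {
                Thread.sleep(100); // simulated blocking DB call; nothing else is served meanwhile
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "done after simulated blocking I/O\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```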

So here are my questions:

  • Are there any logical flaws in the arguments above? Is it accurate to say that the OS would be scheduling the Rails processes, while Netty (which runs on the JVM, which maps its threads onto CPU cores very well) handles requests on its event loop?
  • Will OS-level process switching around blocking calls be as efficient as the non-blocking approach of Netty or Vert.x, or even of something built on Ruby's own EventMachine?
  • With enough Rails instances running to match the Play! application, will there be a noticeable difference in the cost of running the servers? If there is no cost difference, then in my opinion it does not matter which approach is used. Shoot, if it were financially cheaper to run a million Rails instances than a single Play! app, I would rather do that.
  • What other benefits of either approach might I be missing?
1 answer

Both approaches can work. So if switching carries a high cost in development time and/or schedule, it is probably not worth the effort... for now. Make the switch when the cost of staying becomes unacceptably high. Consider microservices as an incremental switching strategy.

If you are at an early stage of your development cycle, then making the switch early may make sense. Porting later is painful.

Or you may never have to switch, and Rails will work like a charm for your use case. And you will be so successful at keeping your customers happy that the cash just rolls in.

Some of the disadvantages of the blocking approach on the server side:

  • Increased memory usage. Sources: multiple processes, memory leaks, and the lack of shared in-process data stores (which increases communication costs and creates consistency problems). A rough back-of-envelope sketch follows this list.

  • Lack of parallelism within a process. This has two consequences: more boxes and more latency. To handle the same load you will need considerably more boxes. If you need to scale and money is a constraint, that can be a problem; if it is not, then it does not matter. It also means higher per-request latency on the server side, latency that cannot be improved by adding more processes, which may be the killer argument depending on your application.
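Here is the back-of-envelope sketch of the memory point. All numbers are assumptions chosen for illustration (32 workers, 300 MB resident per Rails worker, a 2 GB heap for one JVM server); substitute measurements from your own processes before drawing conclusions:

```java
public class MemoryEnvelope {
    public static void main(String[] args) {
        int processes = 32;          // assumed Rails worker processes on one box
        int rssPerProcessMb = 300;   // assumed resident memory per Rails worker
        int jvmHeapMb = 2048;        // assumed heap for one Play!/Vert.x JVM

        int railsTotalMb = processes * rssPerProcessMb;
        System.out.printf("Rails: %d processes x %d MB = %d MB%n",
                processes, rssPerProcessMb, railsTotalMb);   // 9600 MB
        System.out.printf("JVM:   1 process x %d MB = %d MB%n",
                jvmHeapMb, jvmHeapMb);                       // 2048 MB
        // The same hardware budget supports fewer blocking workers, hence "more boxes".
    }
}
```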

There are a number of public write-ups from teams that made this kind of switch from Rails to Node.js or Go; the arguments in those posts will probably mirror the ones inside your own team. The decision, unfortunately, is not obvious.

It depends on the nature of what you are building, the nature of your team, your resources, your skills, your goals, and how you weigh all the various trade-offs.

Is it really a wash in terms of cost? Is the same amount of computation performed regardless of the number of servers?

That depends on the type and scale of the work being done. Web services are usually IO-bound: they spend most of their time waiting for responses from other services such as databases and caches.

With a single-threaded blocking server, the process does nothing while it is blocked on IO. A non-blocking server, by contrast, can be handling many other requests while any one of them waits on IO. You can keep adding blocking processes, but a single machine can only run so many of them. A non-blocking server can run with the same number of processes (or fewer) while keeping the CPUs as busy as they can usefully be servicing requests. With non-blocking servers it is often possible to handle higher loads on smaller, cheaper machines.
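A rough sketch of that arithmetic, with assumed illustrative numbers (5 ms of CPU and 95 ms of IO wait per request, 8 cores); swap in your own measurements before drawing conclusions:

```java
public class ThroughputEnvelope {
    public static void main(String[] args) {
        double cpuMs = 5, ioWaitMs = 95, cores = 8;   // assumed numbers

        // Blocking, single-threaded: one request in flight per process, so each
        // process finishes at most 1000 / (cpu + io wait) requests per second.
        double perProcessRps = 1000.0 / (cpuMs + ioWaitMs);                  // 10 req/s
        System.out.printf("Blocking: ~%.0f req/s per process%n", perProcessRps);

        // Non-blocking: IO waits overlap, so throughput is bounded by CPU time per core.
        double nonBlockingRps = cores * (1000.0 / cpuMs);                    // 1600 req/s
        System.out.printf("Non-blocking: ~%.0f req/s on %.0f cores%n", nonBlockingRps, cores);

        // Blocking processes needed to match, ignoring memory and scheduling overhead:
        System.out.printf("Processes needed to match: ~%.0f%n", nonBlockingRps / perProcessRps); // 160
    }
}
```

The exact figures do not matter; the shape does: the blocking model's per-process throughput is capped by total request latency, while the non-blocking model's is capped by CPU time per request.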

If the expected request rate can be handled by a reasonable number of boxes and you do not expect large spikes, then single-threaded blocking servers will serve you well. Non-blocking servers absorb spikes in load without adding machines.

If your workload is such that response latency does not really matter, you can get by with fewer nodes.

If your workload is CPU-bound, you will need more boxes either way, and the non-blocking model gains you little, because there is no IO wait to overlap: the servers are not spending their time blocked on IO in the first place.
