Both approaches can work. So if the switch carries a high development cost and/or a long schedule, it is probably not worth the effort ... for now. Make the switch when the costs become unacceptably high, and consider microservices as an incremental migration strategy.
If you are at an early stage of your development cycle, switching early may make sense, since porting a large codebase later is painful.
Or you may never have to switch: Rails may work like a charm for your use case, your customers stay happy, and the cash keeps rolling in.
Some of the disadvantages of a blocking approach on a single server:
Increased memory usage. Sources: multiple processes, memory leaks, and the lack of a shared in-process data store (which increases communication costs and causes consistency problems).
Lack of parallelism. This has two consequences: more boxes and more latency. To handle the same load you will need many more boxes, so if you need to scale and cost matters, this can be a problem; if it is not a problem for you, it does not matter. On the server it also means increased latency, latency that cannot be improved simply by multiplying processes, which may be the killer argument depending on your application. (A rough capacity calculation follows this list.)
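To make the "more boxes" point concrete, here is a back-of-the-envelope example (the numbers are invented purely for illustration): assume each request spends 90 ms waiting on IO and 10 ms on CPU. A blocking single-threaded process can then serve at most about 10 requests per second. With 20 such processes per box, that is roughly 200 requests per second per box, so 5,000 requests per second needs around 25 boxes. A non-blocking server can keep the CPU busy during the 90 ms of waiting, so a single process can in principle approach the CPU-bound ceiling of about 100 requests per second per core, which cuts the box count considerably.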
Some examples of teams that made this kind of switch from Rails to Node.js and Golang:
These posts lay out arguments that probably apply to your team as well. The answer, unfortunately, is not obvious.
It depends on the nature of what you are building, on your team, your resources, your skills, your goals, and on how you weigh all the various trade-offs.
Is the cost reduction really worth it? Isn't the same amount of computation performed regardless of the number of servers?
It depends on the type and scale of the work being performed. Web services are usually IO-bound: they spend most of their time waiting for responses from other services, such as databases, caches, and so on.
With a blocking, single-threaded server, the process sits idle while it is blocked in IO. A non-blocking server, in contrast, keeps handling other requests while one of them is waiting. You can keep adding blocking processes, but there are only so many processes one machine can run. A non-blocking server can run the same number of processes while keeping the CPU as busy as possible with useful request work, so it can often handle higher loads on smaller, cheaper machines. A rough sketch of this style follows.
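Here is a minimal sketch of the non-blocking style in Go (one of the migration targets mentioned above); fakeDBCall and its 50 ms sleep are assumptions standing in for a real database or cache call:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fakeDBCall stands in for an IO-bound dependency (database, cache, ...).
// While this goroutine sleeps, the runtime keeps the OS threads busy
// serving other requests instead of blocking the whole process.
func fakeDBCall() string {
	time.Sleep(50 * time.Millisecond) // simulated IO wait
	return "row data"
}

func handler(w http.ResponseWriter, r *http.Request) {
	data := fakeDBCall() // this request waits, but the process does not
	fmt.Fprintf(w, "got: %s\n", data)
}

func main() {
	// net/http starts a goroutine per connection, so thousands of
	// in-flight IO waits can be multiplexed onto a handful of OS threads.
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```

Contrast this with a blocking single-threaded server, where that 50 ms wait would stall every other request queued behind it.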
If the expected request rate can be handled by a reasonable number of boxes, and you do not expect large spikes, single-threaded servers will serve you fine. Non-blocking servers, on the other hand, absorb surges in load without adding machines.
If your workload is such that response latency does not really matter, you can get by with fewer nodes.
If your workload is CPU-bound, you will need more boxes either way, because the servers are not blocked on IO and there is no idle wait time for a non-blocking design to reclaim. The sketch below illustrates this.
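A small Go sketch of why non-blocking servers do not help CPU-bound work (busyWork and its iteration count are arbitrary stand-ins, and the result of the loop is irrelevant; it only burns CPU): running more concurrent workers than there are cores does not raise throughput, because there is no IO wait to overlap.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// busyWork burns CPU with no IO wait, so concurrency beyond the
// number of cores buys nothing.
func busyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

// run executes `jobs` CPU-bound tasks with at most `workers` running at once.
func run(workers, jobs int) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	sem := make(chan struct{}, workers) // caps concurrency at `workers`
	for j := 0; j < jobs; j++ {
		wg.Add(1)
		sem <- struct{}{}
		go func() {
			defer wg.Done()
			busyWork(50_000_000)
			<-sem
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	cores := runtime.NumCPU()
	jobs := cores * 4
	// Throughput plateaus once workers == cores; extra concurrency
	// only helps when there is idle IO time to hide.
	fmt.Println("workers = cores:    ", run(cores, jobs))
	fmt.Println("workers = 4x cores: ", run(cores*4, jobs))
}
```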