Using Kernel#fork for background processes, pros? cons?

I'd like to think through whether using fork {} to run 'background' processes from a Rails application is a good idea or not...

From what I gather, fork { my_method; Process.setsid } really does what it should:

1) creates a second process with a different PID

2) does not block the calling process (e.g. it continues without waiting for the fork to complete)

3) runs the child process until it finishes
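The three points above can be sketched in plain Ruby (outside Rails). `my_method` here is a hypothetical stand-in for the real background work:

```ruby
# Assumed example: `my_method` stands in for whatever the background
# job actually does (e.g. sending an email).
def my_method
  sleep 0.1   # simulate slow work
  "done"
end

parent_pid = Process.pid

# 1) fork creates a second process; in the parent it returns the
#    child's PID, which differs from the parent's own PID
child_pid = fork do
  Process.setsid   # detach the child into its own session
  my_method        # 3) the child runs this to completion on its own
end

# 2) the parent is not blocked; it reaches this line immediately.
#    Process.detach reaps the child in the background so no zombie
#    process is left behind.
Process.detach(child_pid)
```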

.. Which is cool, but is it a good idea? What exactly does fork do? Does it create a duplicate instance of my entire mongrel/passenger Rails app in memory? If so, that would be very bad. Or does it somehow manage this without consuming a huge amount of memory?

My ultimate goal was to do away with my background daemon/queue system in favor of forking these processes (mainly for sending emails), but if that doesn't save memory, then it's definitely a step in the wrong direction.

+6
ruby ruby-on-rails background delayed-job backgroundrb
3 answers

fork makes a copy of your entire process and, depending on how you are hooked up to the application server, a copy of that as well. As noted in the other discussion, this is done with copy-on-write, so it's tolerable. Unix is built around fork(2), after all, so it manages it fairly fast. Note, though, that any partially buffered I/O, open files, and lots of other stuff are also copied, as well as the state of the program that is spring-loaded to write them out, which would be incorrect.

I have a few considerations:

  • Are you using Action Mailer? It seems like mail could easily be sent with AM, or with Process.popen of something. (popen will do a fork, but it is immediately followed by an exec.)
  • get rid of all that state right away by running Process.exec of another ruby interpreter plus your functions. If there is too much state to transfer, or you really need to use those duplicated file descriptors, you might do something like IO#popen instead so you can send the subprocess work to do. The system will automatically share the pages containing the text of the subprocess's ruby interpreter with the parent.
  • in addition to the above, you might want to consider using the daemons gem. While your rails process is already a daemon, using the gem may make it easier to run a single background task as a batch server, and make it easy to start, monitor, restart if it bombs, and shut down when you are done..
  • if you do exit from a fork(2)ed subprocess, use exit! instead of exit
  • a message queue plus an already-running daemon, like you have, sounds like a fine solution to me :-)
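Two of the points above can be sketched together. This is an illustrative example, not from any specific library: (a) fork followed immediately by exec sheds all of the parent's Ruby state, and (b) a child that stays in Ruby should leave with exit! rather than exit:

```ruby
require "rbconfig"

# (a) fork + exec: the child replaces itself with a fresh interpreter,
#     so none of the parent's (e.g. Rails) heap survives in it.
#     The "-e exit 0" one-liner stands in for a real worker script.
ruby = RbConfig.ruby
pid_a = fork { exec(ruby, "-e", "exit 0") }
_, status_a = Process.wait2(pid_a)

# (b) a child that keeps running Ruby should use exit!, which skips
#     at_exit handlers and finalizers inherited from the parent --
#     running those in both processes could, e.g., flush the same
#     buffered I/O twice.
pid_b = fork do
  # ... do the background work here ...
  exit!(0)
end
_, status_b = Process.wait2(pid_b)
```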
+4

Keep in mind that this will prevent you from using JRuby on Rails, since fork() is not implemented there (yet).

+1

The semantics of fork are to copy the entire memory space of the process into a new process, but many (most?) systems will do this by simply making a copy of the virtual memory tables and marking everything copy-on-write. This means that (at first, at least) it doesn't use much more physical memory, only enough to create the new tables and other per-process data structures.

However, I'm not sure how well Ruby, RoR, etc. interact with copy-on-write. In particular, garbage collection could be problematic if it touches many pages of memory (causing them to be copied).
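The copy-on-write behavior described above can be observed directly: after fork, the child sees a logical copy of the parent's data, and a write in the child only affects the child's private copy of that page. A minimal sketch:

```ruby
# After fork, parent and child logically share `data` copy-on-write;
# the child's write below forces a private copy of just that page,
# and the parent's view is untouched.
data = Array.new(3) { 0 }

reader, writer = IO.pipe
pid = fork do
  reader.close
  data[0] = 99             # write in the child: triggers a page copy
  writer.puts data[0]      # report the child's view to the parent
  writer.close
  exit!(0)                 # exit! per the advice in the answer above
end
writer.close
child_view = reader.gets.to_i
reader.close
Process.wait(pid)

child_view   # the child saw 99
data[0]      # the parent still sees 0
```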

0
