How to share the APC user cache between CLI instances and a web server?

I use PHP APC to store a lot of information (via apc_fetch() and friends). From time to time this information needs to be analyzed and flushed out elsewhere.

Here is the situation: I get several hundred hits/sec. These hits increment various counters (using apc_inc() and friends). Every hour I would like to iterate over all the values I have accumulated, do some further processing with them, and then save them to disk.
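Roughly, the pattern looks like this (the counter: key prefix and the per-URI counting are illustrative, not my actual keys):

    <?php
    // Per-request side: bump a counter, creating it on first use.
    $key = 'counter:' . $_SERVER['REQUEST_URI'];
    if (!apc_add($key, 1)) { // apc_add() only succeeds if the key is new
        apc_inc($key);       // otherwise increment atomically
    }

    // Hourly side: walk every accumulated counter in the user cache.
    foreach (new APCIterator('user', '/^counter:/') as $entry) {
        printf("%s => %d\n", $entry['key'], $entry['value']);
    }
    ?>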

I could trigger this from a random or time-based check inside each request, but it is a potentially long operation (it may take 20-30 seconds, if not several minutes), and I do not want any single request to hang for that long.

I figured a simple PHP cronjob would handle this. However, I cannot even get it to read the cache information.

 <?php print_r(apc_cache_info()); ?> 

yields what is apparently a separate APC memory segment, with:

[num_entries] => 1

(The single entry appears to be the opcode cache itself.)

Meanwhile, my web server running under nginx / php5-fpm gives:

[num_entries] => 3175

So they clearly do not share the same memory segment. How can I either access the same memory segment from a CLI script (preferred), or, if that is simply not possible, what is the safest way to run a long operation from, say, a stray HTTP request fired every hour?

For the latter, would using register_shutdown_function() together with an immediate set_time_limit(0) and ignore_user_abort(true) do the trick to guarantee completion without hanging anyone's browser?
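Something like this is what I have in mind (a sketch; do_hourly_rollup() is a placeholder, and fastcgi_finish_request() is only available under PHP-FPM):

    <?php
    ignore_user_abort(true); // keep running even if the client disconnects
    set_time_limit(0);       // lift the execution time limit

    register_shutdown_function(function () {
        do_hourly_rollup();  // placeholder for the long aggregation work
    });

    echo "OK\n"; // answer the triggering request immediately

    // Under PHP-FPM this flushes the response and closes the connection,
    // so the shutdown function runs with no browser waiting on it.
    if (function_exists('fastcgi_finish_request')) {
        fastcgi_finish_request();
    }
    ?>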

And yes, I know about redis, memcache, etc., which would not have this problem, but for now I am sticking with APC, since none of them can match APC's speed.

1 answer

This is really a design problem and a matter of weighing costs against benefits.

You are thrilled with APC's speed because you spend no time persisting data. But you also want that data persisted, and the performance hit of persisting it is too big. You have to balance the two somehow.

If persistence is what matters, drop the cache and write to persistent storage (file, database, etc.) on every request. If speed is all you care about, change nothing; the whole question becomes moot. There are caching systems with persistent backing stores that optimize disk writes by aggregating what gets written, and when; each of them sits at a different tipping point between the two extremes, and you just have to pick the one that fits your goals.

There will probably never be a robust, general-purpose technical solution that keeps the wolves fed and the lamb whole.

If you really must do it your own way, you could have a cron job that cURLs a special request to your application which saves the cache to disk. That way you control the request, its timeout, and so on, and you do not need to worry about anything users might do to kill their requests.
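A rough sketch of that setup (the dump_cache.php endpoint, the token, the counter: key prefix, and the dump path are all illustrative assumptions):

    <?php
    // dump_cache.php -- hit hourly by cron, for example:
    //   0 * * * * curl -s --max-time 300 "http://localhost/dump_cache.php?token=SECRET"

    if (!isset($_GET['token']) || $_GET['token'] !== 'SECRET') {
        header('HTTP/1.1 403 Forbidden'); // reject anything but the cron job
        exit;
    }

    set_time_limit(0); // this request is allowed to run for minutes

    // Snapshot every counter in the user cache.
    $snapshot = array();
    foreach (new APCIterator('user', '/^counter:/') as $entry) {
        $snapshot[$entry['key']] = $entry['value'];
    }

    file_put_contents(
        '/var/tmp/apc-dump-' . date('YmdHis') . '.json',
        json_encode($snapshot)
    );
    echo 'dumped ' . count($snapshot) . " entries\n";
    ?>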

The potential risks here, however, are data integrity (you will be writing the cache to disk while other requests are updating it at the same time), and the fact that requests served during the dump will hit a server that is busy with it.
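One way to soften the integrity risk (a suggestion of mine, not a complete fix) is to harvest each counter and atomically subtract exactly the amount that was read, so increments landing mid-dump are not lost:

    <?php
    // persist_to_disk() is a hypothetical helper.
    foreach (new APCIterator('user', '/^counter:/') as $entry) {
        $read = $entry['value'];
        apc_dec($entry['key'], $read); // atomic, so concurrent apc_inc()s survive
        persist_to_disk($entry['key'], $read);
    }
    ?>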

Essentially, we have introduced a bale of hay into the wolf/lamb dilemma ;)

