Twemproxy lag forces a restart

We run a PHP stack on our application servers, each of which uses a local twemproxy instance (connected via unix socket) to talk to several memcached servers (small EC2 instances) for our caching layer.

Every so often I get a warning from our application monitor that page load time exceeds 5 seconds. When this happens, the immediate fix is to restart the twemproxy service on each application server, which is the problem.

The only other fix I have is a crontab entry that runs every minute and restarts the service, but as you can imagine, cache writes fail for a few seconds out of every minute, which is not a desirable, permanent solution.
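Roughly speaking, that workaround is nothing more than a one-line cron entry along these lines (the service name and path are illustrative; adjust for however nutcracker is managed on your boxes):

# illustrative crontab entry: restart the local twemproxy (nutcracker) service every minute
* * * * * /usr/sbin/service nutcracker restart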

Has anyone come across this before? If so, what was the problem? I tried switching to AWS ElastiCache, but it did not match the performance of our current twemproxy setup.

Here is my twemproxy configuration.

default:
  auto_eject_hosts: true          # temporarily eject a backend from the ring when it is marked as failed
  distribution: ketama
  hash: fnv1a_64
  listen: /var/run/nutcracker/nutcracker.sock 0666
  server_failure_limit: 1         # eject after a single failure
  server_retry_timeout: 600000    # 600sec, 10m before retrying an ejected server
  timeout: 100                    # connect/response timeout in msec
  servers:
    - vcache-1:11211:1
    - vcache-2:11211:1

And here is the connection configuration at the PHP level:

# Note: We are using HA / twemproxy (nutcracker) / memcached proxy
# So this isn't a default memcache(d) port
# Each webapp will host the cache proxy, which allows us to connect via socket,
# which should be faster, as there is no tcp overhead
# Hash has been manually overridden from the default (jenkins) to FNV1A_64, which directly aligns with the proxy
port: 0
<?php echo Hobis_Api_Cache::TYPE_VOLATILE; ?>:
  options:
    - <?php echo Memcached::OPT_HASH; ?>: <?php echo Memcached::HASH_FNV1A_64; ?><?php echo PHP_EOL; ?>
    - <?php echo Memcached::OPT_SERIALIZER; ?>: <?php echo Memcached::SERIALIZER_IGBINARY; ?><?php echo PHP_EOL; ?>
  servers:
    - /var/run/nutcracker/nutcracker.sock

We run twemproxy 0.4.1 and memcached 1.4.25.

Thanks.

Tags: php, caching, memcached, twemproxy
3 answers

In the end, I switched from the unix socket to a TCP port on localhost, and that seems to have solved the restart problem. However, I did notice a bump in response time after making the switch, due to the overhead associated with TCP. I'm not accepting this answer, in the hope that someone will come along and post a more authoritative answer about the socket issue...
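For reference, the change amounts to something like the following (the port number 22122 is illustrative; use whatever free localhost port suits your setup):

# twemproxy side: listen on a localhost tcp port instead of the unix socket
default:
  listen: 127.0.0.1:22122
  ...

# php side: point the memcached client at the same localhost port instead of the socket path
servers:
  - 127.0.0.1:22122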


The number of open / aging socket connections can be a problem
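If you want to verify that, twemproxy exposes per-pool connection counters (client_connections, server_connections, etc.) on its stats port, 22222 by default, and you can also list the open connections on the unix socket directly; something like:

# dump twemproxy's JSON stats (assumes the default stats port of 22222)
nc 127.0.0.1 22222

# list open unix-socket connections to the proxy
ss -x | grep nutcracker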


I don't know much about twemproxy and memcached, but I'll give you a link for more details. Maybe it will be useful for you.

https://github.com/twitter/twemproxy

