Redis memory usage keeps growing until the process is killed by the OOM killer

Currently, my server with 8 GB of RAM uses up to 5.33 GB for Redis (the rest of the server occupies about 1.6 GB, so even immediately after rebooting the server I am already at ~7 GB of RAM used [88%]). Redis memory usage keeps growing until the process is killed by Ubuntu's OOM killer, which causes errors in my Node application.

I have included the Redis INFO output at the bottom of this post. Initially I thought there might be too many keys in Redis, but the Redis FAQ ( http://redis.io/topics/faq ) says that 1 million keys take roughly 100 MB. We have about 2 million keys (~200 MB, nowhere near 5 GB), so that can't be the problem.

My questions are:

- Where is Redis consuming all this memory? The keyspace itself doesn't appear to account for much of it.
- What can I do to stop the continuous memory growth?
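
For anyone digging into the same question, here is a minimal sketch of one way to sample key sizes from a Node process, assuming the ioredis client. DEBUG OBJECT and its serializedlength field are standard Redis commands available in 2.8 (serializedlength is only a rough proxy for in-memory size); the sample count and reporting below are illustrative:

```typescript
// Sketch: sample random keys and estimate their serialized size with DEBUG OBJECT.
// Assumes the ioredis client; sample count and output format are illustrative.
import Redis from "ioredis";

const redis = new Redis({ host: "127.0.0.1", port: 6379 });

async function sampleKeySizes(samples = 1000): Promise<void> {
  const sizes: { key: string; bytes: number }[] = [];

  for (let i = 0; i < samples; i++) {
    const key = await redis.randomkey();
    if (!key) break;

    // DEBUG OBJECT reports serializedlength, a rough lower bound on memory use.
    const info = (await redis.call("DEBUG", "OBJECT", key)) as string;
    const match = /serializedlength:(\d+)/.exec(info);
    if (match) sizes.push({ key, bytes: Number(match[1]) });
  }

  // Report the largest keys seen in the sample.
  sizes.sort((a, b) => b.bytes - a.bytes);
  console.log("Largest sampled keys:", sizes.slice(0, 20));
}

sampleKeySizes().finally(() => redis.disconnect());
```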

Thanks!

# Server
redis_version:2.8.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:f73a208b84b18824
redis_mode:standalone
os:Linux 3.2.0-55-virtual x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.3
process_id:1286
run_id:6d3daee5341a549dfaca63706c40c44086198317
tcp_port:6379
uptime_in_seconds:1390
uptime_in_days:0
hz:10
lru_clock:771223
config_file:/etc/redis/redis.conf

# Clients
connected_clients:198
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:72

# Memory
used_memory:5720230408
used_memory_human:5.33G
used_memory_rss:5826732032
used_memory_peak:5732485800
used_memory_peak_human:5.34G
used_memory_lua:33792
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.5.0

# Persistence
loading:0
rdb_changes_since_last_save:94
rdb_bgsave_in_progress:0
rdb_last_save_time:1412804004
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:40
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:382
total_commands_processed:36936
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:2421
keyspace_misses:1
pubsub_channels:1
pubsub_patterns:9
latest_fork_usec:1361869

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:15.95
used_cpu_user:101.34
used_cpu_sys_children:12.55
used_cpu_user_children:146.17

# Keyspace
db0:keys=2082234,expires=1162351,avg_ttl=306635722644
1 answer

Thanks for the answer, Itamar. I was under the false impression (and really hadn't thought it through) that the keys and the values would be roughly the same size. It turns out there were hashes of about 10 KB each being stored, and we had hundreds of thousands of them. Removing those did the trick.
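
For reference, hundreds of thousands of ~10 KB hashes easily add up to several gigabytes (e.g. 400,000 × 10 KB ≈ 4 GB), which closes the gap between the ~200 MB keyspace estimate and the 5.33 GB actually used. Below is a minimal sketch, assuming the ioredis client, of how such hashes could be located (and optionally deleted); the scan batch size and the 10 KB threshold are illustrative:

```typescript
// Sketch: walk the keyspace with SCAN and report hashes above a size threshold.
// Assumes the ioredis client; the threshold mirrors the ~10 KB figure above.
import Redis from "ioredis";

const redis = new Redis();
const THRESHOLD_BYTES = 10 * 1024;

async function findLargeHashes(): Promise<void> {
  const stream = redis.scanStream({ count: 500 });

  stream.on("data", async (keys: string[]) => {
    stream.pause(); // avoid overlapping batches while inspecting this one
    for (const key of keys) {
      if ((await redis.type(key)) !== "hash") continue;

      const info = (await redis.call("DEBUG", "OBJECT", key)) as string;
      const bytes = Number(/serializedlength:(\d+)/.exec(info)?.[1] ?? 0);
      if (bytes >= THRESHOLD_BYTES) {
        console.log(`${key}: ~${bytes} bytes`);
        // await redis.del(key); // uncomment to actually remove them
      }
    }
    stream.resume();
  });

  await new Promise<void>((resolve) => stream.on("end", () => resolve()));
}

findLargeHashes().finally(() => redis.disconnect());
```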

Thanks again.
