Understanding the implementation of the hash and its memory in Redis

From the documentation we know that Redis uses a compact encoding for hashes whose size stays within a configurable range (the default limit is 512 entries). If a hash exceeds that range, its memory usage can differ by roughly a factor of 10.
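As a rough sketch of that rule: the two threshold names below are real Redis config options (`hash-max-ziplist-entries`, `hash-max-ziplist-value`), but this is only a simplified model of the decision, not Redis's actual code.

```python
# Simplified model of how Redis picks the encoding for a hash.
# The threshold names are real Redis config options; the logic here
# is an illustrative sketch, not the actual implementation.

HASH_MAX_ZIPLIST_ENTRIES = 512   # default limit discussed in this post
HASH_MAX_ZIPLIST_VALUE = 64      # max field/value length for the compact encoding

def hash_encoding(num_entries, max_value_len):
    """Return the encoding Redis would use for a hash of this shape."""
    if (num_entries <= HASH_MAX_ZIPLIST_ENTRIES
            and max_value_len <= HASH_MAX_ZIPLIST_VALUE):
        return "ziplist"    # compact sequential encoding, cheap per entry
    return "hashtable"      # real dict: fast lookups, much heavier per entry

print(hash_encoding(512, 10))   # ziplist
print(hash_encoding(513, 10))   # hashtable
```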

I did a little experiment for hashes ranging from 1 to 512 and found an interesting pattern.

This graph shows the memory taken (in KB) by 1000 hashes, for entry counts from 1 to 512 per hash.

(image: memory usage graph for 1000 hashes of 1 to 512 entries)

Most of the time the memory grows gradually, but there are sharp steps: between 215 and 216 entries per hash the usage jumps from 4 to 8 KB, and between 420 and 421 entries it jumps from 8 to 12 KB. Before 215 entries the growth happens in much smaller increments, roughly 1/4, 1/5 and 1/6 of those jumps.

My questions are:

  • How is the hashmap implemented internally? Why does its memory grow in such a pattern?
  • Why is there such a big jump between 215 and 216 entries, while before 215 the memory grows in much smaller steps?
  • For example, 1 hash with 250 entries takes about 800, while 2 hashes with 125 entries each take only about 500 in total (in the units of the graph above), i.e. about 300 less!! Why is that? Is there a way to take advantage of this?


I reproduced your experiment with 1000 hashes in redis, with entry counts up to 512, and I can tell you that the steps come from jemalloc, the default memory allocator.

Here is the same test with libc as the mem_allocator:

(image: memDistrib-libc.svg, memory distribution with the libc allocator)

You can build redis against libc like this:

make MALLOC=libc

As you can see, the curve is now smooth, without the big steps.

Now to your questions:

  • How is the hashmap implemented internally? Why does its memory grow in such a pattern?

    The steps do not come from redis itself but from the memory allocator. Jemalloc serves allocations from fixed size classes, so a slightly larger request can jump to the next class.

  • , 215 216, 215, , .

    Same reason as above: it is the allocator's size classes, not redis.

  • , 1 , 250 , 800 . 2 125 , 2 125 500 . , 300 , !!. ? - ?

    Yes, you can save some memory this way, but it is not free. More hashes means more top-level keys, and every key has its own overhead. Also note that if you split 1 hash into 2 and they keep growing, redis will perform rehashing on each of them (doubling the internal table), so the balance will not always stay in your favor, and you pay with extra complexity on the client side.
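To make the allocator argument concrete, here is a toy model of size-class rounding. The class list is a simplified version of jemalloc's real size classes, and the 19-byte per-entry cost is an assumption picked for illustration (the real compact-encoding cost depends on field and value lengths); with these assumed numbers the model even reproduces a 4 KB to 8 KB step between 215 and 216 entries, and shows why splitting one hash into two smaller ones can land in cheaper classes.

```python
# Toy model of allocator size classes (a simplification of jemalloc's
# real class list; the per-entry cost below is an assumption).

PAGE = 4096

# Sub-page size classes, roughly 4 classes per doubling like jemalloc.
SMALL_CLASSES = [8, 16, 32, 48, 64, 80, 96, 112, 128,
                 160, 192, 224, 256, 320, 384, 448, 512,
                 640, 768, 896, 1024, 1280, 1536, 1792, 2048,
                 2560, 3072, 3584, 4096]

def alloc_size(requested):
    """Bytes actually reserved for a request under this toy model."""
    for c in SMALL_CLASSES:
        if requested <= c:
            return c
    # Above one page: round up to whole pages; this kind of rule is
    # what produces 4 KB -> 8 KB -> 12 KB steps.
    return -(-requested // PAGE) * PAGE   # ceiling division

BYTES_PER_ENTRY = 19   # assumed average cost of one compact-encoded entry

def hash_footprint(entries):
    return alloc_size(entries * BYTES_PER_ENTRY)

# The step between 215 and 216 entries:
print(hash_footprint(215), hash_footprint(216))   # 4096 8192

# One hash of 250 entries vs two hashes of 125 entries:
print(hash_footprint(250), 2 * hash_footprint(125))   # 8192 5120
```

The split version wins because each half rounds up to a nearby sub-page class, while the single big hash rounds up to a whole extra page.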


@sel-fish Thanks for the answer. I checked it myself and can confirm your results.

I ran the same comparison between the jemalloc and libc allocators; my graphs are below. As you can see, with libc the big steps disappear.

(images: memory usage graphs for jemalloc and for libc)

It is interesting that, overall, jemalloc still uses less memory than libc; only in some ranges does libc come out ahead. So switching from the default jemalloc to libc does not look worth it, and I will stay with jemalloc. Thanks again.


You may find these two articles useful: Redis: Hash (part 1) and Redis: Hash (part 2). In short:

  • A hash is stored either as a ziplist or as a dict, depending on its size.
  • Redis uses a fill factor threshold on the hash to decide when to double the hash table size.

Keep in mind that Redis uses a dict to handle the key space as well. Therefore, every time you create a new key (of any type), you put it into an internal key hash table. So the same logic applies there: it grows like a dict when you add new keys to Redis.
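The "grows like a dict" behaviour can be sketched as follows. This is a simplification of Redis's dict expansion logic (the table size is a power of two starting at 4 and doubles once the number of entries reaches the table size); real Redis also rehashes incrementally, which this sketch skips.

```python
def next_power_of_two(n):
    """Smallest power of two >= n, starting from Redis's initial size of 4."""
    size = 4
    while size < n:
        size *= 2
    return size

class TinyDict:
    """Toy model of Redis dict growth (no incremental rehash)."""
    def __init__(self):
        self.used = 0
        self.size = 4

    def add(self, n=1):
        self.used += n
        # Expand when the table is full, to the next power of two
        # large enough to hold used*2 elements.
        if self.used >= self.size:
            self.size = next_power_of_two(self.used * 2)

d = TinyDict()
sizes = []
for _ in range(300):
    d.add()
    sizes.append(d.size)

print(sizes[3], sizes[249])   # table size after 4 inserts and after 250 inserts
```

Under this model a dict holding 250 entries sits in a 256-slot table, which is why memory jumps in powers of two as a hash (or the key space) grows.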

