Is protobuf required with redis?

I am working on creating a key-value store using Redis. I suggested using a hashmap-style mapping of String (key) --> Object (value). I have been advised to serialize the object using protobuf.

If we are going to both write and read this data using Java (regardless of platform), is there any advantage to using protobuf? Can't I simply drop an object into Redis and get it back, or will casting it cause problems?

Performance is a major concern in this product, so we do not want to do any unnecessary processing.

1 answer

There is absolutely no requirement to use protobuf with redis; the key thing is simply: pick a serialization framework that will reliably get your data back today, tomorrow and next year. You could just as easily use JSON, XML, etc. In many cases a single string value is more than sufficient, bypassing serialization completely (unless you count encoding as serialization).
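
For the "single string value" case, here is a minimal sketch assuming the Jedis client (the host, port and key name are placeholders, not anything from the question):

```java
import redis.clients.jedis.Jedis;

public class PlainStringExample {
    public static void main(String[] args) {
        // Connect to a local redis instance (host/port are assumptions)
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // No serialization framework at all: the value is just a string
            jedis.set("user:42:email", "alice@example.com");
            String email = jedis.get("user:42:email");
            System.out.println(email);
        }
    }
}
```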

I would advise against platform-specific serialization, as it may hurt you if you ever need to get the data back out into (say) C++ a year from now, and such serializers tend to be less flexible in terms of versioning.

Protobuf is a reasonable choice, as it gives you the key features (a sketch of this setup follows the list):

  • small output (reducing both the bandwidth between your application and redis, and the storage requirements)
  • CPU-efficient processing (reducing processing in your application)
  • version tolerant
  • cross-platform
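
As a hedged sketch of the protobuf route, assuming a message class User generated by protoc from a hypothetical user.proto, plus the Jedis client for the redis side:

```java
import java.nio.charset.StandardCharsets;
import redis.clients.jedis.Jedis;
// Hypothetical class generated by protoc from a user.proto such as:
//   message User { string name = 1; int32 age = 2; }
import com.example.proto.UserProto.User;

public class ProtobufRedisExample {
    public static void main(String[] args) throws Exception {
        User user = User.newBuilder()
                .setName("Alice")
                .setAge(30)
                .build();

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Store the compact binary form under a byte[] key
            byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
            jedis.set(key, user.toByteArray());

            // Read it back and re-parse; the wire format is language-neutral,
            // so a C++ or Python client could parse the same bytes later
            byte[] raw = jedis.get(key);
            User roundTripped = User.parseFrom(raw);
            System.out.println(roundTripped.getName());
        }
    }
}
```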

However, other serializers will work too. You could even use plain text and redis hashes, i.e. one hash field per object property. In most cases, though, you will want to fetch the entire object, so simply getting the blob and running it through a suitable serialization API is usually more appropriate.
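
For comparison, a minimal sketch of the "one hash field per property" alternative, again assuming Jedis (key and field names are made up for illustration):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

public class RedisHashExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // One hash field per object property; no serializer involved
            jedis.hset("user:42", "name", "Alice");
            jedis.hset("user:42", "age", "30");

            // Individual fields can be read without fetching the whole object...
            String name = jedis.hget("user:42", "name");

            // ...or the whole object can be fetched as a field -> value map
            Map<String, String> fields = jedis.hgetAll("user:42");
            System.out.println(name + " " + fields);
        }
    }
}
```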

In our own use of redis we do use protobuf, but we also do a speculative "does the protobuf output get smaller with gzip?" check - if it does, we store the gzipped data (otherwise we store the original uncompressed data, since that is smaller), along with, obviously, a marker to say which of the two it is.
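
A minimal sketch of that speculative compression step, assuming the payload is already a protobuf byte array (the one-byte marker scheme and its values are an assumption for illustration, not the answerer's actual format):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class SpeculativeGzip {
    // Marker bytes are illustrative; any unambiguous scheme would do
    private static final byte RAW = 0;
    private static final byte GZIP = 1;

    /** Prefix the payload with a one-byte marker; gzip it only if that is smaller. */
    public static byte[] pack(byte[] payload) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(payload);
        }
        byte[] compressed = buffer.toByteArray();

        // Keep whichever representation is smaller, and record which one it was
        boolean useGzip = compressed.length < payload.length;
        byte[] chosen = useGzip ? compressed : payload;

        byte[] out = new byte[chosen.length + 1];
        out[0] = useGzip ? GZIP : RAW;
        System.arraycopy(chosen, 0, out, 1, chosen.length);
        return out;
    }
}
```

The reader side would inspect the first byte and either gunzip or use the remaining bytes directly before handing them to the protobuf parser.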

