There is no need whatsoever to use protobuf with redis; the key is simply to pick a serialization format that will reliably get your data back today, tomorrow and next year. You could just as well use json, xml, etc. In many cases a single string value is more than enough, bypassing serialization completely (unless you count encoding as serialization).
I would advise against platform-specific serializers, as they may not help you if you need to get the data back in (say) C++ a year from now, and they are usually less flexible when it comes to versioning.
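As a minimal sketch of the "plain string or json" approach using redis-py and the standard json module (the key names and sample object are just illustrative):

```python
import json
import redis  # redis-py client, assumed installed

r = redis.Redis(host="localhost", port=6379)

# A single string value needs no serialization framework at all.
r.set("user:42:email", "alice@example.com")

# For structured data, any stable format works; json is the simplest choice.
user = {"id": 42, "name": "Alice", "roles": ["admin", "editor"]}
r.set("user:42", json.dumps(user))

# Getting it back is the reverse: fetch the bytes, then deserialize.
restored = json.loads(r.get("user:42"))
assert restored["name"] == "Alice"
```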
Protobuf is a good choice because it has some key features:
- small output (reduces bandwidth between your application and redis, and storage requirements)
- CPU-efficient processing (reduces processing in your application)
- version tolerant
- cross-platform
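For illustration, the same round trip with protobuf might look like this. The `user_pb2` module is hypothetical (it would be generated by `protoc` from a `User` message); the redis-py client is assumed as above:

```python
import redis
import user_pb2  # hypothetical module generated by protoc from a User message
                 # (fields: int64 id, string name, repeated string roles)

r = redis.Redis()

# Serialize to a compact binary payload and store it as a plain value.
user = user_pb2.User(id=42, name="Alice", roles=["admin", "editor"])
r.set("user:42", user.SerializeToString())

# The same bytes parse back on any platform that shares the .proto definition.
restored = user_pb2.User()
restored.ParseFromString(r.get("user:42"))
```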
However, other serializers will work too. You could even use plain text and redis hashes, i.e. one hash field per object property. But in most cases you want to fetch the whole object, so a simple "get" followed by handing the data to a suitable serialization API is usually more appropriate.
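A rough sketch of that hash-per-property alternative (key and field names are illustrative):

```python
import redis

r = redis.Redis()

# One hash field per object property: fields can be read or updated individually.
r.hset("user:42", mapping={"id": 42, "name": "Alice", "email": "alice@example.com"})

print(r.hget("user:42", "name"))  # b'Alice' -- fetch a single property
print(r.hgetall("user:42"))       # the whole object, as a dict of bytes
```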
In our own use of redis, we do use protobuf, but we also speculatively run the protobuf output through gzip ("does this compress at all?") - if it does, we send the gzipped data (or store the original uncompressed data if that turns out to be smaller), along with, obviously, a marker to say which it is.
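A minimal sketch of that speculative-compression idea; the single marker byte and key naming are assumptions for illustration:

```python
import gzip
import redis

r = redis.Redis()

def store(key: str, payload: bytes) -> None:
    # Try compressing; only keep the gzipped form if it is actually smaller.
    compressed = gzip.compress(payload)
    if len(compressed) < len(payload):
        r.set(key, b"\x01" + compressed)   # marker byte: gzipped
    else:
        r.set(key, b"\x00" + payload)      # marker byte: stored as-is

def load(key: str) -> bytes:
    data = r.get(key)
    marker, body = data[:1], data[1:]
    return gzip.decompress(body) if marker == b"\x01" else body
```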