Doug's suggestion with Redis is not bad, but I think his code is a bit convoluted and, as a result, quite slow. For my purposes, I had to serialize and write, then fetch and deserialize, a square matrix of about a million floats in under a tenth of a second, so I did this:
For the record:
```python
import numpy as np
import redis

rs = redis.Redis()  # assumes a Redis server running on localhost:6379

snapshot = np.random.randn(1024, 1024)  # ~a million float64 values
serialized = snapshot.tobytes()         # raw array bytes
rs.set('snapshot_key', serialized)
```
Then for reading:
```python
s = rs.get('snapshot_key')
deserialized = np.frombuffer(s, dtype=np.float64)  # dtype must match what was written
rank = int(np.sqrt(deserialized.size))             # side length of the square matrix
snap = deserialized.reshape(rank, rank)
```
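If the reading side doesn't know the dtype or shape in advance, one option is to store them under adjacent keys. Here's a minimal sketch of that idea; the key layout and the `set_array`/`get_array` helpers are my own invention, not part of redis-py:

```python
import numpy as np
import redis

rs = redis.Redis()  # assumes a local Redis server

def set_array(rs, key, arr):
    """Store the raw bytes plus the metadata needed to rebuild the array."""
    rs.set(key, arr.tobytes())
    rs.set(key + ':dtype', str(arr.dtype))
    rs.set(key + ':shape', ','.join(map(str, arr.shape)))

def get_array(rs, key):
    """Rebuild the array from the stored buffer and metadata."""
    buf = rs.get(key)
    dtype = np.dtype(rs.get(key + ':dtype').decode())
    shape = tuple(int(n) for n in rs.get(key + ':shape').decode().split(','))
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

set_array(rs, 'snapshot_key', np.random.randn(1024, 1024))
snap = get_array(rs, 'snapshot_key')
```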
You can do basic performance testing in IPython with the `%timeit` magic; neither `tobytes` nor `frombuffer` should take more than a few milliseconds.
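For example, in an IPython session (exact numbers are machine-dependent, so I won't quote any):

```python
In [1]: import numpy as np

In [2]: snapshot = np.random.randn(1024, 1024)

In [3]: %timeit snapshot.tobytes()
# copies ~8 MB of raw data; typically on the order of milliseconds

In [4]: buf = snapshot.tobytes()

In [5]: %timeit np.frombuffer(buf, dtype=np.float64)
# zero-copy view over the existing bytes, so essentially free
```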