I am trying to store records with a set of doubles and ints (about 15-20 fields) in MongoDB. The records almost always (99.99%) have the same structure.
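For illustration, a record looks roughly like this (the field names here are made up, but the shape is representative), inserted with pymongo:

```python
# Hypothetical record shape: ~15-20 numeric fields, almost always identical structure.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["physics"]["events"]  # hypothetical database/collection names

record = {
    "run": 1234,        # int
    "event": 567890,    # int
    "energy": 12.345,   # double
    "px": 0.12,         # doubles...
    "py": -0.34,
    "pz": 7.89,
    # ... and another dozen or so numeric fields like these
}
coll.insert_one(record)
```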
When I store the data in ROOT, which is a highly structured data storage format, the file is around 2.5 GB for 22.5 million records. With MongoDB, however, the database size (from the show dbs command) is about 21 GB, while the data size (from db.collection.stats()) is 13 GB.
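For reference, here is a sketch of how the same numbers can be read programmatically via pymongo (assuming the hypothetical database/collection names from above):

```python
# collStats ~ db.collection.stats(); dbStats ~ db.stats(), roughly what show dbs reflects
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["physics"]
coll_stats = db.command("collStats", "events")
db_stats = db.command("dbStats")

print("data size:   ", coll_stats["size"])          # raw BSON bytes (the ~13 GB figure)
print("storage size:", coll_stats["storageSize"])   # space allocated on disk
print("index size:  ", coll_stats["totalIndexSize"])
print("db storage:  ", db_stats["storageSize"])
```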
This is a huge overhead (to clarify: 13 GB vs 2.5 GB, and I'm not even talking about the 21 GB), and I think it is because MongoDB stores the keys as well as the values in every document. So the question is: why doesn't Mongo do a better job of reducing this, and how?
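To show what I mean about key overhead, encoding the same values under long vs short field names gives visibly different BSON sizes (a sketch; the field names are hypothetical):

```python
# Every BSON document stores its field names, so key length is paid once per record.
import bson  # ships with pymongo

long_keys = {"transverse_momentum": 12.345, "pseudorapidity": -0.67,
             "azimuthal_angle": 1.23, "event_number": 567890}
short_keys = {"pt": 12.345, "eta": -0.67, "phi": 1.23, "evt": 567890}

print(len(bson.encode(long_keys)))   # bigger document, same values
print(len(bson.encode(short_keys)))  # smaller; the savings multiply by 22.5M records
```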
But the main question: what is the impact of this on performance? I have 4 indexes, and together they exceed 3 GB, so running the server on a single 8 GB machine can become a problem if I double the amount of data and try to keep a large working set in memory.
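For what it's worth, this is how I check which indexes dominate (a sketch, using the same assumed names as above):

```python
# Per-index sizes from collStats; these compete with the working set for RAM.
from pymongo import MongoClient

stats = MongoClient("mongodb://localhost:27017")["physics"].command("collStats", "events")
for name, size in stats["indexSizes"].items():
    print(f"{name}: {size / 1024**3:.2f} GB")
print(f"total: {stats['totalIndexSize'] / 1024**3:.2f} GB")
```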
Any suggestions on whether I should use SQL or some other DB? Or should I just keep working with ROOT files, if anyone has tried them?