I am curious about the following scenario: we have a large collection/table with a lot of data, and a read-oriented (SELECT-style) query against it that is long-running and takes about 3 seconds.
However, the environment is highly concurrent: every second, roughly 100 new records are inserted into the database.
So, suppose that at the moment the query starts, 1000 rows satisfy it. The query runs for 3 seconds, and during each of those seconds 50 new rows that also match the query are inserted. My questions are: what result will the query return to me (the original 1000 rows, 1150, or something in between), and how does this depend on the database engine (SQL vs. NoSQL)?
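To make the scenario concrete, here is a minimal sketch of the two concurrent sessions, assuming a hypothetical table named events with a matching status column (both names are made up purely for illustration):

    -- Session A: the long-running read (takes ~3 seconds on a large table)
    SELECT COUNT(*) FROM events WHERE status = 'active';

    -- Session B: runs concurrently, ~50 matching inserts per second
    INSERT INTO events (status, payload) VALUES ('active', '...');
    -- ...repeated while Session A's SELECT is still scanning

Whether Session A's count comes back as 1000, or includes some of Session B's concurrent inserts, is exactly what each engine's isolation/consistency model determines.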
This is not so much a question about the exact number as about why the result will be that number.
The question may be a bit broad, so let's limit the databases to MySQL, Postgres, MongoDB, and Cassandra.