You haven't said what the data structure looks like, how it's consolidated, how quickly the data needs to be available to users, or how much the consolidation process can be simplified.
That said, the most immediate problem will be absorbing 5,000 rows per second of inserts. You'll need a very large, very fast machine (possibly a shard of a cluster).
If possible, I'd recommend building a consolidation buffer (using an in-memory hash table rather than the DBMS) to hold the consolidated data, even if it's only partially consolidated, and then updating the processed-data table from that buffer rather than trying to populate it directly from rawData.
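A minimal sketch of what that in-memory consolidation buffer might look like. The keying scheme, count/total aggregation, and flush threshold here are assumptions for illustration; adapt them to your actual schema:

```python
from collections import defaultdict

class ConsolidationBuffer:
    """In-memory hash table that pre-aggregates raw rows before
    they are flushed to the consolidated-data table."""

    def __init__(self, flush_threshold=10_000):
        # One bucket per consolidation key instead of one row per raw event.
        self.buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
        self.flush_threshold = flush_threshold
        self.pending = 0

    def add(self, key, value):
        # Consolidate in memory; returns a batch when it's time to flush.
        bucket = self.buckets[key]
        bucket["count"] += 1
        bucket["total"] += value
        self.pending += 1
        if self.pending >= self.flush_threshold:
            return self.flush()
        return None

    def flush(self):
        # In a real system this would become a single batched
        # INSERT ... ON DUPLICATE KEY UPDATE against the consolidated table,
        # turning thousands of raw inserts into a handful of upserts.
        batch = dict(self.buckets)
        self.buckets.clear()
        self.pending = 0
        return batch

buf = ConsolidationBuffer(flush_threshold=3)
buf.add("sensor-1", 2.0)
buf.add("sensor-1", 3.0)
batch = buf.add("sensor-2", 5.0)  # threshold reached, batch returned
```

The point is that the DBMS only ever sees the (much smaller) flushed batches, not the raw 5,000 rows/sec stream.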
In fact, I'd probably consider splitting the raw and consolidated data onto separate servers/clusters (MySQL's MERGE storage engine is convenient for providing a unified view of the data).
Have you analyzed your queries to find out which indexes you really need? (Hint: this script is very useful for that.)