Architectural design of a write-heavy recording application

I am working on a real-time tracking system where a single device can collect about 2 million GPS points per year (i.e. 1 point every 5 seconds, operating 8 hours a day, 365 days a year). Deployed globally with thousands of devices, that adds up to billions of records per year.
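
For reference, the back-of-the-envelope arithmetic behind those numbers (the 5,000-device fleet size below is only an illustrative figure, not a real deployment count):

    # Rough volume estimate for one device and for a fleet.
    SECONDS_PER_POINT = 5
    HOURS_PER_DAY = 8
    DAYS_PER_YEAR = 365

    points_per_device_per_year = (HOURS_PER_DAY * 3600 // SECONDS_PER_POINT) * DAYS_PER_YEAR
    print(points_per_device_per_year)            # 2,102,400 -> roughly 2 million per device

    EXAMPLE_FLEET_SIZE = 5_000                   # "thousands of devices" (illustrative)
    print(points_per_device_per_year * EXAMPLE_FLEET_SIZE)   # ~10.5 billion points per year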

I know that SQL Server can handle this volume. But I also need to do real-time tracking with thousands of devices recording in parallel. It works fine with a few devices, but it becomes processor-intensive when I open many tracking pages at once.

I plan to try:

  • MongoDB (see the storage sketch after this list)
  • A socket-based approach with Kaazing.
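
For context, here is a minimal sketch of how I imagine storing one GPS point in MongoDB, assuming the pymongo driver; the database, collection, and field names are purely illustrative, not an existing schema:

    from datetime import datetime, timezone
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
    points = client["tracking"]["gps_points"]           # illustrative database/collection names

    # One small, flat document per GPS fix keeps inserts cheap.
    points.insert_one({
        "device_id": "device-001",                       # hypothetical device identifier
        "ts": datetime.now(timezone.utc),
        "lat": 52.5200,
        "lon": 13.4050,
        "speed_kmh": 42.0,
    })

    # Compound index so "latest points for a device" queries stay fast as volume grows.
    points.create_index([("device_id", ASCENDING), ("ts", ASCENDING)])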

Any alternative suggestions?

1 answer

Given the information you posted, there is nothing wrong with your architecture. However, the devil is in the details. First, a lot depends on how well your database is designed: how well your queries are written, your indexes, triggers, and so on.
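
For example, the single biggest win for this kind of append-heavy table is usually an index that matches how the tracking pages read the data. Below is a sketch only, assuming a SQL Server table named GpsPoints and the pyodbc driver; the table, column, and index names are illustrative:

    import pyodbc

    # Placeholder connection string; adjust server, database and credentials.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=Tracking;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # Index ordered by device then time, so "last N points for device X"
    # becomes a narrow range scan instead of a full-table scan.
    cur.execute("""
        CREATE INDEX IX_GpsPoints_Device_Time
        ON GpsPoints (DeviceId, RecordedAt)
        INCLUDE (Latitude, Longitude);
    """)
    conn.commit()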

In addition, if these are mobile devices of any kind, you should not use a traditional socket-based connection. You cannot depend on a stable TCP connection to a remote server. Instead, use a stateless architecture such as REST to expose and write your data. REST is very easy to implement in .NET, by the way. This shifts the scaling complexity from the database to the web servers.
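
I would do this in .NET as mentioned above; as a language-neutral illustration, here is a minimal sketch of such a stateless write endpoint using Python/Flask. The route, field names, and store_points helper are assumptions for the sketch, not an existing API:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/devices/<device_id>/points", methods=["POST"])
    def record_points(device_id):
        # Each request is self-contained: the device can retry it at any time
        # without holding a long-lived TCP connection to the server.
        points = request.get_json(force=True)       # e.g. a small batch of GPS fixes
        saved = store_points(device_id, points)     # hypothetical persistence helper
        return jsonify({"accepted": saved}), 202

    def store_points(device_id, points):
        # Placeholder: in a real system this would write to SQL Server / MongoDB.
        return len(points)

    if __name__ == "__main__":
        app.run()

Because every request carries all the state it needs, any web server in a pool can handle it, which is what lets the web tier scale out horizontally.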

Finally, to minimize the work performed on the server, I would use a caching layer or buffer pool on each device: a read cache for the data the device needs, and a write cache for the data it sends to the central server. The write cache is vital, since you cannot depend on a stable TCP connection or on transaction management from the server. Keep the data you want to write in a queue, and dequeue an item only once the server confirms it received that data. The queue should be flushed whenever there is data and a working connection. That said, I would need to know more about your requirements before I can say anything definitive or give more detailed advice.
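
As a concrete illustration of that device-side write queue, here is a minimal sketch, assuming the same hypothetical REST endpoint as above (the URL and status code are assumptions): points are appended locally, sent when a connection is available, and removed only after the server acknowledges them.

    import collections
    import requests

    SERVER_URL = "https://tracking.example.com"      # placeholder endpoint

    class WriteCache:
        """Device-side queue: points leave the queue only after the server confirms them."""

        def __init__(self, device_id):
            self.device_id = device_id
            self.queue = collections.deque()         # in practice, back this with flash/disk

        def record(self, point):
            # Always enqueue locally first, regardless of connectivity.
            self.queue.append(point)

        def flush(self):
            # Drain the queue whenever there is data and a working connection.
            while self.queue:
                batch = list(self.queue)
                try:
                    resp = requests.post(
                        f"{SERVER_URL}/devices/{self.device_id}/points",
                        json=batch, timeout=10,
                    )
                except requests.RequestException:
                    return                           # no connection: keep the data queued
                if resp.status_code == 202:          # server confirmed receipt
                    for _ in batch:
                        self.queue.popleft()         # only now is it safe to drop the data
                else:
                    return                           # server error: retry on the next flush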
