Can MySQL Cluster handle a terabyte database?

I need to find a solution for providing a MySQL database that can handle data volumes in the terabyte range and be highly available (five nines). Each database row is likely to hold a timestamp and up to 30 floating-point values. The expected workload is up to 2500 inserts per second. Queries are likely to be less frequent but can be large (possibly touching 100 GB of data), though probably only against separate tables.
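To make the shape of the data concrete, here is a rough sketch of the kind of table and write pattern I have in mind (the table and column names are made up for illustration, and the real schema may differ):

    -- Hypothetical table: one timestamp plus up to 30 float columns per row.
    CREATE TABLE measurements (
        ts   DATETIME NOT NULL,
        ch1  FLOAT,
        ch2  FLOAT,
        -- ... ch3 through ch29 omitted for brevity ...
        ch30 FLOAT,
        INDEX (ts)    -- indexed for time-range queries
    );

    -- The write load is a steady stream of rows like this, up to ~2500 per second:
    INSERT INTO measurements (ts, ch1, ch2, ch30)
    VALUES ('2009-01-01 12:00:00', 1.0, 2.0, 3.0),
           ('2009-01-01 12:00:01', 1.1, 2.1, 3.1);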

I have been looking at MySQL Cluster, given that this is their HA offering. Because of the data volume I would need to use disk-based storage. As far as I can tell, that means only the timestamps could be held in memory, with all the other data stored on disk.
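For reference, this is my understanding of how the same hypothetical table would be declared as a disk-based table in MySQL Cluster (the logfile group, tablespace, file names, and sizes below are placeholders). Indexed columns cannot be put on disk, which is why I say only the timestamps could stay in memory:

    -- Disk data in MySQL Cluster first needs an undo logfile group and a tablespace.
    CREATE LOGFILE GROUP lg_1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 256M
        ENGINE NDBCLUSTER;

    CREATE TABLESPACE ts_1
        ADD DATAFILE 'data_1.dat'
        USE LOGFILE GROUP lg_1
        INITIAL_SIZE 8192M
        ENGINE NDBCLUSTER;

    -- The indexed timestamp column stays in memory; the non-indexed float
    -- columns can be stored in the on-disk tablespace.
    CREATE TABLE measurements (
        ts   DATETIME NOT NULL STORAGE MEMORY,
        ch1  FLOAT STORAGE DISK,
        ch2  FLOAT STORAGE DISK,
        -- ... ch3 through ch29 omitted for brevity ...
        ch30 FLOAT STORAGE DISK,
        INDEX (ts)
    ) TABLESPACE ts_1 STORAGE DISK ENGINE=NDBCLUSTER;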

Does anyone have experience using MySQL Cluster with a database of this scale? Is it even viable? How does disk-based storage affect performance?

I am also open to other suggestions on how to achieve the desired availability for this amount of data. For example, would it be better to use a third-party library like Sequoia to handle the clustering of standard MySQL instances? Or a more direct MySQL replication solution?
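To be clear about the replication route, what I have in mind is just the standard master/slave setup, roughly along these lines (server IDs, host names, credentials, and log coordinates are placeholders):

    -- my.cnf on the master: server-id = 1, log-bin = mysql-bin
    -- my.cnf on the slave:  server-id = 2

    -- On the slave, point it at the master and start replicating:
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;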

The only hard condition is that it must be a MySQL-based solution. I don't think MySQL is the best fit for the data we are dealing with, but it is a strict requirement.



One answer worked through the throughput arithmetic: 2500 inserts per second of rows holding a timestamp and up to 30 floats comes to roughly 300 KB/sec (with a parenthetical comparison involving 100 RS-422 serial links at 9600 baud), and it asked whether all 30 float values are actually populated on every row or whether most of them end up NULL.


