Improve performance in a large MySQL table

I would like to ask how to improve performance of a large MySQL table that uses the InnoDB engine:

There is currently a table of 200 million rows in my database. This table periodically stores data collected by various sensors. The structure of the table is as follows:

CREATE TABLE sns_value (
    value_id int(11) NOT NULL AUTO_INCREMENT,
    sensor_id int(11) NOT NULL,
    type_id int(11) NOT NULL,
    date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    value int(11) NOT NULL,
    PRIMARY KEY (value_id),
    KEY idx_sensor_id (sensor_id),
    KEY idx_date (date),
    KEY idx_type_id (type_id) );

At first I thought about partitioning the table by month, but due to the constant addition of new sensors, a partition would reach the current size in about a month.

Another solution I came up with was partitioning the table by sensor. However, because of MySQL's limit of 1024 partitions per table, this is not an option.
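For reference, monthly RANGE partitioning of this table would look roughly as follows. This is a sketch only: the partition names are illustrative, and MySQL requires the partitioning column to appear in every unique key, so the primary key first has to be widened to (value_id, date).

```sql
-- Sketch only: monthly RANGE partitioning. MySQL requires the partitioning
-- column in every unique key, so the primary key is widened first.
ALTER TABLE sns_value
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (value_id, date);

-- UNIX_TIMESTAMP() is the partitioning function MySQL allows on TIMESTAMP columns.
ALTER TABLE sns_value
    PARTITION BY RANGE (UNIX_TIMESTAMP(date)) (
        PARTITION p201410 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-01 00:00:00')),
        PARTITION p201411 VALUES LESS THAN (UNIX_TIMESTAMP('2014-12-01 00:00:00')),
        PARTITION pmax    VALUES LESS THAN MAXVALUE
    );
```

New monthly partitions would then have to be added (or split out of pmax) as time advances, which is exactly the maintenance burden described above.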

I believe that the correct solution would be to use a table with the same structure for each of the sensors:

sns_value_XXXXX

where XXXXX is the sensor id. With around 1000 sensors, each table would hold only a small fraction of the current 200 million rows, and access to the data would be faster and simpler.

Would this be a viable approach? Is there a better way to handle this volume of data?

Some details about the environment:

  • 2x CPU, 8 GB RAM
  • LAMP stack (CentOS 6.5, MySQL 5.1.73)

The sensors measure different gas concentrations (CO, CO2, etc.).

The typical queries are:

1) Aggregates (avg, max, min) per type for one sensor over a time range:

SELECT round(avg(value)) as mean, min(value) as min, max(value) as max, type_id
FROM sns_value
WHERE sensor_id=1 AND date BETWEEN '2014-10-29 00:00:00' AND '2014-10-29 12:00:00'
GROUP BY type_id LIMIT 2000;

This query takes about 5 seconds.
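With only single-column indexes, MySQL will typically choose just one of idx_sensor_id or idx_date for this query. EXPLAIN shows which one (the exact output columns depend on the MySQL version):

```sql
EXPLAIN
SELECT round(avg(value)) AS mean, min(value) AS min, max(value) AS max, type_id
FROM sns_value
WHERE sensor_id=1
  AND date BETWEEN '2014-10-29 00:00:00' AND '2014-10-29 12:00:00'
GROUP BY type_id;
```

The key column of the output names the chosen index, and rows estimates how many rows MySQL expects to examine.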

2) Per-timestamp values for several types of one sensor:

SELECT sns_value.date AS date,
sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 101)))))) AS one,
sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 141)))))) AS two,
sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 151)))))) AS three
FROM sns_value
WHERE sns_value.sensor_id=1 AND sns_value.date BETWEEN '2014-10-28 12:28:29' AND '2014-10-29 12:28:29'
GROUP BY sns_value.sensor_id,sns_value.date LIMIT 4500;

This one also takes about 5 seconds.
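The 1 - abs(sign(type_id - N)) factor evaluates to 1 when type_id = N and to 0 otherwise, so each SUM picks out a single type. An equivalent and arguably more readable form of the same pivot uses CASE:

```sql
SELECT date,
       SUM(CASE WHEN type_id = 101 THEN value ELSE 0 END) AS one,
       SUM(CASE WHEN type_id = 141 THEN value ELSE 0 END) AS two,
       SUM(CASE WHEN type_id = 151 THEN value ELSE 0 END) AS three
FROM sns_value
WHERE sensor_id=1
  AND date BETWEEN '2014-10-28 12:28:29' AND '2014-10-29 12:28:29'
GROUP BY sensor_id, date
LIMIT 4500;
```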


EDIT 02/02/2015:

The table keeps growing; it now holds about 250 million rows spanning 365 days. After adding a composite index on (sensor_id, date, type_id, value), the queries went from about 30 seconds down to 2.


Is this a good long-term solution? Is there anything else I can improve?

Thanks!


This is a nice, clear question. First, some facts about how MySQL uses indexes:

  • MySQL will generally use only one index per table in a query, so your three single-column indexes cannot be combined for these queries.
  • An index that begins with the columns in the WHERE clause lets MySQL narrow the scan quickly.
  • An index that contains every column a query touches (a covering index) lets MySQL answer the query from the index alone, without reading the table rows.

InnoDB stores the table itself in primary-key order, i.e. by value_id, which is essentially insertion order. Your queries, however, look rows up by sensor_id, date and type_id, so they depend entirely on secondary indexes.

(By the way, date is a poor column name: it is a reserved word, which causes confusion. The revised table below uses ts instead.)

One warning: at this insert rate your int(11) value_id will eventually overflow. The revised table uses bigint(20) instead.

Now let's look at your queries.

SELECT round(avg(value)) as mean, min(value) as min, max(value) as max,
       type_id
  FROM sns_value
 WHERE sensor_id=1
  AND date BETWEEN '2014-10-29 00:00:00' AND '2014-10-29 12:00:00'
GROUP BY type_id LIMIT 2000;

This query filters on sensor_id and a range of date, then groups by type_id and aggregates value. A compound covering index on (sensor_id, date, type_id, value) matches that access pattern exactly: MySQL can satisfy the entire query from the index without touching the table rows, which should cut the five-second time dramatically.
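If you keep the existing table, the index can be added in place. A sketch (the index name is arbitrary); note that on a table this size under MySQL 5.1 the ALTER rebuilds the table, so it will run for a long time and needs free disk space on the order of the table size:

```sql
ALTER TABLE sns_value
    ADD INDEX query_opt (sensor_id, date, type_id, value);
```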

Now the second query:

SELECT sns_value.date AS date,
       sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 101)))))) AS one,
       sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 141)))))) AS two,
       sum((sns_value.value * (1 - abs(sign((sns_value.type_id - 151)))))) AS three
  FROM sns_value
 WHERE sns_value.sensor_id=1
   AND sns_value.date BETWEEN '2014-10-28 12:28:29' AND '2014-10-29 12:28:29'
 GROUP BY sns_value.sensor_id,sns_value.date
 LIMIT 4500;

Here too the filter is on sensor_id and a range of date, and the only other columns used are type_id and value. The same covering index therefore serves both queries. The revised table definition:

CREATE TABLE sns_value (
    value_id  bigint(20) NOT NULL AUTO_INCREMENT,
    sensor_id int(11) NOT NULL,
    type_id   int(11) NOT NULL,
    ts        timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    value int(11) NOT NULL,
  PRIMARY KEY        (value_id),
  INDEX    query_opt (sensor_id, ts, type_id, value)
);

Another option: drop the surrogate key entirely.

The auto_increment column value_id is pure overhead here. You never look rows up by it, and it costs space in the table and in every secondary index. A natural primary key on the reading itself identifies each row just as well.

EDIT: I have reworked the table definition below. The date column is renamed reading_time, since date is a reserved word.

CREATE TABLE snsXX_readings (
    sensor_id int(11) NOT NULL,
    reading int(11) NOT NULL,
    reading_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    type_id int(11) NOT NULL,

    PRIMARY KEY (reading_time, sensor_id, type_id),
    KEY idx_date (reading_time),
    KEY idx_type_id (type_id) 
);

With one table per sensor (the XX in snsXX_readings), each table stays small enough to query quickly.


You can try to get randomized summary data.

I have a similar table: MyISAM engine (smallest on-disk table size), about 10 million records, and no index on the table, because an index proved useless for this access pattern (verified). Getting the full range of data with the query below takes about 10 seconds.

SELECT * FROM (
        SELECT sensor_id, value, date 
        FROM sns_value l 
        WHERE l.sensor_id= 123 AND 
        (l.date BETWEEN '2013-10-29 12:28:29' AND '2015-10-29 12:28:29') 
        ORDER BY RAND() LIMIT 2000 
    ) as tmp
    ORDER BY tmp.date;

The inner query restricts rows to the date range and uses ORDER BY RAND() to pick 2000 rows at random; the outer query then sorts that sample by date. Each execution returns a 2000-row sample drawn from different data.
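ORDER BY RAND() has to generate and sort a random value for every row in the date range before keeping 2000, which grows expensive with the range. A cheaper approximate alternative is to filter with a random predicate instead of sorting; the 0.001 sampling fraction here is illustrative and would need tuning to the expected row count:

```sql
-- Approximate random sampling without sorting the whole range.
-- The 0.001 fraction is illustrative; tune it so that comfortably
-- more than 2000 rows survive the filter on average.
SELECT sensor_id, value, date
FROM sns_value
WHERE sensor_id = 123
  AND date BETWEEN '2013-10-29 12:28:29' AND '2015-10-29 12:28:29'
  AND RAND() < 0.001
ORDER BY date
LIMIT 2000;
```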
