Use a MySQL table to efficiently cache complex query results

My queries involve a fairly large number of matches across several tables.

To get (at least) some of the benefit of MySQL's built-in query cache, I wrote the following function. It simply base64-encodes the original query, checks whether a result for it is already in the cache table, and whether that entry has expired.

This has greatly improved performance, and it has the advantage that I can specify the cache lifetime per query in the source code.

But under heavy load, the table becomes inaccessible during deletes, or the SELECT simply takes too long. Any suggestions on how to make this faster and avoid the problem mentioned above?

Table:

    CREATE TABLE `cachesql` (
      `id` int(9) NOT NULL AUTO_INCREMENT,
      `expire` int(15) NOT NULL,
      `sql` text NOT NULL,
      `data` mediumtext NOT NULL,
      PRIMARY KEY (`id`,`sql`(360)),
      KEY `sdata` (`sql`(767)) USING HASH
    ) ENGINE=InnoDB

Function:

    function fetchRows_cache($sql, $cachetime, $dba) { // internal function (called by fetchRows)
        global $Site;
        $expire = 0;
        $this->connect($dba);

        // check if the query is already cached
        $this->q = mysql_query("SELECT `expire`,`data` FROM cachesql WHERE `sql`='" . base64_encode($sql) . "' LIMIT 1;", $this->con)
            OR $this->error(1, "query$" . $sql . "$" . mysql_error());
        $this->r = mysql_fetch_assoc($this->q);
        $expire = $this->r['expire'];
        $data   = $this->r['data'];

        if (($expire < time()) || ($cachetime == "0")) {
            // record expired or not there -> execute the query and store the result
            $this->query("DELETE FROM `cachesql` WHERE `sql`='" . base64_encode($sql) . "'", $dba); // delete old cached entries
            $this->q = mysql_query($sql, $this->con)
                OR $this->error(1, "query$" . $sql . "$" . mysql_error());
            $this->r = array();
            $this->rc = 0;
            while ($row = mysql_fetch_assoc($this->q)) {
                $arr_row = array();
                $c = 0;
                while ($c < mysql_num_fields($this->q)) {
                    $col = mysql_fetch_field($this->q, $c);
                    $arr_row[$col->name] = $row[$col->name];
                    $c++;
                }
                $this->r[$this->rc] = $arr_row;
                $this->rc++;
            }
            $out = $this->r;

            // write the result set into the cache table
            if ($cachetime != "0") { // $cachetime == "0" disables caching for this query (too many locks)
                $this->query("INSERT INTO `cachesql` (`sql`,`data`,`expire`) VALUES ('" . base64_encode($sql) . "','" . mysql_real_escape_string(serialize($out)) . "','" . (time() + $cachetime) . "')", $dba);
            }
            return $out;
        } else {
            // use cached data
            return unserialize($data);
        }
    }
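One possible way to reduce lock contention from the DELETE-then-INSERT pair above (not from the original post, just a sketch) is to collapse both statements into a single upsert. This assumes the base64-encoded query is covered by a UNIQUE or PRIMARY key so the duplicate can be matched; the values below are placeholders:

```sql
-- Hypothetical replacement for the DELETE + INSERT pair, assuming
-- `sql` carries a UNIQUE (or PRIMARY) key for the upsert to match on:
INSERT INTO `cachesql` (`sql`, `data`, `expire`)
VALUES ('...base64 of query...', '...serialized rows...', UNIX_TIMESTAMP() + 300)
ON DUPLICATE KEY UPDATE
  `data`   = VALUES(`data`),
  `expire` = VALUES(`expire`);
```

A single statement holds locks for less time than two round-trips and avoids the window in which the row is deleted but not yet re-inserted.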
3 answers

Thanks @HeatfanJohn - he pointed out something really simple and effective.

Since the original query text is not used for anything (except matching entries in the cache), it is enough to store just a checksum to uniquely identify the query.

The new structure simply stores the MD5 hash of the original query (16 bytes, stored as a 32-character hex string), the expiry Unix timestamp, and the serialized rowset.

New structure:

    CREATE TABLE `cachesql` (
      `sql` varchar(32) NOT NULL,
      `expire` int(11) NOT NULL,
      `data` text NOT NULL,
      PRIMARY KEY (`sql`),
      UNIQUE KEY `sql` (`sql`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='cache for db->fetchRows'
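Against this schema, a cache lookup might look like the sketch below. `MD5()` and `UNIX_TIMESTAMP()` are built-in MySQL functions; the query text is a placeholder, and in PHP the hash could equally be computed client-side with `md5($sql)`:

```sql
-- Example lookup against the new schema (query text is a placeholder):
SELECT `data`
FROM `cachesql`
WHERE `sql` = MD5('SELECT ... the original query ...')
  AND `expire` > UNIX_TIMESTAMP();
```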

The primary key on the `sql` column responds very quickly, since the key is now very short and can be indexed much more efficiently for lookups.

I measured no speed difference between BLOB and TEXT fields for the stored result sets.


I think the main slowdown is that you use InnoDB for your caching table.

I found out that you should use InnoDB for everything except read-mostly cache tables ;)

MyISAM is especially good for read-intensive (SELECT-heavy) tables.
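If you want to try this suggestion, the storage engine can be switched in place (this rebuilds the table, so run it during a quiet period):

```sql
-- Switch the cache table from InnoDB to MyISAM:
ALTER TABLE `cachesql` ENGINE=MyISAM;
```

Note that MyISAM uses table-level locking, so whether this helps depends on how often the cache is written versus read.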


Try an in-memory (MEMORY engine) table to make it faster.
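A sketch of that idea, based on the hash-keyed schema from the accepted answer: the MEMORY engine does not support TEXT or BLOB columns, so `data` has to become a VARCHAR (the length here is an arbitrary assumption), and all rows are lost on server restart, which is usually acceptable for a cache:

```sql
-- Hypothetical in-memory variant of the cache table;
-- `data` is VARCHAR because MEMORY does not support TEXT/BLOB,
-- and contents do not survive a server restart:
CREATE TABLE `cachesql_mem` (
  `sql` varchar(32) NOT NULL,
  `expire` int(11) NOT NULL,
  `data` varchar(8192) NOT NULL,
  PRIMARY KEY (`sql`)
) ENGINE=MEMORY;
```

MEMORY rows are stored fixed-length, so a large VARCHAR reserves its full width per row; keep the column as small as the serialized rowsets allow.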

