Which is faster / more efficient - many small MySQL queries or one large PHP array?

I have a PHP / MySQL based web application that supports multiple languages using a MySQL table "language_strings" with the fields string_id, lang_id, lang_string. I then call the following function whenever I need to display a string in the selected language:

    public function get_lang_string($string_id, $lang_id)
    {
        $db = new Database();

        // Prefer the selected language, falling back to lang_id 1 (the default)
        $sql = sprintf(
            "SELECT lang_string FROM language_strings
             WHERE lang_id IN (1, %s) AND string_id = %s
             ORDER BY lang_id DESC LIMIT 1",
            $db->escape($lang_id, "int"),
            $db->escape($string_id, "int")
        );

        $row = $db->query_first($sql);
        return $row['lang_string'];
    }

This works fine, but I am concerned that it may generate a lot of database queries. For example, the main menu alone has 5 link texts, all of which call this function.

Would it be faster to load all the rows of the language_strings table for the selected lang_id into a PHP array and have the function read from that? It would potentially be a huge array, much of it redundant, but it would be a single database query per page load instead of many.
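For illustration, here is a minimal sketch of what that single-query approach might look like (assuming a PDO connection in $pdo; the function name load_lang_strings is made up here, and it assumes non-default languages have lang_id greater than 1, as the query above implies):

    // Load every string for the selected language in one query.
    // Rows for lang_id 1 load first (ORDER BY lang_id ASC), then rows for
    // the selected language overwrite those defaults in the array.
    function load_lang_strings(PDO $pdo, $lang_id)
    {
        $stmt = $pdo->prepare(
            "SELECT string_id, lang_string FROM language_strings
             WHERE lang_id IN (1, :lang) ORDER BY lang_id ASC"
        );
        $stmt->execute(array(':lang' => (int)$lang_id));

        $strings = array();
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $strings[$row['string_id']] = $row['lang_string'];
        }
        return $strings;
    }

    // One query per page load, then plain array lookups:
    // $strings = load_lang_strings($pdo, $lang_id);
    // echo $strings[42];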

Can someone suggest another more efficient way to do this?

+8
performance arrays php mysql
6 answers

OK. I did some benchmarking and was surprised to find that putting things in an array, rather than using individual queries, was an average of 10-15% SLOWER.

I think the reason is that even after filtering out the "uncommon" elements, there were inevitably always unused elements left in the array.

With individual queries, I get only what I need, and since the queries are so simple, I think it’s best to stick with this method.

This works for me; of course, in other situations where the individual queries are more complicated, I expect the method of storing shared data in an array would be more efficient.

+3

There is no single answer that fits every case; you really can only look at it on a case-by-case basis. Having said that, in most cases it will be faster to fetch all the data in one query, put it into an array or object, and reference it from there.

The caveat is whether the single query that pulls all the data you need can run as fast as the five separate ones. This is where the performance of the query itself comes into play.

Sometimes a query containing a subquery or two will actually be less efficient than running multiple queries separately.

My suggestion is to test it. Put together the query that fetches all the necessary data and see how long it takes to execute. Then time each of the five separate queries and see how long they take in total. If the times are almost identical, put the output into an array; it will be more efficient because you avoid making repeated trips to the database.
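A rough way to run that comparison (a sketch assuming a PDO connection in $pdo; the string_id 42 and lang_id 2 values are placeholders):

    // Time N individual lookups.
    $start = microtime(true);
    for ($i = 0; $i < 1000; $i++) {
        $stmt = $pdo->query(
            "SELECT lang_string FROM language_strings
             WHERE lang_id IN (1, 2) AND string_id = 42
             ORDER BY lang_id DESC LIMIT 1"
        );
        $stmt->fetch(PDO::FETCH_ASSOC);
    }
    printf("1000 individual queries: %.4f s\n", microtime(true) - $start);

    // Time one combined query that fetches everything.
    $start = microtime(true);
    $stmt = $pdo->query(
        "SELECT string_id, lang_string FROM language_strings
         WHERE lang_id IN (1, 2) ORDER BY lang_id ASC"
    );
    $all = $stmt->fetchAll(PDO::FETCH_KEY_PAIR); // string_id => lang_string
    printf("One combined query: %.4f s\n", microtime(true) - $start);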

If, however, your combined query takes longer to return the data (it may cause a full table scan instead of using indexes, for example), then stick with the individual queries.

Finally, if you intend to use the same data again and again, the array or object will win hands down every time, since accessing it will be much faster than fetching it from the database again.

+6

Agreed with what everyone says here... it's all about the numbers.

Some additional tips:

  • Try creating a single in-memory array that contains only the minimum you require. This means removing the most obvious redundancy.

  • There are standard approaches to these problems in performance-critical environments, for example using memcached with MySQL. It is a bit of overkill here, but it basically lets you allocate some external memory and cache your queries there. Since you choose how much memory to allocate, you can plan according to how much memory your system has. A sketch of this idea follows this list.

  • Just play with the numbers. Try using separate queries (this is the easiest approach) and stress test your PHP script (for example, call it hundreds of times from the command line). Measure how long it takes, and see how big the performance loss actually is. Speaking from personal experience, I usually cache everything in memory, and then one day when the data gets too big I run out of memory. Then I break everything into separate queries to save memory, and find that the performance impact wasn't so bad :)
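A minimal sketch of the memcached idea (assuming the PHP Memcached extension, a memcached server on localhost:11211, and the load_lang_strings() loader sketched in the question; the cache key and the 600-second TTL are arbitrary choices):

    $mc = new Memcached();
    $mc->addServer('localhost', 11211);

    $key = 'lang_strings_' . (int)$lang_id;
    $strings = $mc->get($key);

    if ($strings === false) {
        // Cache miss: hit the database once, then keep the result for 10 minutes.
        $strings = load_lang_strings($pdo, $lang_id);
        $mc->set($key, $strings, 600);
    }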

+3

I'm with Fluffeh: consider the other options at your disposal (joins, subqueries, making sure your indexes reflect the relativity of your data, but don't over-index, and test). Most likely you will end up with an array at some point, so here's a little performance tip. Contrary to what you might expect, something like

 $all = $stmt->fetchAll(PDO::FETCH_ASSOC); 

is actually less memory efficient than building the array row by row:

    $all = array(); // or $all = []; in PHP 5.4+
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        $all[] = $row['lang_string'];
    }

What's more, you can filter out redundant data while fetching it.
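For example (a sketch; the 100-character cutoff is just an illustration of a filter you might apply):

    $all = array();
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        // Skip rows this page is unlikely to use, instead of storing everything.
        if (strlen($row['lang_string']) > 100) {
            continue;
        }
        $all[] = $row['lang_string'];
    }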

+1

My answer is to do something in between. Fetch all the rows for the lang_id that are shorter than a certain length (say 100 characters); shorter text strings are more likely to be used in several places than longer ones. Load those entries into a static associative array inside get_lang_string(), and if an item is not found there, fall back to an individual query.
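A sketch of that hybrid approach, reusing the question's Database class (the $db->query() method returning all rows is an assumption; the 100-character cutoff comes from the answer above):

    public function get_lang_string($string_id, $lang_id)
    {
        static $cache = array(); // persists across calls within one request
        $db = new Database();

        if (!isset($cache[$lang_id])) {
            // First call: preload all the short strings in one query.
            // lang_id 1 defaults load first, then the selected language
            // overwrites them.
            $sql = sprintf(
                "SELECT string_id, lang_string FROM language_strings
                 WHERE lang_id IN (1, %s) AND CHAR_LENGTH(lang_string) <= 100
                 ORDER BY lang_id ASC",
                $db->escape($lang_id, "int")
            );
            $cache[$lang_id] = array();
            foreach ($db->query($sql) as $row) {
                $cache[$lang_id][$row['string_id']] = $row['lang_string'];
            }
        }

        if (isset($cache[$lang_id][$string_id])) {
            return $cache[$lang_id][$string_id];
        }

        // Miss: it is a long string, so fetch it individually as before.
        $sql = sprintf(
            "SELECT lang_string FROM language_strings
             WHERE lang_id IN (1, %s) AND string_id = %s
             ORDER BY lang_id DESC LIMIT 1",
            $db->escape($lang_id, "int"),
            $db->escape($string_id, "int")
        );
        $row = $db->query_first($sql);
        return $row['lang_string'];
    }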

+1

I am currently at the point with my site/application where I have had to put the brakes on and watch the speed very carefully. I think the speed tests mentioned here should take into account the amount of traffic on your server as an important variable that will affect the results. If you put data into JavaScript data structures and process it on the client machine, the processing time should be more consistent. If you request a lot of data through MySQL via PHP (for example), that demand is placed on the one machine/server rather than being spread out. As your traffic grows you have to share server resources among many users, and I think this is where letting JavaScript do more will ease the load on the server. You can also store data on the client machine using localStorage.setItem() / localStorage.getItem() (most browsers allow about 5 MB of space per domain). If you have data in the database that does not change very often, you can save it on the client and then just check on launch whether it is still up to date / valid.

This is my first post after using the account for a year, so forgive any incoherence; I'm just voicing what I'm thinking right now.

0
