PDO::fetchAll vs. PDO::fetch in a loop

Just a quick question.

Is there a performance difference between using PDO::fetchAll() and PDO::fetch() in a loop, for large result sets?

I'm fetching into objects of a custom class, if that matters.

My initial uneducated guess was that fetchAll might be faster because PDO can perform multiple operations in one statement, while mysql_query only executes one. However, I have little knowledge of PDO's inner workings, and the documentation says nothing about this, or about whether fetchAll() is simply a PHP-side loop dumped into an array.
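For reference, the two patterns being compared look like this. This is a minimal sketch, not the benchmark code from the answers below; it uses an in-memory SQLite DSN so it runs without a MySQL server, and the table name and data are made up for illustration:

```php
<?php
// Self-contained sketch: in-memory SQLite instead of MySQL, throwaway data.
$dbh = new PDO('sqlite::memory:');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dbh->exec('CREATE TABLE t (id INTEGER, name TEXT)');
$dbh->exec("INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c')");

// Pattern 1: fetchAll() materializes every row into one array at once.
$all = $dbh->query('SELECT * FROM t')->fetchAll(PDO::FETCH_ASSOC);

// Pattern 2: fetch() in a loop pulls one row per iteration.
$rows = array();
$stmt = $dbh->query('SELECT * FROM t');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $rows[] = $row;
}

assert(count($all) === 3);
assert($all == $rows); // both patterns yield the same rows
```

(For fetching into custom class objects, both methods also accept PDO::FETCH_CLASS with a class name.)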

Any help?

+59
php mysql pdo fetch
May 05 '10 at 4:31 a.m.
7 answers

A small benchmark with 200k random records. As expected, fetchAll is faster, but requires much more memory.

 Result :
 fetchAll : 0.35965991020203s, 100249408b
 fetch : 0.39197015762329s, 440b

Code used:

    <?php
    // First benchmark: speed
    $dbh = new PDO('mysql:dbname=testage;host=localhost', 'root', '');
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $sql = 'SELECT * FROM test_table WHERE 1';

    $stmt = $dbh->query($sql);
    $data = array();
    $start_all = microtime(true);
    $data = $stmt->fetchAll();
    $end_all = microtime(true);

    $stmt = $dbh->query($sql);
    $data = array();
    $start_one = microtime(true);
    while($data = $stmt->fetch()){}
    $end_one = microtime(true);

    // Second benchmark: memory usage
    $stmt = $dbh->query($sql);
    $data = array();
    $memory_start_all = memory_get_usage();
    $data = $stmt->fetchAll();
    $memory_end_all = memory_get_usage();

    $stmt = $dbh->query($sql);
    $data = array();
    $memory_end_one = 0;
    $memory_start_one = memory_get_usage();
    while($data = $stmt->fetch()){
        $memory_end_one = max($memory_end_one, memory_get_usage());
    }

    echo 'Result : <br/>
    fetchAll : ' . ($end_all - $start_all) . 's, ' . ($memory_end_all - $memory_start_all) . 'b<br/>
    fetch : ' . ($end_one - $start_one) . 's, ' . ($memory_end_one - $memory_start_one) . 'b<br/>';
+66
May 05 '10 at 8:25

One thing I find almost always true about PHP is that a function you implement yourself will almost always be slower than the built-in equivalent. This is because anything implemented in PHP misses out on the compile-time optimizations that C (which PHP is written in) gets, and there is high overhead on every PHP function call.
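That claim is easy to check with an illustrative micro-benchmark (the timings below will vary by machine; the data set is made up): summing an array with the C-implemented array_sum() versus a hand-written PHP loop.

```php
<?php
// Compare a built-in (implemented in C) against the equivalent PHP loop.
$data = range(1, 100000);

$t0 = microtime(true);
$builtin = array_sum($data); // C implementation
$t1 = microtime(true);

$manual = 0;
foreach ($data as $v) { // hand-rolled PHP loop
    $manual += $v;
}
$t2 = microtime(true);

// Both give the same answer; the built-in is typically the faster one.
assert($builtin === $manual);
printf("array_sum: %.6fs, loop: %.6fs\n", $t1 - $t0, $t2 - $t1);
```

The same reasoning applies here: fetchAll() does its row-collecting loop inside the C extension, while fetch() in a while loop pays PHP's per-call overhead on every row.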

+9
May 05 '10 at 4:39 a.m.

@Arkh

    // $data in this case is an array of rows;
    $data = $stmt->fetchAll();

    // $data in this case is just one row after each loop;
    while($data = $stmt->fetch()){}

    // Try using
    $i = 0;
    while($data[$i++] = $stmt->fetch()){}

The memory difference should become negligible.

+8
Jun 03 '10 at 16:12

As Mihai Stancu said, there is practically no memory difference, though fetch + while beats fetchAll:

 Result :
 fetchAll : 0.160676956177s, 118539304b
 fetch : 0.121752023697s, 118544392b

I got the above results with the corrected loop:

    $i = 0;
    while($data[$i++] = $stmt->fetch()){
        //
    }

So fetchAll consumes less memory, but fetch + while is faster! :)

+4
Dec 24 '10 at 23:12

But surely, if you store the fetched data in an array, the memory usage will be equal?

    <?php
    define('DB_HOST', 'localhost');
    define('DB_USER', 'root');
    define('DB_PASS', '');
    // database to use
    define('DB', 'test');

    try {
        $dbh = new \PDO('mysql:dbname='. DB .';host='. DB_HOST, DB_USER, DB_PASS);
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $sql = 'SELECT * FROM users WHERE 1';

        // First benchmark: speed
        $stmt = $dbh->query($sql);
        $data = array();
        $start_all = microtime(true);
        $data = $stmt->fetchAll();
        $end_all = microtime(true);

        $stmt = $dbh->query($sql);
        $data = array();
        $start_one = microtime(true);
        while($data = $stmt->fetch()){}
        $end_one = microtime(true);

        // Second benchmark: memory usage
        $stmt = $dbh->query($sql);
        $data = array();
        $memory_start_all = memory_get_usage();
        $data = $stmt->fetchAll();
        $memory_end_all = memory_get_usage();

        $stmt = $dbh->query($sql);
        $data = array();
        $memory_end_one = 0;
        $memory_start_one = memory_get_usage();
        while($data[] = $stmt->fetch()){
            $memory_end_one = max($memory_end_one, memory_get_usage());
        }

        echo 'Result : <br/>
        fetchAll : ' . ($end_all - $start_all) . 's, ' . ($memory_end_all - $memory_start_all) . 'b<br/>
        fetch : ' . ($end_one - $start_one) . 's, ' . ($memory_end_one - $memory_start_one) . 'b<br/>';
    } catch ( PDOException $e ) {
        echo $e->getMessage();
    }
    ?>

 Result :
 fetchAll : 2.6941299438477E-5s, 9824b
 fetch : 1.5974044799805E-5s, 9824b
+3
May 28 '12 at 13:04

All of the benchmarks above that measure "memory usage" are actually incorrect, for a very simple reason.

PDO buffers the entire result set in memory by default, regardless of whether you use fetch or fetchAll. To actually get the benefit of row-by-row fetching, you have to tell PDO to use unbuffered queries:

$db->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

In that case you will see a huge difference in the script's memory footprint.
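A minimal sketch of an unbuffered read (this fragment needs a live MySQL server; the DSN, credentials and table name are placeholders, not from the benchmarks above):

```php
<?php
// With buffering off, only one row at a time lives in PHP memory, but no
// other query can run on this connection until the statement is fully
// fetched or closed.
$dbh = new PDO('mysql:dbname=testage;host=localhost', 'root', '');
$dbh->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$stmt = $dbh->query('SELECT * FROM test_table');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // process $row; memory stays flat even for very large result sets
}
$stmt->closeCursor(); // free the cursor before issuing the next query
```

The trade-off is that unbuffered mode holds the connection busy for the duration of the loop, and fetchAll() would defeat the purpose by pulling everything into PHP anyway.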

+3
Mar 01 '17 at 14:44

I know this is an old topic, but I ran into the same question. After running my own simple "benchmark" and reading what others wrote here, I came to the conclusion that this is not an exact science, and while you should strive to write high-quality, lean code, it makes no sense to spend too much time on this at the start of a project.

My suggestion is to gather data by running the code (in beta?) for a while, and only then start optimizing.

In my simple testing (which measured only runtime) the results ranged from 5% to 50%. I ran both variants in the same script, and sometimes fetch + while was faster than fetchAll, and sometimes vice versa. (I know I should have run them a couple of hundred times each, taken the median and mean, and then compared, but, as I said above, I concluded that in my case it is too early to bother.)

+1
May 2 '12 at 23:43
