I took Oliver Crow's code (the link given by Andrew Hare) and adapted it a little to fit Python 2.7.3 (using the timeit module). I ran it on my personal machine, a Lenovo T61 with 6 GB of RAM, on Debian GNU/Linux 6.0.6 (squeeze).
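For context, here is a minimal sketch of how such a timeit comparison can be wired up. The method bodies below are illustrative assumptions (naive += versus str.join), not the exact code from the linked benchmark:

import timeit

LOOP_COUNT = 10000  # inner iterations per benchmark call

def method1():
    # Naive repeated concatenation with +=
    out = ''
    for num in range(LOOP_COUNT):
        out += str(num)
    return out

def method6():
    # Build all the pieces first, then join them once
    return ''.join(str(num) for num in range(LOOP_COUNT))

for func in (method1, method6):
    # timeit accepts a callable directly; number = benchmark repetitions
    secs = timeit.timeit(func, number=100)
    print('%s: %s secs' % (func.__name__, secs))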
Here is the result for 10,000 iterations:
method1: 0.0538418292999 secs
process size 4800 kb
method2: 0.22602891922 secs
process size 4960 kb
method3: 0.0605459213257 secs
process size 4980 kb
method4: 0.0544030666351 secs
process size 5536 kb
method5: 0.0551080703735 secs
process size 5272 kb
method6: 0.0542731285095 secs
process size 5512 kb
and for 5,000,000 iterations (method 2 was skipped because it ran so slowly it seemed like it would take forever):
method1: 5.88603997231 secs
process size 37976 kb
method3: 8.40748500824 secs
process size 38024 kb
method4: 7.96380496025 secs
process size 321968 kb
method5: 8.03666186333 secs
process size 71720 kb
method6: 6.68192911148 secs
process size 38240 kb
It’s clear that the Python developers did a great job of optimizing string concatenation, and, as Hoare said, “premature optimization is the root of all evil” :-)
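As for the “process size” figures above, one way to collect them on Linux is with the standard resource module. This is a sketch under the assumption that peak resident set size is an acceptable proxy (the original benchmark may have measured memory differently); note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS:

import resource

# Peak resident set size of the current process, in kilobytes on Linux.
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('process size %d kb' % peak_kb)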
— Antoine-tran, 2018-12-12