As others have said, there is no universally correct block size; what is optimal for one situation or one piece of hardware can be terribly inefficient for another. Additionally, depending on the health of the disks, it may be preferable to use a block size other than the "optimal" one.
One thing that is fairly reliable on modern hardware is that the default block size of 512 bytes is usually almost an order of magnitude slower than a better-chosen alternative. When in doubt, I have found that 64K is a pretty solid modern default. Although 64K is usually not the optimal block size, in my experience it is usually much more efficient than the default. 64K also has a fairly long history of reliable performance: you can find a message from the Eug-Lug mailing list, circa 2002, recommending a 64K block size here: http://www.mail-archive.com/eug-lug@efn.org/msg12073.html
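To make the comparison concrete, here is a minimal example of the same 128M write with the default block size and with 64K (the file names here are placeholders, not part of any script):

```shell
# Write 128M using dd's default 512-byte block size (often nearly an
# order of magnitude slower because of per-block overhead):
dd if=/dev/zero of=dd_test_default count=262144

# The same 128M write with a 64K block size:
dd if=/dev/zero of=dd_test_64k bs=65536 count=2048

# dd prints its timing and transfer-rate summary for each run on
# stderr; compare the two, then clean up:
rm -f dd_test_default dd_test_64k
```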
To determine the optimal output block size, I wrote the following script, which tests writing a 128M test file with dd at block sizes ranging from the default of 512 bytes up to a maximum of 64M. Be warned: this script uses dd internally, so use it with caution.
dd_obs_test.sh:
View on GitHub
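The actual dd_obs_test.sh is at the GitHub link above; what follows is only a rough sketch of how such a test can work. The function name and the rate-parsing regex are my own assumptions, and the parsing targets GNU dd's summary line, so it may come out blank on BSD/OS X dd:

```shell
#!/bin/sh
# Rough sketch of an output-block-size (obs) test; NOT the original
# dd_obs_test.sh. Writes a test file with dd at a range of block sizes
# and prints the transfer rate dd reports for each.

run_obs_test() {
  test_file=$1        # file to write (created and removed each pass)
  file_size=$2        # total bytes to write per run

  echo "block size : transfer rate"
  for bs in 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 \
            524288 1048576 2097152 4194304 8388608 16777216 33554432 \
            67108864; do
    count=$((file_size / bs))
    [ "$count" -lt 1 ] && continue   # skip blocks larger than the file

    # dd prints its throughput summary on stderr; capture it.
    dd_out=$(dd if=/dev/zero of="$test_file" bs="$bs" count="$count" 2>&1)
    rm -f "$test_file"

    # Pull the rate out of GNU dd's summary, e.g. "... 108 MB/s".
    rate=$(printf '%s\n' "$dd_out" | grep -Eo '[0-9.]+ [GMk]?B/s' | tail -n 1)
    echo "$bs : $rate"
  done
}

# Refuse to clobber an existing file, then test with a 128M write.
target=${1:-dd_obs_testfile}
if [ -e "$target" ]; then
  echo "$target already exists; refusing to overwrite" >&2
  exit 1
fi
run_obs_test "$target" $((128 * 1024 * 1024))
```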
I have only tested this script on Debian (Ubuntu) and OS X Yosemite, so it may need some tweaking to work on other Unix flavors.
By default, the command will create a test file named dd_obs_testfile in the current directory. Alternatively, you can provide a path to a custom test file by specifying it after the script name:
$ ./dd_obs_test.sh /path/to/disk/test_file
The output of the script is a list of the tested block sizes and their respective transfer rates, like so:
$ ./dd_obs_test.sh
block size : transfer rate
       512 : 11.3 MB/s
      1024 : 22.1 MB/s
      2048 : 42.3 MB/s
      4096 : 75.2 MB/s
      8192 : 90.7 MB/s
     16384 : 101 MB/s
     32768 : 104 MB/s
     65536 : 108 MB/s
    131072 : 113 MB/s
    262144 : 112 MB/s
    524288 : 133 MB/s
   1048576 : 125 MB/s
   2097152 : 113 MB/s
   4194304 : 106 MB/s
   8388608 : 107 MB/s
  16777216 : 110 MB/s
  33554432 : 119 MB/s
  67108864 : 134 MB/s
(Note: the unit of the transfer rate varies by OS.)
To test the optimal read block size, you can use more or less the same process, but instead of reading from /dev/zero and writing to disk, you read from disk and write to /dev/null. A script for this might look like this:
dd_ibs_test.sh:
View on GitHub
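Again, the real dd_ibs_test.sh is at the link above; the following is only a rough sketch of the read-side test under the same assumptions (the helper name and rate parsing are mine, and the parsing targets GNU dd's summary line). Also note that unless you drop the OS page cache between runs, repeated reads of the same file may largely measure cache speed rather than disk speed:

```shell
#!/bin/sh
# Rough sketch of an input-block-size (ibs) test; NOT the original
# dd_ibs_test.sh. WARNING: the test file is WRITTEN by this script
# (filled with zeros) before being read back, so never point it at
# an existing file.

run_ibs_test() {
  test_file=$1        # file the script will create, read, and remove
  file_size=$2        # size in bytes of the test file

  # Create the test file once; the loop below only reads it.
  dd if=/dev/zero of="$test_file" bs=65536 \
     count=$((file_size / 65536)) 2>/dev/null

  echo "block size : transfer rate"
  for bs in 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 \
            524288 1048576 2097152 4194304 8388608 16777216 33554432 \
            67108864; do
    count=$((file_size / bs))
    [ "$count" -lt 1 ] && continue   # skip blocks larger than the file

    # Read from disk, write to /dev/null; dd's summary is on stderr.
    dd_out=$(dd if="$test_file" of=/dev/null bs="$bs" count="$count" 2>&1)
    rate=$(printf '%s\n' "$dd_out" | grep -Eo '[0-9.]+ [GMk]?B/s' | tail -n 1)
    echo "$bs : $rate"
  done
  rm -f "$test_file"
}

# Refuse to clobber an existing file, then test with a 128M file.
target=${1:-dd_ibs_testfile}
if [ -e "$target" ]; then
  echo "$target already exists; refusing to overwrite" >&2
  exit 1
fi
run_ibs_test "$target" $((128 * 1024 * 1024))
```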
An important difference in this case is that the test file is a file written by the script. Do not point this command at an existing file, or the existing file will be overwritten with zeros!
For my particular hardware, I found that 128K was the optimal input block size on a hard drive and 32K was optimal on an SSD.
Although this answer covers most of my findings, I have run into this situation often enough that I wrote a blog post about it: http://blog.tdg5.com/tuning-dd-block-size/ You can find more details about the tests I ran there.