Can't reach dd speed

I am writing C code that has some real-time constraints. I tested the speed at which I can write to disk using dd:

dd if=/dev/zero of=/dev/sdb bs=32K count=32768 oflag=direct

This writes 1GB of zeros to /dev/sdb in 32K blocks.

I reach about 103 MB/s with this.

Now I am doing something similar programmatically:

open("/dev/sdb",O_WRONLY|O_CREAT|O_DIRECT|O_TRUNC, 0666); 

I take a timestamp, write the 32K buffer to /dev/sdb 10,000 times in a for loop, take a second timestamp, and do a bit of arithmetic to get the speed in MB/s. This comes out to about 49 MB/s.
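A minimal sketch of this measurement (the 4096-byte-aligned buffer via posix_memalign, which O_DIRECT typically requires, and the CLOCK_MONOTONIC timer are assumptions; the real code may differ):

#define _GNU_SOURCE            /* needed for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE   (32 * 1024)
#define ITERATIONS 10000

int main(void)
{
    void *buf;
    /* O_DIRECT requires the user buffer (and I/O size) to be aligned,
       typically to the logical block size of the device. */
    if (posix_memalign(&buf, 4096, BUF_SIZE) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0, BUF_SIZE);

    int fd = open("/dev/sdb", O_WRONLY | O_CREAT | O_DIRECT | O_TRUNC, 0666);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (int i = 0; i < ITERATIONS; i++) {
        if (write(fd, buf, BUF_SIZE) != BUF_SIZE) {
            perror("write");
            return 1;
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &end);
    close(fd);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    double mb = (double)ITERATIONS * BUF_SIZE / (1024.0 * 1024.0);
    printf("%.1f MB/s\n", mb / secs);

    free(buf);
    return 0;
}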

Why can't I reach the same speed as dd? The same open call that I use appears in strace.

2 answers

Check what system calls dd is making, not just the open but also the subsequent reads and writes. Using the right buffer size can make a significant difference for a large copy like this. Note that /dev/zero is not a good benchmark source if your ultimate goal is to copy disk to disk.
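For example, one way to see what dd actually does (a hedged sketch; the count is reduced here, and writing to /dev/sdb will destroy its contents, so adjust the target before running):

strace -c dd if=/dev/zero of=/dev/sdb bs=32K count=100 oflag=direct

The -c flag prints a per-syscall summary; running strace without -c shows each open, read, and write with its arguments, which you can compare against your own program's trace.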

If you cannot match dd's speed even after matching it system call for system call... well, read the source.


I'll leave the part about matching system calls to someone else; this answer is about the buffer size.

Try varying the size of the buffer you are using, and experiment with a range of values.
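A minimal sketch of such a sweep, assuming an ordinary target file and buffered writes (the path, sizes, and total amount are illustrative only):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Write 'total' bytes to 'path' in chunks of 'bufsize' and return MB/s. */
static double bench(const char *path, size_t bufsize, size_t total)
{
    void *buf;
    if (posix_memalign(&buf, 4096, bufsize) != 0)   /* page-aligned; harmless here, required if you add O_DIRECT */
        return -1.0;
    memset(buf, 0, bufsize);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) { free(buf); return -1.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < total; done += bufsize) {
        if (write(fd, buf, bufsize) != (ssize_t)bufsize) {
            close(fd);
            free(buf);
            return -1.0;
        }
    }
    fsync(fd);                                      /* force the data to disk so buffered writes are included in the timing */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    free(buf);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (total / (1024.0 * 1024.0)) / secs;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile";
    size_t sizes[] = { 4096, 32768, 131072, 1048576 };
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("%7zu bytes: %.1f MB/s\n", sizes[i],
               bench(path, sizes[i], 256UL * 1024 * 1024));
    return 0;
}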

While learning Java, I wrote a simple clone of "copy" and then tried to match its speed. Since the code originally read and wrote byte by byte, the buffer size really made a difference. I did not do any buffering myself, but asked each read to fetch a chunk of a given size, and the larger the chunk, the faster it went, up to a point.

Regarding the 32K block size: remember that the OS still keeps its own I/O buffers for user-mode processes. Even when you are dealing with specific hardware, e.g. writing a driver for a device with physical constraints such as a CD-RW with fixed sector sizes, the block size is only part of the story; the OS will still have its own buffering.

