Why does throughput differ for arrays of different sizes?

I have arrays like

byte[] b = new byte[10];
byte[] b1 = new byte[1024*1024];

I fill them with some values. Let's say

for (int i = 0; i < 10; i++) { b[i] = 1; }
for (int i = 0; i < 1024*1024; i++) { b1[i] = 1; }

Then I write each array to a RandomAccessFile and read it back from the file into the same array using

 randomAccessFile.write(arrayName); and randomAccessFile.read(arrayName); 

When I calculate the throughput of both of these arrays (using the time taken to write to and read from the file) of different sizes (10 bytes and 1 MB), the throughput is much larger for the 1 MB array.
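For reference, a measurement loop of the kind described might look like the sketch below. This is an assumption about the asker's setup, not their actual code: the temp-file handling, the use of System.nanoTime, and the MB/sec formula are all illustrative.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class ThroughputDemo {
    // Times one write + read round trip of the array through a
    // RandomAccessFile and returns the throughput in MB/sec.
    static double measure(byte[] data) throws IOException {
        File f = File.createTempFile("tp", ".bin");
        f.deleteOnExit();
        long start = System.nanoTime();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write(data);
            raf.seek(0);        // rewind before reading back
            raf.readFully(data);
        }
        long elapsedNanos = System.nanoTime() - start;
        double seconds = elapsedNanos / 1e9;
        double megabytes = data.length / (1024.0 * 1024.0);
        return megabytes / seconds;
    }

    public static void main(String[] args) throws IOException {
        byte[] small = new byte[10];
        byte[] large = new byte[1024 * 1024];
        System.out.printf("10-byte array: %.6f MB/sec%n", measure(small));
        System.out.printf("1 MB array:    %.2f MB/sec%n", measure(large));
    }
}
```

Because each call pays the full open/close and syscall cost, the 10-byte run reports a tiny throughput even though the disk is not the bottleneck.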

Sample output:
Throughput of 10kb array: 0.1 Mb/sec.
Throughput of 1Mb array: 1000.0 Mb/sec.

Why is this happening? I have an Intel i7 quad-core processor. Could my hardware configuration be a factor, or can it be ruled out as a cause?

1 answer

The reason for the big difference is the fixed overhead associated with each I/O operation, which is incurred regardless of how much data is transferred: it is like the flag-fall charge of a taxi. This overhead, which is not specific to Java and involves many OS-level operations, includes:

  • Locating the file on disk
  • Checking OS permissions on the file
  • Opening the file for I/O
  • Closing the file
  • Updating file information in the file system
  • Many other tasks

In addition, disk I/O is performed in pages (the size depends on the OS, but is typically 2 KB), so transferring 1 byte probably costs about the same as transferring 2048 bytes: a slightly fairer comparison would be a 2048-byte array against the 1 MB array.
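The effect of that fixed per-operation cost can be seen with a simple cost model. The 5 ms overhead and 100 MB/s bandwidth figures below are illustrative assumptions, not measurements:

```java
public class OverheadModel {
    // Models total I/O time as a fixed per-operation overhead
    // plus a size-proportional transfer time, then returns the
    // apparent throughput in MB/sec.
    static double throughputMBps(long bytes, double overheadSec, double bandwidthMBps) {
        double mb = bytes / (1024.0 * 1024.0);
        double totalSec = overheadSec + mb / bandwidthMBps;
        return mb / totalSec;
    }

    public static void main(String[] args) {
        double overhead = 0.005;   // assumed 5 ms fixed cost per operation
        double bandwidth = 100.0;  // assumed 100 MB/s raw transfer rate
        System.out.printf("10 bytes: %.6f MB/s%n",
                throughputMBps(10, overhead, bandwidth));
        System.out.printf("1 MB:     %.2f MB/s%n",
                throughputMBps(1024 * 1024, overhead, bandwidth));
    }
}
```

Under these assumed numbers the 10-byte transfer spends essentially all its time in overhead (about 0.002 MB/s apparent throughput), while the 1 MB transfer amortizes the same overhead and reaches about 67 MB/s, reproducing the shape of the result in the question.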

Using buffered I/O can also speed up large I/O tasks considerably.


Finally, what you are reporting as "10Kb" is actually only 10 bytes, so your calculation may be incorrect.
