The fsck commands in the other answers list the blocks and let you see the number of blocks. However, to see the actual block size in bytes at no extra cost, do:
hadoop fs -stat %o /filename
To see the default block size:
hdfs getconf -confKey dfs.blocksize
Unit Details
The unit used by hadoop fs -stat is not documented in its help output; however, looking at the source and the API documentation for the hadoop fs -stat method, we can see that it reports bytes and cannot report block sizes greater than 9 exabytes.
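To see where that ceiling comes from: the block size travels in a signed 64-bit long (Java's long), whose maximum value is 2^63 - 1, which is just over 9.2 exabytes. A quick arithmetic check:

```python
# The block size is carried as a signed 64-bit long, so the largest
# representable value is 2**63 - 1 bytes.
max_long = 2**63 - 1
exabyte = 10**18  # 1 EB (decimal)

print(max_long)            # 9223372036854775807
print(max_long / exabyte)  # ~9.22, hence "cannot exceed 9 exabytes"
```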
The units for hdfs getconf may not be bytes: it returns whatever string was used for dfs.blocksize in the configuration file, verbatim. (This can be seen in the source for the final function and its indirect caller.)
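In practice this means a configuration value such as 128m comes back from getconf as the literal string "128m", not as a byte count. Hadoop's size parsing accepts binary-prefix suffixes (k, m, g, t, p, e, case-insensitive, each a power of 1024). A minimal sketch of that conversion (the helper name is mine, not a Hadoop API):

```python
def blocksize_to_bytes(value: str) -> int:
    """Convert a dfs.blocksize string such as '134217728' or '128m' to bytes.

    Hypothetical helper mirroring Hadoop's binary-prefix handling:
    k/m/g/t/p/e suffixes are powers of 1024, case-insensitive.
    """
    value = value.strip().lower()
    multipliers = {'k': 1024, 'm': 1024**2, 'g': 1024**3,
                   't': 1024**4, 'p': 1024**5, 'e': 1024**6}
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)

print(blocksize_to_bytes('128m'))       # 134217728
print(blocksize_to_bytes('134217728'))  # 134217728
```

So when comparing the two commands, normalize the getconf output first rather than assuming both report bytes.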