This is set by the dfs.datanode.data.dir property, which by default is file://${hadoop.tmp.dir}/dfs/data (see the HDFS default configuration documentation for details).
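If you want to confirm what that property resolves to on your own installation, the hdfs getconf tool will print the effective value. This is just a quick check, assuming a Hadoop 2.x or later installation with the hdfs command on your PATH:

# print the effective datanode data directory for this installation
hdfs getconf -confKey dfs.datanode.data.dir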
In your case, however, the problem is that you are not using the full (absolute) path in HDFS. Run this instead:
hadoop fs -ls /usr/local/myhadoop-tmp/
Note that you also seem to be confusing the path in HDFS with the path on the local file system. In HDFS, your file lives under /usr/local/myhadoop-tmp/. On the local file system (given your configuration settings) the datanode stores its data under /usr/local/myhadoop-tmp/dfs/data/, but the directory structure and naming convention there are defined by HDFS and do not depend on whatever HDFS paths you choose to use. Moreover, your file will not keep its name there: it is split into blocks, each block is assigned a unique identifier, and block file names look like blk_1073741826.
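If you are curious which blocks back a given HDFS file, fsck can report them. For example, you could run it against the directory from your listing; the -files and -blocks flags are standard, though the exact output format varies between Hadoop versions:

# list files under this HDFS path along with the block IDs that back them
hdfs fsck /usr/local/myhadoop-tmp/ -files -blocks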
In conclusion: the local path used by the datanode has nothing to do with the paths you use in HDFS. You could dig through that local directory looking for your file, but you shouldn't, because you risk breaking HDFS's metadata management. Just use the Hadoop command-line tools to copy, move, and read files in HDFS under whatever logical (HDFS) paths you like. Those HDFS paths need not be tied to the paths used for local data storage; there is no reason or benefit to making them match.
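For example, a typical round trip stays entirely within the hadoop fs commands; the file and directory names below are placeholders, not paths from your setup:

# copy a local file into HDFS under any logical path you like
hadoop fs -put /tmp/example.txt /usr/local/myhadoop-tmp/
# read it back
hadoop fs -cat /usr/local/myhadoop-tmp/example.txt
# move it to a different HDFS path; the datanode's local block files are unaffected
hadoop fs -mv /usr/local/myhadoop-tmp/example.txt /archive/example.txt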