Enable Hadoop DFS at job start

I get the following permission error, and I don't understand why Hadoop is trying to write to this folder:

 hadoop jar /usr/lib/hadoop/hadoop-*-examples.jar pi 2 100000
 Number of Maps  = 2
 Samples per Map = 100000
 Wrote input for Map #0
 Wrote input for Map #1
 Starting Job
 org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=myuser, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x

Any idea why it is trying to write to the root of my HDFS?

Update: After temporarily setting the permissions on the HDFS root (/) to 777, I saw that a /tmp folder was being written. I suppose one option is to simply create a /tmp folder with open permissions so everyone can write to it, but from a security point of view it would be nicer if jobs wrote to a per-user folder instead (e.g. /user/myuser/tmp).
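
For reference, a minimal sketch of that workaround (assuming the HDFS superuser account is named hdfs; the sticky bit in 1777 keeps users from deleting each other's files, and if your Hadoop version does not support it, plain 777 works but is less safe):

 sudo -u hdfs hadoop fs -mkdir /tmp
 sudo -u hdfs hadoop fs -chmod 1777 /tmp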

4 answers

I managed to get this working with the following setup:

 <configuration>
   <property>
     <name>mapreduce.jobtracker.staging.root.dir</name>
     <value>/user</value>
   </property>
   <!-- ... -->
 </configuration>

The jobtracker service needs to be restarted after this change (special thanks to Jeff from the Hadoop mailing list for helping to track down the problem!).
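
With that setting, each job stages its files under /user/<username>/.staging, so the submitting user also needs a home directory in HDFS that they own. A hedged sketch of creating one, assuming the superuser account is hdfs and the submitting user is myuser:

 sudo -u hdfs hadoop fs -mkdir /user/myuser
 sudo -u hdfs hadoop fs -chown myuser:myuser /user/myuser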


1) Create the {mapred.system.dir}/mapred directory in HDFS with the following command:

 sudo -u hdfs hadoop fs -mkdir /hadoop/mapred/ 

2) Change its ownership to the mapred user:

 sudo -u hdfs hadoop fs -chown mapred:hadoop /hadoop/mapred/ 
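
The /hadoop/mapred path above assumes mapred.system.dir points under that directory; if you have not set it yourself, check mapred-site.xml. A hedged example of what such an entry might look like (the value here is an illustration, not a required path):

 <property>
   <name>mapred.system.dir</name>
   <value>/hadoop/mapred/system</value>
 </property>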

You can also create a new user named "hdfs" and run the job as that user. It is a fairly simple solution, but not as clean.

Of course, this applies when you are using Hue with Cloudera Hadoop Manager (CDH3).
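
A sketch of that approach on a Linux box (the useradd step is an assumption; your distribution or Cloudera packages may have created the account already):

 sudo useradd hdfs    # skip if the account already exists
 sudo -u hdfs hadoop jar /usr/lib/hadoop/hadoop-*-examples.jar pi 2 100000

Since hdfs owns the HDFS root in the error above, the job's staging writes succeed when submitted as that user.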


You need to set the permissions on the Hadoop root directory (/), not on the local system's root directory. I was confused at first too, but then I realized that the directory mentioned in the error belongs to the Hadoop file system, not the local one.
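
A quick way to see the distinction (output is omitted since it depends on your cluster):

 hadoop fs -ls /    # lists the root of HDFS
 ls -ld /           # lists the root of the local filesystem, a different tree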
