Replacing HDFS local storage with S3 fails with an error (org.apache.hadoop.service.AbstractService)

We are trying to configure Cloudera 5.5 so that HDFS works only with S3, and we have already configured the necessary properties in core-site.xml:

 <property>
   <name>fs.s3a.access.key</name>
   <value>################</value>
 </property>
 <property>
   <name>fs.s3a.secret.key</name>
   <value>###############</value>
 </property>
 <property>
   <name>fs.default.name</name>
   <value>s3a://bucket_Name</value>
 </property>
 <property>
   <name>fs.defaultFS</name>
   <value>s3a://bucket_Name</value>
 </property>

After setting this up, we were able to list the files of the S3 bucket with the command

 hadoop fs -ls / 

and it shows only the files available on S3.
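
For reference, the shell listing above goes through the Hadoop FileSystem API. A minimal sketch of the equivalent programmatic check (the bucket name is a placeholder):

 import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class ListS3aRoot {
     public static void main(String[] args) throws Exception {
         // Picks up core-site.xml from the classpath, including the
         // fs.s3a.* credentials and fs.defaultFS configured above.
         Configuration conf = new Configuration();
         FileSystem fs = FileSystem.get(URI.create("s3a://bucket_name"), conf);
         for (FileStatus status : fs.listStatus(new Path("/"))) {
             System.out.println(status.getPath());
         }
     }
 }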

But when we start the YARN services, the JobHistory Server fails to start with the error below, and when we run Pig jobs we get the same error:

 PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
 ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils Unable to create default file context [s3a://kyvosps]
 org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
     at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
     at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
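
The stack trace shows the failure comes from FileContext, which resolves schemes through AbstractFileSystem rather than FileSystem; this is why the shell listing works while the JobHistory Server does not. A minimal sketch that reproduces the same exception outside the JobHistory Server (the bucket name is a placeholder):

 import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;

 public class FileContextRepro {
     public static void main(String[] args) throws Exception {
         // FileContext looks up fs.AbstractFileSystem.s3a.impl; if no
         // AbstractFileSystem is mapped for the s3a scheme, this throws
         // UnsupportedFileSystemException, just like JobHistoryUtils above.
         FileContext fc = FileContext.getFileContext(
                 URI.create("s3a://bucket_name"), new Configuration());
         System.out.println(fc.getWorkingDirectory());
     }
 }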

While searching on the Internet, we found that we also need to set the following properties in core-site.xml:

 <property>
   <name>fs.s3a.impl</name>
   <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   <description>The implementation class of the S3A Filesystem</description>
 </property>
 <property>
   <name>fs.AbstractFileSystem.s3a.impl</name>
   <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   <description>The FileSystem for S3A Filesystem</description>
 </property>

After setting the above properties, we get the following error:

 org.apache.hadoop.service.AbstractService Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
 java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
     at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:131)
     at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
     at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
     at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
     at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
     at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
     at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)

The jars needed for this are in place, but we are still getting the error. Any help would be great, thanks in advance.

Update

I tried removing the fs.AbstractFileSystem.s3a.impl property, but then I get the same first exception that I received earlier:

 org.apache.hadoop.security.UserGroupInformation PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
 ERROR org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils Unable to create default file context [s3a://bucket_name]
 org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
     at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
     at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
     at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
     at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
     at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
     at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
1 answer

The problem is not the location of the jars.

The problem is this setting:

 <property>
   <name>fs.AbstractFileSystem.s3a.impl</name>
   <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   <description>The FileSystem for S3A Filesystem</description>
 </property>

This property is not required. Because of it, Hadoop looks for the following constructor in the S3AFileSystem class, and no such constructor exists:

 S3AFileSystem(URI theUri, Configuration conf); 

The following exception explicitly indicates that it cannot find a constructor for S3AFileSystem with URI and Configuration parameters.

 java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration) 
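
Roughly, this is what happens inside AbstractFileSystem.newInstance (a simplified sketch inferred from the stack trace, not the exact Hadoop source): the class named in fs.AbstractFileSystem.<scheme>.impl is instantiated reflectively through a (URI, Configuration) constructor.

 import java.lang.reflect.Constructor;
 import java.net.URI;
 import org.apache.hadoop.conf.Configuration;

 public class AfsLookupSketch {
     static Object newInstance(Class<?> implClass, URI uri, Configuration conf)
             throws Exception {
         // S3AFileSystem, like other FileSystem subclasses, has only a
         // no-arg constructor and is set up via initialize(URI, Configuration),
         // so this lookup fails with NoSuchMethodException for it.
         Constructor<?> ctor =
                 implClass.getDeclaredConstructor(URI.class, Configuration.class);
         return ctor.newInstance(uri, conf);
     }
 }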

To resolve this issue, remove the fs.AbstractFileSystem.s3a.impl property from core-site.xml. Setting only fs.s3a.impl in core-site.xml should solve your problem.

EDIT: org.apache.hadoop.fs.s3a.S3AFileSystem only extends FileSystem.

Therefore, you cannot set fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3AFileSystem, since org.apache.hadoop.fs.s3a.S3AFileSystem does not extend AbstractFileSystem.

I am using Hadoop 2.7.0, and in this version S3A is not exposed as an AbstractFileSystem.

There is a JIRA ticket, https://issues.apache.org/jira/browse/HADOOP-11262, to implement this, and the fix is available in Hadoop 2.8.0.
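
For reference, a simplified sketch of the bridge class that the ticket adds (paraphrased, not the verbatim Hadoop 2.8.0 source): it adapts S3AFileSystem to the AbstractFileSystem API by delegation and declares exactly the (URI, Configuration) constructor that the reflective lookup above requires.

 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.DelegateToFileSystem;
 import org.apache.hadoop.fs.s3a.S3AFileSystem;

 public class S3A extends DelegateToFileSystem {
     public S3A(URI theUri, Configuration conf)
             throws IOException, URISyntaxException {
         // Delegates all AbstractFileSystem calls to an S3AFileSystem
         // instance; "s3a" is the supported scheme and the final flag
         // means no authority (host) is strictly required in the URI.
         super(theUri, new S3AFileSystem(), conf, "s3a", false);
     }
 }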

Assuming your jars expose S3A as an AbstractFileSystem, you need to set fs.AbstractFileSystem.s3a.impl as follows:

 <property>
   <name>fs.AbstractFileSystem.s3a.impl</name>
   <value>org.apache.hadoop.fs.s3a.S3A</value>
 </property>

This will solve your problem.
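
With that mapping in place, the FileContext lookup shown in the question resolves the s3a scheme instead of throwing UnsupportedFileSystemException, and the JobHistory Server can create its default file context.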
