ClassCastException on DROP TABLE query in Apache Spark with Hive

I run the following query:

    this.queryExecutor.executeQuery("Drop table user")

and I get the following exception:

    java.lang.LinkageError: ClassCastException: attempting to cast jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.class to jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.class
      at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
      at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
      at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
      at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
      at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
      at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
      at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
      at com.sun.jersey.api.client.Client.init(Client.java:342)
      at com.sun.jersey.api.client.Client.access$000(Client.java:118)
      at com.sun.jersey.api.client.Client$1.f(Client.java:191)
      at com.sun.jersey.api.client.Client$1.f(Client.java:187)
      at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
      at com.sun.jersey.api.client.Client.<init>(Client.java:187)
      at com.sun.jersey.api.client.Client.<init>(Client.java:170)
      at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:340)
      at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
      at org.apache.hadoop.hive.ql.hooks.ATSHook.<init>(ATSHook.java:67)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
      at java.lang.Class.newInstance(Class.java:442)
      at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
      at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1309)
      at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1293)
      at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1347)
      at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
      at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
      at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
      at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:495)
      at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
      at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
      at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
      at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
      at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
      at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:484)
      at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:474)
      at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:613)
      at org.apache.spark.sql.hive.execution.DropTable.run(commands.scala:89)
      at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
      at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
      at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
      at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
      at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
      at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
      at com.accenture.aa.dmah.spark.core.QueryExecutor.executeQuery(QueryExecutor.scala:35)
      at com.accenture.aa.dmah.attribution.transformer.MulltipleUserJourneyTransformer.transform(MulltipleUserJourneyTransformer.scala:32)
      at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder$$anonfun$buildUserJourney$1.apply$mcVI$sp(UserJourneyBuilder.scala:31)
      at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
      at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder.buildUserJourney(UserJourneyBuilder.scala:29)
      at com.accenture.aa.dmah.attribution.core.AttributionHub.executeAttribution(AttributionHub.scala:47)
      at com.accenture.aa.dmah.attribution.jobs.AttributionJob.process(AttributionJob.scala:33)
      at com.accenture.aa.dmah.core.DMAHJob.processJob(DMAHJob.scala:73)
      at com.accenture.aa.dmah.core.DMAHJob.execute(DMAHJob.scala:27)
      at com.accenture.aa.dmah.core.JobRunner.<init>(JobRunner.scala:17)
      at com.accenture.aa.dmah.core.ApplicationInstance.initilize(ApplicationInstance.scala:48)
      at com.accenture.aa.dmah.core.Bootstrap.boot(Bootstrap.scala:112)
      at com.accenture.aa.dmah.core.BootstrapObj$.main(Bootstrap.scala:134)
      at com.accenture.aa.dmah.core.BootstrapObj.main(Bootstrap.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at scala.tools.nsc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:71)
      at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
      at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:139)
      at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:71)
      at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:139)
      at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:28)
      at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:45)
      at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:35)
      at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:45)
      at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
      at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:96)
      at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:105)
      at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)

I saw similar entries here and here, but they have had no answers so far. I also looked here, but I do not think that is the right course of action in my case.

What is intriguing is that the error is specific to the DROP TABLE (or DROP TABLE IF EXISTS) query.

Hoping to find a resolution for this.

1 answer

As far as I know, the above error can occur when a class with the same fully qualified name, here javax.ws.rs.ext.RuntimeDelegate, is present in more than one JAR. Classes are loaded and resolved at runtime, so there is every possibility that when the code responsible for running the DROP syntax loads this class, it fails because the class occurs more than once on the classpath.
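
To verify that theory, a minimal sketch like the following (my own addition, not from the original post; it should be runnable from spark-shell) asks the classloader for every location that provides the class. More than one URL would confirm the duplicate:

    import scala.collection.JavaConverters._

    // List every classpath location that provides RuntimeDelegate.
    // More than one URL means the class is duplicated on the classpath.
    val resource = "javax/ws/rs/ext/RuntimeDelegate.class"
    val urls = getClass.getClassLoader.getResources(resource).asScala.toList
    urls.foreach(println)
    if (urls.size > 1)
      println(s"WARNING: found ${urls.size} copies of RuntimeDelegate")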

I tried DROP and DROP IF EXISTS on CDH 5 and both worked without problems. Here are the details of my runs:

first run: Hadoop 2.6, Hive 1.1.0 and Spark 1.3.1 (Hive libraries included on the Spark lib path)
second run: Hadoop 2.6, Hive 1.1.0 and Spark 1.6.1, in CLI mode

    scala> sqlContext.sql("DROP TABLE SAMPLE");
    16/08/04 11:31:39 INFO parse.ParseDriver: Parsing command: DROP TABLE SAMPLE
    16/08/04 11:31:39 INFO parse.ParseDriver: Parse Completed
    ......
    scala> sqlContext.sql("DROP TABLE IF EXISTS SAMPLE");
    16/08/04 11:40:34 INFO parse.ParseDriver: Parsing command: DROP TABLE IF EXISTS SAMPLE
    16/08/04 11:40:35 INFO parse.ParseDriver: Parse Completed
    .....

If possible, please test the DROP commands against a different version of the Spark libraries to narrow down the problem area.
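
For example, if your project builds with sbt (an assumption on my part), swapping the Spark version for a test run is just a dependency change; the version below is the one from my second run:

    // build.sbt sketch: pin a different Spark version to re-test DROP TABLE.
    // Marked "provided" because the cluster supplies Spark at runtime.
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
      "org.apache.spark" %% "spark-hive" % "1.6.1" % "provided"
    )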

Meanwhile, I am analyzing the JARs to find out which ones contain two occurrences of the same RuntimeDelegate class, and will report back whether removing any JAR fixes the problem, or whether adding a JAR reproduces it.
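
If you want to do the same on your side, a sketch along these lines (the lib path is assumed from your stack trace) scans a directory for JARs that bundle the class:

    import java.io.File
    import java.util.jar.JarFile

    // Scan a lib directory for jars that bundle RuntimeDelegate.
    // The path below is assumed from the stack trace in the question.
    val libDir = new File("/usr/hdp/2.4.2.0-258/spark/lib")
    val target = "javax/ws/rs/ext/RuntimeDelegate.class"
    val offenders = for {
      jar <- libDir.listFiles.toList if jar.getName.endsWith(".jar")
      if new JarFile(jar).getJarEntry(target) != null
    } yield jar.getName
    offenders.foreach(println)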
