Detected Guava issue #1635, which indicates that a version of Guava less than 16.01 is in use

I am running a Spark job on EMR, using the DataStax connector to connect to a Cassandra cluster, and I am running into problems with a Guava jar; details below. My Cassandra setup is:

cqlsh 5.0.1 | Cassandra 3.0.1 | CQL spec 3.3.1 

The Spark job runs on EMR 4.4 with the following Maven dependencies:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.5.0</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.0</version>
</dependency>

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.5.0</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kinesis-asl_2.10</artifactId>
    <version>1.5.0</version>
</dependency>

I run into the following problem when I submit the Spark job:

java.lang.ExceptionInInitializerError
       at com.datastax.spark.connector.cql.DefaultConnectionFactory$.clusterBuilder(CassandraConnectionFactory.scala:35)
       at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:87)
       at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:153)
       at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
       at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
       at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
      at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
       at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
       at ampush.event.process.core.CassandraServiceManagerImpl.getAdMetaInfo(CassandraServiceManagerImpl.java:158)
       at ampush.event.config.metric.processor.ScheduledEventAggregator$4.call(ScheduledEventAggregator.java:308)
       at ampush.event.config.metric.processor.ScheduledEventAggregator$4.call(ScheduledEventAggregator.java:290)
       at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:222)
       at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:222)
       at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:902)
       at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:902)
       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
       at org.apache.spark.scheduler.Task.run(Task.scala:88)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
       at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Detected Guava issue #1635 which indicates that a version of Guava less than 16.01 is in use.  This introduces codec resolution issues and potentially other incompatibility issues in the driver.  Please upgrade to Guava 16.01 or later.
       at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
       at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
       at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
       ... 23 more

Please let me know how to manage the Guava jars here?

Thanks.

+4
5 answers

Another solution: go to the directory

spark/jars

delete guava-14.0.1.jar and copy in guava-19.0.jar instead:

(screenshot of the spark/jars directory with guava-19.0.jar in place of guava-14.0.1.jar)
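
A minimal sketch of that swap, assuming a Spark 2.x layout where jars live under $SPARK_HOME/jars (Spark 1.x installs keep an assembly under lib/ instead) and pulling the replacement from Maven Central:

# Remove the Guava 14 jar bundled with Spark
rm "$SPARK_HOME/jars/guava-14.0.1.jar"

# Fetch a newer Guava into the same directory (19.0 shown; anything from 16.0.1 up should do)
wget -q "https://repo1.maven.org/maven2/com/google/guava/guava/19.0/guava-19.0.jar" \
    -P "$SPARK_HOME/jars/"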

+5

You can make this work by using the maven Shade plugin to shade the Guava version that the Cassandra connector brings in.

I excluded the Optional, Present and Absent classes, because I was running into problems with Spark trying to cast from the non-shaded Guava Optional to the shaded one. I'm not sure whether this will cause problems later on, but it seems to work for me for now.

You can apply the shading by adding the following to the <plugins> section of your pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>

    <configuration>
        <minimizeJar>true</minimizeJar>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>fat</shadedClassifierName>

        <relocations>
            <relocation>
                <pattern>com.google</pattern>
                <shadedPattern>shaded.guava</shadedPattern>
                <includes>
                    <include>com.google.**</include>
                </includes>

                <excludes>
                    <exclude>com.google.common.base.Optional</exclude>
                    <exclude>com.google.common.base.Absent</exclude>
                    <exclude>com.google.common.base.Present</exclude>
                </excludes>
            </relocation>
        </relocations>

        <filters>
            <filter>
                <artifact>*:*</artifact>
                <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                </excludes>
            </filter>
        </filters>

    </configuration>
</plugin>
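
Once this is in place, build and submit the shaded artifact rather than the thin jar. A sketch under assumed names (myapp, the version, and com.example.MyJob are placeholders; the fat classifier comes from shadedClassifierName above):

# Build; the shade plugin runs in the package phase and attaches the "fat" artifact
mvn clean package

# Submit the shaded jar so the relocated Guava classes are the ones loaded
spark-submit --class com.example.MyJob target/myapp-1.0-fat.jar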
+2

Adding the Guava dependency explicitly to the <dependencies> section of your POM also solves this:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>19.0</version>
</dependency>

(any version newer than 16.0.1 should work)
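
To verify which Guava version actually wins after adding this, you can ask Maven for every path through which Guava enters the build (standard maven-dependency-plugin, nothing project-specific assumed):

# Show each dependency path that pulls in Guava and the version that gets resolved
mvn dependency:tree -Dincludes=com.google.guava:guava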

+1

I got around this by downloading the guava-16.0.1 jar and passing it to both the driver and executor classpaths via Spark submit conf options:

--conf "spark.driver.extraClassPath=/guava-16.0.1.jar" --conf "spark.executor.extraClassPath=/guava-16.0.1.jar"

This worked for me; hope it helps!
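
For completeness, a full submit line along these lines (the class name, application jar, and the /path/to prefix are illustrative; the Guava jar must exist at that path on the driver and on every executor machine):

spark-submit \
  --conf "spark.driver.extraClassPath=/path/to/guava-16.0.1.jar" \
  --conf "spark.executor.extraClassPath=/path/to/guava-16.0.1.jar" \
  --class com.example.MyJob \
  my-job.jar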

0

Late answer, but adding it in case it helps someone else.

I was stuck on the same Guava conflict, running Spark 2.2 on Mesos with images built by sbt-native-packager.

The approach that worked: exclude Guava from every dependency that drags it in, force the version you want in build.sbt, and replace the Guava jar that ships inside the Spark distribution. The relevant fragments:

build.sbt

....
libraryDependencies ++= Seq(
  "com.google.guava" % "guava" % "19.0" force(),
  "org.apache.hadoop" % "hadoop-aws" % "2.7.3" excludeAll (
    ExclusionRule(organization = "org.apache.hadoop", name = "hadoop-common"), //this is for s3a
    ExclusionRule(organization = "com.google.guava",  name= "guava" )),
  "org.apache.spark" %% "spark-core" % "2.1.0"   excludeAll (
    ExclusionRule("org.glassfish.jersey.bundles.repackaged", name="jersey-guava"),
    ExclusionRule(organization = "com.google.guava",  name= "guava" )) ,
  "com.github.scopt" %% "scopt" % "3.7.0"  excludeAll (
    ExclusionRule("org.glassfish.jersey.bundles.repackaged", name="jersey-guava"),
    ExclusionRule(organization = "com.google.guava",  name= "guava" )) ,
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.6",
...
dockerCommands ++= Seq(
...
  Cmd("RUN rm /opt/spark/dist/jars/guava-14.0.1.jar"),
  Cmd("RUN wget -q http://central.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.jar  -O /opt/spark/dist/jars/guava-23.0.jar")
...

Spark ships with guava 14, while the connector needs guava 16.0.1 or newer (19 works). Forcing the version in the build alone was not enough, because Spark submit still picked up the distribution's guava 14. That is why the docker commands above delete the bundled guava-14.0.1.jar and drop in guava 23: my code is built against 19, and any version from 16.0.1 up should work, 23 was simply the latest available.
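
If you build the image this way, it is worth confirming which Guava jars actually end up in the distribution directory. A quick check (the image name is a placeholder; the path matches the Cmd lines above):

# List the Guava jars Spark will load inside the image
docker run --rm --entrypoint ls <your-image> /opt/spark/dist/jars | grep guava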

Sorry for resurrecting an old question, but Google kept landing me here after every search. Hope this helps other SBT/Mesos people.

0