I tried to reproduce the example from the [Databricks][1] blog post and apply it to the new Kafka connector with Structured Streaming, but I cannot parse the JSON correctly using the built-in methods in Spark ...
Note: the topic is written to Kafka in JSON format.
    val ds1 = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", IP + ":9092")
      .option("zookeeper.connect", IP + ":2181")
      .option("subscribe", TOPIC)
      .option("startingOffsets", "earliest")
      .option("max.poll.records", 10)
      .option("failOnDataLoss", false)
      .load()
The following code does not compile; I believe it is because the json column is a string, which does not match the signature of the from_json method ...
    val df = ds1
      .select($"value" cast "string" as "json")
      .select(from_json("json") as "data")
      .select("data.*")
Any tips?
[UPDATE] Working example: https://github.com/katsou55/kafka-spark-structured-streaming-example/blob/master/src/main/scala-2.11/Main.scala
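The gist of the fix: from_json takes a Column plus a schema, not a plain string. Below is a minimal sketch of the working approach, assuming an illustrative two-field schema (the real schema must match the JSON actually written to the topic):

    import org.apache.spark.sql.functions.from_json
    import org.apache.spark.sql.types._
    import spark.implicits._

    // Schema assumed for illustration only; replace the fields
    // with the ones present in your Kafka messages.
    val schema = new StructType()
      .add("id", StringType)
      .add("timestamp", TimestampType)

    val df = ds1
      .select($"value".cast("string").as("json"))          // Kafka value is bytes; cast to string
      .select(from_json($"json", schema).as("data"))       // parse with an explicit schema
      .select("data.*")                                    // flatten the struct into columns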
scala apache-spark apache-kafka structured-streaming apache-kafka-connect spark-structured-streaming
carlos rodrigues