Here is the code I'm trying to do for reduceByKey:
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext
import scala.math.random
import org.apache.spark._
import org.apache.spark.storage.StorageLevel

object MapReduce {
  def main(args: Array[String]) {
    val sc = new SparkContext("local[4]", "")
    val file = sc.textFile("c:/data-files/myfile.txt")
    val counts = file.flatMap(line => line.split(" "))
                     .map(word => (word, 1))
                     .reduceByKey(_ + _)
  }
}
```
This produces a compiler error: "cannot resolve symbol reduceByKey".
When I hover over reduceByKey, the IDE offers three possible implementations, so it seems the method is found?
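As a side note, the (word, 1) / sum-by-key logic itself can be sanity-checked with plain Scala collections, where groupBy plus a per-key sum plays the role of reduceByKey. This is only an illustrative sketch of the semantics, not Spark code:

```scala
// Word count over an in-memory string, mirroring the RDD pipeline:
// flatMap(split) -> map(word -> (word, 1)) -> reduceByKey(_ + _)
val text = Seq("a b a", "c b a")          // stand-in for the lines of the file
val counts: Map[String, Int] = text
  .flatMap(line => line.split(" "))       // like RDD.flatMap
  .map(word => (word, 1))                 // like RDD.map to pairs
  .groupBy(_._1)                          // group pairs by key
  .map { case (w, ps) => (w, ps.map(_._2).sum) } // sum counts per key, like reduceByKey(_ + _)
```

If this compiles and gives the expected counts, the pipeline logic is fine and the error is specific to resolving reduceByKey on the RDD.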

scala intellij-idea apache-spark
blue-sky