Yes, it's true: I'm on Spark 1.6.0, and there you need a HiveContext to use the dense_rank window function.
From Spark 2.0.0 onwards you will no longer need a HiveContext for this: SparkSession takes its place, and window functions such as dense_rank work without Hive support.
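For reference, a minimal sketch of the same query on Spark 2.x (the table and column names are the ones from the example below; enableHiveSupport is only needed if hive_employees actually lives in the Hive metastore):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("denseRank test")
  .enableHiveSupport() // only needed when hive_employees is a real Hive table
  .getOrCreate()

// dense_rank works directly in Spark SQL from 2.0, no HiveContext required
spark.sql("select location, name, salary, dense_rank() over (partition by location order by salary desc) as rank from hive_employees").show()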
So, for Spark 1.4 through 1.6 (anything below 2.0), you would do it like this.
The hive_employees table has three fields: location: String, name: String, salary: Int.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

val conf = new SparkConf().setAppName("denseRank test") // .setMaster("local") to run locally
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc) // not enough here: window functions need a HiveContext in 1.x
val hqlContext = new HiveContext(sc)

val result = hqlContext.sql("select location, name, salary, dense_rank() over (partition by location order by salary desc) as rank from hive_employees")
result.show()
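As a side note, the same ranking can also be written with the DataFrame window API instead of raw SQL. A minimal sketch for 1.6, assuming the same table and columns (in 1.x the function is called denseRank; it was renamed dense_rank in 2.0):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.denseRank

val employees = hqlContext.table("hive_employees")
// same window as the SQL above: rank salaries from highest to lowest within each location
val byLocation = Window.partitionBy("location").orderBy(employees("salary").desc)
employees.withColumn("rank", denseRank().over(byLocation)).show()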