Finding min / max with PySpark in a single pass over the data

I have an RDD containing a huge list of numbers (the lengths of the lines from a file), and I want to know how to get the min and max in a single pass over the data.

I know about the min() and max() functions, but calling both would require two passes.

2 answers

Try the following:

>>> from pyspark.statcounter import StatCounter
>>> 
>>> rdd = sc.parallelize([9, -1, 0, 99, 0, -10])
>>> stats = rdd.aggregate(StatCounter(), StatCounter.merge, StatCounter.mergeStats)
>>> stats.minValue, stats.maxValue
(-10.0, 99.0)
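
For completeness, rdd.stats() runs the same kind of single-pass aggregation under the hood and returns a populated StatCounter, so with the rdd above this can be shortened to (a minimal sketch; it should print the same pair):

>>> s = rdd.stats()          # one pass: count, mean, stdev, min, max
>>> s.min(), s.max()
(-10.0, 99.0)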

There is also a less elegant solution using accumulators. The awkward part is that you have to choose the zero / initial values beforehand so that they do not interfere with the data:

from pyspark.accumulators import AccumulatorParam

class MinMaxAccumulatorParam(AccumulatorParam):
    def zero(self, value):
        return value
    def addInPlace(self, val1, val2):
        # Element 0 tracks the running min, element 1 the running max.
        return (min(val1[0], val2[0]), max(val1[1], val2[1]))

# Initial [min, max] sentinels; they must be chosen so the data overwrites
# them (here the true min is assumed to be <= 500 and the true max >= -500).
minmaxAccu = sc.accumulator([500, -500], MinMaxAccumulatorParam())

def g(x):
    global minmaxAccu
    minmaxAccu += (x, x)

rdd = sc.parallelize([1, 2, 3, 4, 5])

rdd.foreach(g)

In [149]: minmaxAccu.value
Out[149]: (1, 5)
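
If you would rather not hand-pick sentinel values at all, the same single-pass result can be obtained with a plain reduce over (value, value) pairs. This is only a minimal sketch, assuming the same SparkContext sc as above:

# Turn each number into a (min, max) candidate pair, then combine the
# pairs pairwise; the whole computation is one pass over the RDD.
rdd = sc.parallelize([1, 2, 3, 4, 5])
lo, hi = rdd.map(lambda x: (x, x)).reduce(
    lambda a, b: (min(a[0], b[0]), max(a[1], b[1]))
)
print(lo, hi)  # 1 5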
