Implementing the FFT algorithm with Hadoop

I want to implement the Fast Fourier Transform (FFT) algorithm with Hadoop. I know the recursive FFT algorithm, but I need some guidance on how to implement it as a Map/Reduce job. Any suggestions?

Thanks.

2 answers

I have a preliminary solution here:

http://blog.jierenchen.net/2010/08/fft-with-mapreduce.html

I have not tried coding this up, so I'm not sure it works 100%. Let me know if I've made any mistakes.
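For context on how the recursive FFT becomes a single map / shuffle / reduce pass, here is the standard four-step (Cooley-Tukey) split; this is the textbook decomposition, not necessarily exactly what the post above does. For a transform of length N = N_1 N_2, write n = N_2 n_1 + n_2 and k = k_1 + N_1 k_2; then

X[k_1 + N_1 k_2] \;=\; \sum_{n_2=0}^{N_2-1} \omega_N^{n_2 k_1} \Bigl( \sum_{n_1=0}^{N_1-1} x[N_2 n_1 + n_2]\, \omega_{N_1}^{n_1 k_1} \Bigr) \omega_{N_2}^{n_2 k_2}, \qquad \omega_M = e^{-2\pi i / M}.

The inner sums are N_2 independent length-N_1 FFTs, one per column n_2: that is the map phase. Each mapper multiplies its results by the twiddle factors \omega_N^{n_2 k_1} and emits them keyed by k_1, so the shuffle regroups the data (effectively a transpose). Each reducer then computes one length-N_2 FFT over n_2 (the outer sum) and writes the output bins X[k_1 + N_1 k_2].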


Using MapReduce to compute large-scale FFTs is considered in detail in [1]. The corresponding presentation slides are available at [2], and the source code for the Hadoop implementation is available at [3]. A rough sketch of what one such MapReduce pass can look like follows the references below.

[1] Schönhage-Strassen Algorithm with MapReduce for Terabit Integer Multiplication (SNC 2011)

[2] http://www.slideshare.net/hortonworks/large-scale-math-with-hadoop-mapreduce

[3] https://issues.apache.org/jira/browse/MAPREDUCE-2471
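To give a flavor of how one MapReduce pass of the four-step decomposition could be expressed with the plain Hadoop API, here is a rough, untested sketch. It is not the code from [3]; the class names, the text input layout (one column of samples per line), and the hard-coded sizes N1/N2 are all assumptions, and the per-column and per-row transforms are written as naive DFTs purely to keep the example self-contained.

// Hypothetical sketch only -- NOT the code from [3]. One MapReduce pass of the
// four-step decomposition sketched above, assuming N = N1 * N2 and an input file
// where each line holds one "column" of samples for a fixed n2:
//   n2 <TAB> re,im re,im ... re,im      (N1 pairs: the values x[N2*n1 + n2])
// The per-column and per-row transforms are naive O(M^2) DFTs to stay
// self-contained; a real job would call an FFT routine and use complex writables.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FourStepFftSketch {
  static final int N1 = 1024;            // column-FFT size (assumed)
  static final int N2 = 1024;            // row-FFT size (assumed)
  static final long N = (long) N1 * N2;  // total transform length

  /** Map: DFT one column (fixed n2), apply twiddle factors, emit one record per k1. */
  public static class ColumnDftMapper extends Mapper<Object, Text, IntWritable, Text> {
    @Override
    protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split("\t");
      int n2 = Integer.parseInt(parts[0]);
      String[] samples = parts[1].trim().split(" ");
      double[] re = new double[N1], im = new double[N1];
      for (int n1 = 0; n1 < N1; n1++) {
        String[] c = samples[n1].split(",");
        re[n1] = Double.parseDouble(c[0]);
        im[n1] = Double.parseDouble(c[1]);
      }
      for (int k1 = 0; k1 < N1; k1++) {
        double sumRe = 0, sumIm = 0;
        for (int n1 = 0; n1 < N1; n1++) {
          double a = -2 * Math.PI * n1 * k1 / N1;          // omega_{N1}^{n1*k1}
          sumRe += re[n1] * Math.cos(a) - im[n1] * Math.sin(a);
          sumIm += re[n1] * Math.sin(a) + im[n1] * Math.cos(a);
        }
        double t = -2 * Math.PI * (double) n2 * k1 / N;    // twiddle omega_N^{n2*k1}
        double outRe = sumRe * Math.cos(t) - sumIm * Math.sin(t);
        double outIm = sumRe * Math.sin(t) + sumIm * Math.cos(t);
        ctx.write(new IntWritable(k1), new Text(n2 + " " + outRe + " " + outIm));
      }
    }
  }

  /** Reduce: gather the N2 twiddled values for one k1 and DFT them over n2. */
  public static class RowDftReducer extends Reducer<IntWritable, Text, Text, Text> {
    @Override
    protected void reduce(IntWritable k1, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      double[] re = new double[N2], im = new double[N2];
      for (Text v : values) {
        String[] p = v.toString().split(" ");
        int n2 = Integer.parseInt(p[0]);
        re[n2] = Double.parseDouble(p[1]);
        im[n2] = Double.parseDouble(p[2]);
      }
      for (int k2 = 0; k2 < N2; k2++) {
        double sumRe = 0, sumIm = 0;
        for (int n2 = 0; n2 < N2; n2++) {
          double a = -2 * Math.PI * n2 * k2 / N2;          // omega_{N2}^{n2*k2}
          sumRe += re[n2] * Math.cos(a) - im[n2] * Math.sin(a);
          sumIm += re[n2] * Math.sin(a) + im[n2] * Math.cos(a);
        }
        long k = k1.get() + (long) N1 * k2;                // output bin X[k1 + N1*k2]
        ctx.write(new Text("X[" + k + "]"), new Text(sumRe + " " + sumIm));
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "four-step-fft-sketch");
    job.setJarByClass(FourStepFftSketch.class);
    job.setMapperClass(ColumnDftMapper.class);
    job.setReducerClass(RowDftReducer.class);
    job.setMapOutputKeyClass(IntWritable.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Each input line (one n2) is handled by a mapper, the shuffle performs the transpose implicitly by grouping on k1, and each reducer produces N2 output bins. In practice the expensive part is that shuffle, which moves all N complex values across the cluster between the two DFT stages.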

