I have a Haskell function that I want to evaluate with exact intermediate results:
```haskell
f 0 x = 0
f n x = let tmp = f (n-1) x in tmp + (x-tmp^2)/2
```
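As a side note on why exact evaluation gets expensive: evaluated over `Rational`, the `tmp^2` term squares the denominator at each step, so the number of digits in the intermediate result roughly doubles per iteration. A small sketch (hypothetical driver, assuming `x = 1/2`) that prints the denominator's digit count:

```haskell
import Data.Ratio

-- Same recurrence as above, specialized to exact Rational arithmetic.
f :: Int -> Rational -> Rational
f 0 _ = 0
f n x = let tmp = f (n-1) x in tmp + (x - tmp^2) / 2

-- Print how many digits the denominator of f n (1/2) has; the count
-- roughly doubles each step because tmp^2 squares the denominator.
main :: IO ()
main = mapM_ step [1..12]
  where
    step n = putStrLn $ show n ++ ": "
                ++ show (length (show (denominator (f n (1%2)))))
                ++ " digits"
```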
Due to `(^2)`, the complexity grows exponentially in n. Since I want to make a plot, and the computations for different values of x are completely independent, I would expect a near-optimal speedup from parallel evaluation. My code for this:
```haskell
import Data.Ratio
import Control.Parallel.Strategies

f 0 x = 0
f n x = let tmp = f (n-1) x in tmp + (x-tmp^2)/2

main = do
    it <- readLn
    let fn       = fromRational . f it
        values   = map fn [0,1%2..10] :: [Double]
        computed = values `using` parBuffer 16 rseq
    mapM_ (putStrLn . show) computed
```
But, to my surprise, this does not scale at all (on my dual-core i3 with HT):
```
$ ghc -threaded -O f.hs
[1 of 1] Compiling Main             ( f.hs, f.o )
Linking f ...
$ time echo 20 | (./f +RTS -N1 > /dev/null)

real    0m4.760s
user    0m4.736s
sys     0m0.016s
$ time echo 20 | (./f +RTS -N2 > /dev/null)

real    0m4.041s
user    0m5.416s
sys     0m2.548s
$ time echo 20 | (./f +RTS -N3 > /dev/null)

real    0m4.884s
user    0m10.936s
sys     0m3.464s
$ time echo 20 | (./f +RTS -N4 > /dev/null)

real    0m5.536s
user    0m17.028s
sys     0m3.888s
```
What am I doing wrong here? It seems like the program spends a lot of time in the kernel (the sys time?) instead of doing useful work.
Tobias