I tried to implement the coin change problem in Haskell. The problem is this:
Given a set of coins and an endless supply of each coin, find the minimum number of coins required to make a given value, say n.
I wrote the following program in Haskell:
```haskell
import Data.Vector ((!), generate)
import Control.Applicative
import Control.Monad

coins = [1, 5, 10, 25, 100] :: [Int]

coinchangev :: Int -> [Int] -> Int
coinchangev n cs = v ! n
  where
    v = generate (n+1) f
    f 0 = 0
    f n = 1 + minimum [v ! x | x <- map (n-) cs, x >= 0]

main :: IO ()
main = do
  n <- read <$> getLine :: IO Int
  putStrLn . show $ coinchangev n coins
```
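One variant I considered (but have not benchmarked) replaces the boxed `Data.Vector`, where every cell is a heap-allocated thunk, with `Data.Vector.Unboxed`. Since unboxed cells cannot hold thunks, the table has to be filled strictly left to right with `constructN`, which passes the already-built prefix to the fill function. This is only a sketch; the name `coinChangeU` is mine:

```haskell
import qualified Data.Vector.Unboxed as U

coins :: [Int]
coins = [1, 5, 10, 25, 100]

-- Same recurrence as coinchangev, but filled strictly into an unboxed vector.
-- constructN hands f the prefix built so far, so prefix U.! x is always defined.
coinChangeU :: Int -> [Int] -> Int
coinChangeU n cs = U.last v
  where
    v = U.constructN (n + 1) f
    f prefix
      | i == 0    = 0
      | otherwise = 1 + minimum [prefix U.! x | x <- map (i -) cs, x >= 0]
      where
        i = U.length prefix

main :: IO ()
main = print (coinChangeU 63 coins)  -- 2x25 + 10 + 3x1 = 6 coins
```

I don't know whether this is enough to close the gap, but it avoids one obvious difference from the C++ version (pointer-per-cell representation and lazy thunks).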
I wrote the following program in C++:
```cpp
#include <iostream>
#include <vector>
using namespace std;

typedef vector<int> row;
const row coins {1, 5, 10, 25, 100};
// Sentinel for "no coin fits yet"; redefining INT_MAX via #define shadows <climits>.
const int SENTINEL = 9999999;

int coinchange(const int n, const row& coins) {
    row v(n+1, 0);
    for (int i = 1; i < n+1; i++) {
        int min = SENTINEL;
        for (size_t j = 0; j < coins.size(); j++) {
            int x = i - coins[j];
            if (x >= 0 && v[x] < min) {
                min = v[x];
            }
        }
        v[i] = 1 + min;
    }
    return v.back();
}

int main() {
    int n;
    cin >> n;
    cout << coinchange(n, coins) << endl;
    return 0;
}
```
I compiled the Haskell version with `ghc -O3` (GHC 7.8.4) and the C++ version with `g++ --std=c++1y -O3` (GCC 4.8.4).
My Haskell program takes much longer than my C++ version for the input 10000000. Here are the timings, measured with `time`:
Haskell:

```
6.40user 0.85system 0:07.26elapsed 99%CPU (0avgtext+0avgdata 2664316maxresident)k
152inputs+0outputs (1major+647822minor)pagefaults 0swaps
```

C++:

```
0.08user 0.00system 0:00.08elapsed 97%CPU (0avgtext+0avgdata 39964maxresident)k
0inputs+0outputs (0major+936minor)pagefaults 0swaps
```
My question is why I see such a big difference, and what steps (if any) I can take to improve the performance of the Haskell code. I expected the two to be roughly equivalent, or at least within the same order of magnitude. My goal was to keep the solution simple (and hopefully idiomatic) in both languages.