The kind of Core that ghc-7.6.1 produces for foldDigits (with -O2) is
```
Rec {
$wfoldDigits_r2cK :: forall a_aha.
                     (a_aha -> GHC.Types.Int -> a_aha)
                     -> a_aha -> GHC.Prim.Int# -> a_aha
[GblId, Arity=3, Caf=NoCafRefs, Str=DmdType C(C(S))SL]
$wfoldDigits_r2cK =
  \ (@ a_aha)
    (w_s284 :: a_aha -> GHC.Types.Int -> a_aha)
    (w1_s285 :: a_aha)
    (ww_s288 :: GHC.Prim.Int#) ->
    case w1_s285 of acc_Xhi { __DEFAULT ->
    let {
      ds_sNo [Dmd=Just D(D(T)S)] :: (GHC.Types.Int, GHC.Types.Int)
      [LclId, Str=DmdType]
      ds_sNo =
        case GHC.Prim.quotRemInt# ww_s288 10
        of _ { (# ipv_aJA, ipv1_aJB #) ->
          (GHC.Types.I# ipv_aJA, GHC.Types.I# ipv1_aJB)
        } } in
    case w_s284 acc_Xhi (case ds_sNo of _ { (d_arS, m_Xsi) -> m_Xsi })
    of i_ahg { __DEFAULT ->
    case GHC.Prim.<# ww_s288 10 of _ {
      GHC.Types.False ->
        case ds_sNo of _ { (d_Xsi, m_Xs5) ->
        case d_Xsi of _ { GHC.Types.I# ww1_X28L ->
        $wfoldDigits_r2cK @ a_aha w_s284 i_ahg ww1_X28L
        } };
      GHC.Types.True -> i_ahg
    } } }
end Rec }
```
which, as you can see, re-boxes the result of the quotRemInt# call. The problem is that the function argument f stays an unknown parameter inside the loop, and, being recursive, foldDigits cannot be inlined at its call sites.
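For reference, Core of this shape comes from plain recursion that passes f along on every call. The exact original source isn't shown here; a plausible reconstruction looks like this:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Plausible original definition (a reconstruction from the Core above,
-- not necessarily the author's exact source): f is passed through every
-- recursive call, so the loop cannot be specialised on it.
foldDigits :: (a -> Int -> a) -> a -> Int -> a
foldDigits f !acc n =
  case n `quotRem` 10 of
    (q, r) -> let acc' = f acc r
              in if n < 10 then acc' else foldDigits f acc' q
```

This matches the Core's order of operations: compute quotRem, apply f to the remainder, then test n < 10 to decide whether to recurse on the quotient.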
By manually applying the static-argument transformation — moving the function argument out of the recursive loop so it becomes static —
```haskell
foldDigits :: (a -> Int -> a) -> a -> Int -> a
foldDigits f = go
  where
    go !acc 0 = acc
    go acc  n = case n `quotRem` 10 of
                  (q, r) -> go (f acc r) q
```
foldDigits becomes inlinable, and at each call site you get a specialised version that works on unboxed data, with no call to the top-level foldDigits left, for example
```
Rec { $wgo_r2di :: GHC.Prim.Int
```
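As a quick sanity check of the transformed version (digitSum is an illustrative helper name, not from the original code):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The statically-transformed foldDigits from above: f is bound once
-- outside the loop, so GHC can specialise go at each call site.
foldDigits :: (a -> Int -> a) -> a -> Int -> a
foldDigits f = go
  where
    go !acc 0 = acc
    go acc  n = case n `quotRem` 10 of
                  (q, r) -> go (f acc r) q

-- Illustrative helper (hypothetical name): sum of the decimal digits.
digitSum :: Int -> Int
digitSum = foldDigits (+) 0
```

In GHCi, digitSum 12345 evaluates to 15.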
and the impact on run time is tangible. For the original I got
```
$ ./eul145 +RTS -s -N2
608720
   1,814,289,579,592 bytes allocated in the heap
         196,407,088 bytes copied during GC
              47,184 bytes maximum residency (2 sample(s))
              30,640 bytes maximum slop
                   2 MB total memory in use (0 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0     1827331 colls, 1827331 par   23.77s   11.86s     0.0000s    0.0041s
  Gen  1           2 colls,       1 par    0.00s    0.00s     0.0001s    0.0001s

  Parallel GC work balance: 54.94% (serial 0%, perfect 100%)

  TASKS: 4 (1 bound, 3 peak workers (3 total), using -N2)

  SPARKS: 4 (3 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time  620.52s  (313.51s elapsed)
  GC      time   23.77s  ( 11.86s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time  644.29s  (325.37s elapsed)

  Alloc rate    2,923,834,808 bytes per MUT second
```
(I used -N2, since my i5 has only two physical cores), vs.
```
$ ./eul145 +RTS -s -N2
608720
      16,000,063,624 bytes allocated in the heap
             403,384 bytes copied during GC
              47,184 bytes maximum residency (2 sample(s))
              30,640 bytes maximum slop
                   2 MB total memory in use (0 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0       15852 colls,   15852 par    0.34s    0.17s     0.0000s    0.0037s
  Gen  1           2 colls,       1 par    0.00s    0.00s     0.0001s    0.0001s

  Parallel GC work balance: 43.86% (serial 0%, perfect 100%)

  TASKS: 4 (1 bound, 3 peak workers (3 total), using -N2)

  SPARKS: 4 (3 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time  314.85s  (160.08s elapsed)
  GC      time    0.34s  (  0.17s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time  315.20s  (160.25s elapsed)

  Alloc rate    50,817,657 bytes per MUT second

  Productivity  99.9% of total user, 196.5% of total elapsed
```
with the modification. The run time is roughly halved, and heap allocation drops by a factor of about 100.