First, make sure you're compiling with -O2. It makes a lot of performance problems simply go away :)
The first problem I see is that null there is just a variable name, not a test; it matches anything. You want the pattern []. It happens to be equivalent here, because the only cases are x:xs and [], but it won't always be.
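For reference, I'm assuming your definition looks something like this (a reconstruction, since the original code isn't quoted here):

myAdd :: [Integer] -> Integer
myAdd (x:xs) = x + myAdd xs
myAdd null   = 0   -- 'null' here is just a variable binding; it matches anything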
The problem here is simple: when you call myAdd [1,2,3,4], it turns into this:
1 + (2 + (3 + (4 + 0)))
without ever reducing any of those additions to a number, because of Haskell's non-strict semantics. The solution is simple:
myAdd = myAdd' 0
  where myAdd' !total []     = total
        myAdd' !total (x:xs) = myAdd' (total + x) xs
(You'll need {-# LANGUAGE BangPatterns #-} at the top of the source file to compile this.)
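With the bang pattern in place, even a large list runs in constant space. For example, loading the file in GHCi (a sketch of a session; 50000005000000 is just the sum 1 + ... + 10000000):

ghci> myAdd [1..10000000]
50000005000000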
The new definition accumulates the sum in an extra parameter, and it actually is tail recursive (yours is not: in your version, + is in tail position rather than myAdd). But it's not really tail recursion we care about in Haskell; that distinction mostly matters in strict languages. The secret here is the bang pattern on total: it forces total to be evaluated every time myAdd' recurses, so no unevaluated additions pile up, and the function runs in constant space. GHC can actually figure this out by itself with -O2, thanks to its strictness analysis, but I think it's usually best to be explicit about what you do and don't want evaluated strictly.
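If you'd rather not enable an extension, the bang pattern is roughly sugar for seq; this sketch does essentially the same thing in plain Haskell:

myAdd = myAdd' 0
  where myAdd' total []     = total
        -- Force the accumulator before recursing, so at most one
        -- unevaluated addition exists at any time:
        myAdd' total (x:xs) = total `seq` myAdd' (total + x) xs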
Note that if addition were lazy, your definition of myAdd would work fine; the problem is that you're doing a lazy traversal of the list with a strict operation, and that's what ultimately overflows the stack. This mostly comes up with arithmetic, which is strict for the standard numeric types (Int, Integer, Float, Double, etc.).
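To see that it's the strict operation that matters, compare the same recursion shape with the lazy constructor (:) instead of (+). (myCopy is just a name I'm making up for the example.)

myCopy :: [a] -> [a]
myCopy []     = []
myCopy (x:xs) = x : myCopy xs   -- (:) doesn't force its arguments, so nothing piles up

This works even on infinite lists: take 5 (myCopy [1..]) is perfectly fine.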
The explicit accumulator version is still pretty ugly, though, and it would be a pain to write something like that every time we want a strict fold over a list. Thankfully, Haskell has an abstraction ready for this!
myAdd = foldl' (+) 0
(To compile this, you'll need to add import Data.List.)
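Put together as a complete file, it might look like this (the type signature is my addition):

import Data.List (foldl')

myAdd :: [Integer] -> Integer
myAdd = foldl' (+) 0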
foldl' (+) 0 [a, b, c, d] is just like (((0 + a) + b) + c) + d, except that at each application of (+) (which is how we refer to the binary operator + as a function value) the result is forced to be evaluated. The resulting code is cleaner, faster, and easier to read: once you know how the list folds work, you can understand any definition written in terms of them more easily than a raw recursive definition.
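For the curious: foldl' is essentially the accumulate-and-seq pattern from above, packaged up. Here is a sketch of its shape (the actual Data.List version differs in minor details):

myFoldl' :: (b -> a -> b) -> b -> [a] -> b
myFoldl' f z []     = z
myFoldl' f z (x:xs) = let z' = f z x       -- apply the operator...
                      in z' `seq` myFoldl' f z' xs  -- ...and force the result before recursing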
The main point here is not that the compiler can't figure out how to make your program efficient; it's that making it as efficient as you'd like could change its semantics, which an optimisation should never do. Haskell's non-strict semantics certainly pose a learning curve for programmers coming from more “traditional” languages like C, but it gets easier over time, and once you see the power and abstraction that non-strictness offers, you'll never want to go back :)
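For a concrete illustration of why the compiler can't just force things for you, compare foldl and foldl' on an accumulator that is never actually used (a GHCi sketch; the exact exception text may vary by GHC version):

ghci> import Data.List (foldl')
ghci> foldl  (\_ x -> x) 0 [undefined, 5]
5
ghci> foldl' (\_ x -> x) 0 [undefined, 5]
*** Exception: Prelude.undefined

The lazy foldl never demands the undefined accumulator, but foldl' does, so forcing evaluation really is a change in behaviour, not just in speed.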