Consider the function
add a b = a + b
It works:
*Main> add 1 2
3
However, if I add a type signature, indicating that I want to add things of the same type:
add :: a -> a -> a
add a b = a + b
I get an error message:
test.hs:3:10:
    Could not deduce (Num a) from the context ()
      arising from a use of `+' at test.hs:3:10-14
    Possible fix:
      add (Num a) to the context of the type signature for `add'
    In the expression: a + b
    In the definition of `add': add a b = a + b
So GHC can evidently infer that I need a Num constraint, since it just told me so. Adding it:
add :: Num a => a -> a -> a
add a b = a + b
Works.
Why does GHC require me to add the type constraint? If I want generic programming, why can't it just work for everything that has a + operator?
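For what it's worth, if I leave the signature off entirely, GHCi does infer the constrained type for me (a quick check in my session; the exact formatting of the output depends on the GHC version):

*Main> :t add
add :: (Num a) => a -> a -> a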
In C++ template programming, you can do this easily:
#include <string>
#include <cstdio>

using namespace std;

template <typename T>
T add(T a, T b) {
    return a + b;
}

int main() {
    printf("%d, %f, %s\n",
           add(1, 2),
           add(1.0, 3.4),
           add(string("foo"), string("bar")).c_str());
    return 0;
}
The compiler figures out the argument types of add and generates a version of the function for each such type. There seems to be a fundamental difference in Haskell's approach; can you describe it and discuss the trade-offs? It seems to me this would be resolved if GHC just filled in the type constraint for me, since it obviously decided it was needed. Still, why require the constraint at all? Why not just compile the function successfully, as long as it is only used in valid contexts where the arguments are in Num?
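To make the comparison concrete, here is roughly what the same three calls look like on the Haskell side (a sketch I put together; the String line is commented out because, unlike the C++ version where std::string's operator+ is found at instantiation time, String has no Num instance):

add :: Num a => a -> a -> a
add a b = a + b

main :: IO ()
main = do
    print (add (1 :: Int) 2)          -- prints 3
    print (add (1.0 :: Double) 3.4)   -- prints 4.4
    -- print (add "foo" "bar")        -- rejected: no Num instance for String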
type-inference type-systems haskell typeclass
Steve