Haskell underscore ( _ ) vs. an explicit variable

I have been studying Haskell for several weeks, and I have a question about using the underscore ( _ ) as a function parameter. My question is best asked with a concrete example. Say I want to define a function that retrieves a list element at a given index - yes, I understand that (!!) is already defined. Two ways I can define such a function (I'm sure there are more) are as follows:

Version 1

 indexedElement :: [a] -> Int -> a
 indexedElement xs n | n < 0 = error "Index can not be negative."
 indexedElement [] n = error "Index must be smaller than the length of the list."
 indexedElement (x:xs) 0 = x
 indexedElement (x:xs) n = indexedElement xs (n - 1)

Version 2

 indexedElement :: [a] -> Int -> a
 indexedElement _ n | n < 0 = error "Index can not be negative."
 indexedElement [] _ = error "Index must be smaller than the length of the list."
 indexedElement (x:_) 0 = x
 indexedElement (_:xs) n = indexedElement xs (n - 1)
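For concreteness, here is a small self-contained sketch (repeating version 2 so it can be run directly) that exercises the function on a couple of inputs:

```haskell
-- Version 2, copied from above so this file compiles on its own.
indexedElement :: [a] -> Int -> a
indexedElement _ n | n < 0 = error "Index can not be negative."
indexedElement [] _ = error "Index must be smaller than the length of the list."
indexedElement (x:_) 0 = x
indexedElement (_:xs) n = indexedElement xs (n - 1)

main :: IO ()
main = do
  print (indexedElement [10, 20, 30 :: Int] 0)  -- prints 10
  print (indexedElement "hello" 1)              -- prints 'e'
```

Both versions behave identically here; the difference is purely in how the patterns read.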

The two versions are obviously very similar; the only difference is the use of explicit variables versus underscores. To me, _ says that literally anything can go there, while an explicit variable such as n makes it more obvious that the argument is an integer. For this reason I prefer version 1, but the GHC source code for (!!) is written in the style of version 2. Is there a functional advantage to the second version? If not, would hardcore Haskell programmers frown upon version 1? I understand the importance of writing code in a consistent style, so I try to follow the "unwritten rules" of each language, but this is one case where I prefer the first version and don't think it makes the code harder to read. I don't know whether this comes from my background in pure mathematics, but I'd like to hear from more experienced Haskell veterans.

1 answer

Is there a functional advantage of the second version?

I do not think there is any operational difference. But I find the second version more readable: _ indicates that an argument is not used at all, so while reading the code I can simply ignore it and focus on the other parameters. In the first version, I wonder: n is bound, but did the author forget to use it? Or is the argument simply not needed? The second version avoids that mental overhead. But this is just my opinion. :)
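A minimal illustration of that reading difference, using a hypothetical pair-projection function (not from the question, just for contrast):

```haskell
-- With a wildcard: the `_` tells the reader immediately that the
-- second component is irrelevant to this equation.
firstOfPair :: (a, b) -> a
firstOfPair (x, _) = x

-- With a named variable: `y` is bound but never used, which invites
-- the question "where is y supposed to be used?"
firstOfPair' :: (a, b) -> a
firstOfPair' (x, y) = x

main :: IO ()
main = print (firstOfPair (1 :: Int, "ignored"))  -- prints 1
```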

In fact, if you enable the warning flag ( -Wall ) and compile your code, GHC will issue warnings for the first version:

 [1 of 1] Compiling Main             ( code.hs, code.o )

 code.hs:2:16: Warning: Defined but not used: 'xs'
 code.hs:3:19: Warning: Defined but not used: 'n'
 code.hs:4:19: Warning: Defined but not used: 'xs'
 code.hs:5:17: Warning: Defined but not used: 'x'
 code.hs:8:17: Warning: Defined but not used: 'xs'
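There is also a middle ground worth knowing: GHC does not report "Defined but not used" for names that begin with an underscore, so writing `_xs` or `_n` lets you document an argument's role while still signaling that it is intentionally unused. A sketch of the same function in that style, with -Wall enabled via a pragma:

```haskell
{-# OPTIONS_GHC -Wall #-}

-- Underscore-prefixed names: descriptive like version 1, but they
-- suppress the unused-binding warnings like version 2.
indexedElement :: [a] -> Int -> a
indexedElement _xs n | n < 0 = error "Index can not be negative."
indexedElement [] _n = error "Index must be smaller than the length of the list."
indexedElement (x:_xs) 0 = x
indexedElement (_x:xs) n = indexedElement xs (n - 1)

main :: IO ()
main = print (indexedElement "abc" 2)  -- prints 'c'
```

This compiles cleanly under -Wall, so the warning-driven argument for version 2 disappears while the naming stays informative.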
