I think the short answer is that the go-to approach in Haskell here is Monoids. Whenever you want to combine many things into one thing, think Monoids. Addition is a great example:
1 + 2 + 4 + 0 + 3 = 10.
When adding numbers, the "no-op" value is 0: you can always add it, and it will not change the result. Monoids generalize this concept, and Haskell calls the no-op value mempty. This is how you drop elements from your combination (in your example, you skip the values whose divisor does not divide evenly). + is the combining operation; Haskell calls it mappend, and there is an operator synonym for it: <>.
Multiplication is also a monoid: its mempty is 1, and its combiner is *.
Strings are monoids too: mempty is "", and the combiner is ++.
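To make this concrete, here is a quick GHCi session. Note that plain numbers deliberately have no Monoid instance, since both addition and multiplication qualify; Data.Monoid provides the Sum and Product newtype wrappers to pick one:

> import Data.Monoid
> "foo" <> mempty <> "bar"
"foobar"
> getSum (Sum 1 <> Sum 2 <> mempty <> Sum 4 <> Sum 3)
10
> getProduct (Product 2 <> mempty <> Product 5)
10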
So here is a very simple implementation of your function using monoids:
import Data.Monoid

f :: Int -> String -> String
f arg str = str <> modsBy 2 "a" <> modsBy 3 "b" <> modsBy 5 "c"
  where
    -- emit v when n divides arg, otherwise the identity element
    modsBy n v = if arg `mod` n == 0 then v else mempty
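For example, with arg = 10 only the divisors 2 and 5 fire:

> f 10 "foo"
"fooac"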
The best part is that since monoids generalize the concept, you can easily generalize this function so that it builds any monoid, not just a String. You can, for example, pass in a list of (divisor, monoid) pairs along with some initial monoid to start from, and whenever a divisor divides evenly, you append its monoid:
f :: Monoid a => Int -> a -> [(Int, a)] -> a
f arg initial pairs = initial <> mconcat (map modsBy pairs)
  where
    modsBy (n, v) = if arg `mod` n == 0 then v else mempty
mconcat simply folds a whole list of monoids together with <>.
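A couple of small examples in GHCi (again using the Sum wrapper for numbers):

> mconcat ["a", "b", "c"]
"abc"
> getSum (mconcat [Sum 1, Sum 2, Sum 3])
6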
So your original example can now be run like this:
> f 10 "foo" [(2,"a"), (3,"b"), (5,"c")]
"fooac"
But you can just as easily build a number. (Plain Ints are not a Monoid on their own, so wrap them in the Sum newtype from Data.Monoid to pick addition.)

> getSum (f 10 (Sum 1) [(2, Sum 1), (3, Sum 2), (5, Sum 3)])
5
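The same trick selects the multiplication monoid from earlier: wrap the numbers in Product instead (a small sketch with made-up values, not from the original example):

> getProduct (f 10 (Product 1) [(2, Product 2), (3, Product 3), (5, Product 5)])
10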
One of the great things about Haskell is that it captures and names many concepts I did not even realize were there. Monoids come in handy surprisingly often, and whole application architectures can be built on them.