Cost accounting

Let me explain what I mean by a cost-sensitive fold with an example: computing pi to arbitrary precision. We can use the Leibniz formula (not very efficient, but nice and simple) together with lazy lists:

pi = foldr1 (+) [(fromIntegral $ 4*(-1)^i)/(fromIntegral $ 2*i+1) | i<-[0..]] 

Now, obviously, this calculation will never complete, because we would have to evaluate every element of an infinite list. But in practice I don't need the exact value of pi; I only need a certain number of decimal places. I could define pi' as follows:

 pi' n = foldr1 (+) [(fromIntegral $ 4*(-1)^i)/(fromIntegral $ 2*i+1) | i<-[0..n]] 

but it is not at all obvious what value of n needs to be passed in order to get the desired accuracy. I need some kind of economical fold that stops folding as soon as the required accuracy is reached. Is there such a fold?

(Note that in this case it is easy to tell when we have reached the required accuracy. Since the Leibniz formula is an alternating series, the absolute error of a partial sum is always less than the absolute value of the next term.)
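Exploiting that bound, a minimal sketch of such a stopping rule (the name piToPrecision and the explicit recursion are my own, not a standard fold):

```haskell
-- Sum Leibniz terms until the next term's absolute value, an upper
-- bound on the error of an alternating series, drops below eps.
piToPrecision :: Double -> Double
piToPrecision eps = go 0 0
  where
    go acc i
      | abs t < eps = acc
      | otherwise   = go (acc + t) (i + 1)
      where
        t = 4 * (-1) ** i / (2 * i + 1)
```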

Edit: It would be great to have economical folds that could also take calculation time and energy consumption into account. For example, I want the most accurate value of pi I can get, given that I have 1 hour of calculation time and 10 kWh of energy to spend. But I understand that this would no longer be purely functional.
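A rough sketch of the time-budget half of this idea (not purely functional, as noted; bestWithin is a hypothetical helper built on System.Timeout, and energy budgets would need OS-level support that this does not attempt):

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)
import System.Timeout (timeout)

-- Walk an infinite list of improving approximations, recording the
-- latest one seen, and stop when the time budget (in microseconds)
-- expires; return the best approximation reached so far.
bestWithin :: Int -> [Double] -> IO Double
bestWithin micros (x : xs) = do
  ref <- newIORef x
  _ <- timeout micros (mapM_ (\v -> v `seq` writeIORef ref v) xs)
  readIORef ref
bestWithin _ [] = error "bestWithin: empty list"
```

For a one-hour budget you would call something like `bestWithin (3600 * 1000000) approximations`.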

+4
4 answers

My recommendation is to use a scan instead of a fold. Then walk the resulting list until you reach the accuracy you need. A useful special case of a left scan ( scanl ) is the iterate function:

    piList :: [Double]
    piList = map (4*) . scanl (+) 0 . map recip . iterate (\x -> -(x + 2 * signum x)) $ 1

Now you can walk this list. For example, you can check when the change between consecutive elements drops below a given tolerance:

    findPrec :: (Num a, Ord a) => a -> [a] -> Maybe a
    findPrec p (x0:x1:xs)
      | abs (x1 - x0) <= p = Just x0
      | otherwise          = findPrec p (x1:xs)
    findPrec _ _ = Nothing
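A quick usage sketch combining the answer's two definitions (repeated here so the snippet stands alone; the tolerance 1e-4 is an arbitrary choice):

```haskell
piList :: [Double]
piList = map (4 *) . scanl (+) 0 . map recip
       . iterate (\x -> -(x + 2 * signum x)) $ 1

findPrec :: (Num a, Ord a) => a -> [a] -> Maybe a
findPrec p (x0 : x1 : xs)
  | abs (x1 - x0) <= p = Just x0
  | otherwise          = findPrec p (x1 : xs)
findPrec _ _ = Nothing

-- First partial sum whose change to the next one is at most 1e-4.
-- By the alternating-series bound, it is within 1e-4 of pi.
approxPi :: Maybe Double
approxPi = findPrec 1e-4 piList
```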
+10

Haskell's way of doing this is to produce an infinite list of increasingly accurate answers, then walk down it and grab the first one that has the accuracy you need.

    import Data.List (findIndex)

    pis = scanl (+) 0 [4 * (-1) ** i / (2 * i + 1) | i <- [0..]]

    accuracies = zipWith (\x y -> abs (x - y)) pis (tail pis)

    piToWithin epsilon = case findIndex (< epsilon) accuracies of
      Just n  -> pis !! n
      Nothing -> error "Wow, a non-terminating loop terminated!"
+5

In the general case, the fold you ask for does not exist: you must estimate the accuracy yourself. That can be a hard problem in general, but for most practically useful sequences a reasonable upper bound on the error of the partial sums is already known, usually derived by someone else. I would advise you to consult the relevant literature, such as numerical analysis textbooks, which usually have a section on estimating the sum of an infinite series and give such upper bounds.

However, there is a general rule of thumb: if a numerical process converges, its successive differences approach zero roughly like a geometric progression, so if the last two differences are 1.5 and 1.0, the next one will be somewhere around 0.67, and so on (it is better to estimate the ratio from the last few terms, not just two). Using this rule and the formula for the sum of a geometric series, you can usually find a reasonable estimate of the numerical accuracy. Note: this is a rule of thumb (it has a name, but I forgot it), not a rigorous theorem.
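A sketch of this rule of thumb (the name tailEstimate is mine): estimate the common ratio r from the last two differences and bound the remaining tail by the geometric-series sum d * r / (1 - r).

```haskell
-- Given three consecutive partial sums, assume the differences decay
-- geometrically and estimate the error of the latest partial sum.
-- Returns Nothing when the differences are not shrinking.
tailEstimate :: Double -> Double -> Double -> Maybe Double
tailEstimate x0 x1 x2 =
  let d1 = abs (x1 - x0)  -- earlier difference
      d2 = abs (x2 - x1)  -- latest difference
      r  = d2 / d1        -- estimated common ratio
  in if r < 1 then Just (d2 * r / (1 - r)) else Nothing
```

For the partial sums 1, 1.5, 1.75 of the geometric series 1 + 1/2 + 1/4 + ... the estimate is 0.25, which is exactly the remaining tail; for other sequences it is only a rough guide.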

In addition, the IEEE Double / Float representation has limited precision, and at some point adding the small numbers from the tail of the sequence will not change the computed partial sum at all. If you read about IEEE floating-point representation, for this case you can build your fold: stop as soon as adding the next term leaves the partial sum unchanged.
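A sketch of that stopping rule (the name sumUntilFixed is mine). Note that it only terminates in reasonable time for series whose terms shrink quickly; for the slowly converging Leibniz series it would need on the order of 10^16 terms:

```haskell
-- Sum terms until adding the next one no longer changes the Double
-- partial sum, i.e. until we hit the limit of IEEE double precision.
sumUntilFixed :: [Double] -> Double
sumUntilFixed = go 0
  where
    go acc (t : ts)
      | acc + t == acc = acc
      | otherwise      = go (acc + t) ts
    go acc []          = acc
```

For example, `sumUntilFixed (iterate (/ 2) 1)` sums the geometric series 1 + 1/2 + 1/4 + ... and stops once the terms fall below the precision of the accumulated sum.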

Summary: there is no general solution, but in practice reasonable estimates exist for most useful sequences, usually taken from the literature for each type of sequence, or derived from the limits of the numerical hardware.

+2

Some good examples of what Daniel Wagner suggests can be found in the paper Why Functional Programming Matters.

Specific examples from the paper are: iterative root finding (Newton-Raphson), numerical differentiation, and numerical integration.
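For instance, the paper's Newton-Raphson square root follows the same pattern as the answers above, an infinite list of improving guesses cut off by a tolerance (a sketch from memory, not the paper's exact code):

```haskell
-- Square root via Newton-Raphson: generate improving guesses with
-- iterate, then take the first guess that agrees with its successor
-- to within eps.
sqrtApprox :: Double -> Double -> Double
sqrtApprox eps n = within (iterate next n)
  where
    next x = (x + n / x) / 2
    within (a : b : rest)
      | abs (a - b) <= eps = b
      | otherwise          = within (b : rest)
    within _ = error "unreachable: iterate yields an infinite list"
```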

+1
