Here are the implementations of foldr and foldl in Haskell ..

foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)

foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs
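The two folds associate in opposite directions, which matters whenever the operator is not associative. A small check (my own example, not from the post):

```haskell
-- foldr nests to the right, foldl to the left; with a
-- non-associative operator like (-) the results differ.
rightFold :: Int
rightFold = foldr (-) 0 [1, 2, 3]   -- 1 - (2 - (3 - 0)) = 2

leftFold :: Int
leftFold = foldl (-) 0 [1, 2, 3]    -- ((0 - 1) - 2) - 3 = -6
```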
foldl is tail recursive, while foldr is not. But throw Haskell's non-strict evaluation strategy into the mix and we find some non-obvious consequences of using the supposedly optimized foldl over foldr.

In the case of foldl, Haskell defers evaluation of the accumulator: it builds up a chain of unevaluated thunks and forces nothing until the end of the list is reached. Evaluating that final expression can overflow the stack if it is big enough. Hence foldr, despite not being tail recursive, is a better option in many cases. Haskell also offers a strict version of foldl (foldl', from Data.List) that forces the evaluation of the accumulator before making the recursive call.

Here is an example of summing over a list that uses accumulator based summing and tail recursion ..
sumList acc [] = acc
sumList acc (x:xs) = sumList (acc+x) xs
When I try ..
Main> print . sumList 0 . take 1000000 $ [1..]
*** Exception: stack overflow
It's tail recursive, but lazy evaluation never forces the accumulator: it keeps building a chain of unevaluated thunks (((0+1)+2)+3)... till the end of the list is reached, and forcing that chain ultimately results in a stack overflow.
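One way to repair sumList while keeping the hand-written recursion is to force the accumulator with `seq` before recursing. A minimal sketch (sumListStrict is my own name, not from the post):

```haskell
-- Force the accumulator on every step with `seq`, so no thunk
-- chain builds up in the recursion (sumListStrict is a made-up name).
sumListStrict :: Int -> [Int] -> Int
sumListStrict acc []     = acc
sumListStrict acc (x:xs) = let acc' = acc + x
                           in acc' `seq` sumListStrict acc' xs
```

With the accumulator forced at each step, sumListStrict 0 (take 1000000 [1..]) runs in constant stack space.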
Instead try the foldl' combinator ..

import Data.List (foldl')

sumList' xs = foldl' (+) 0 xs

and you will be happy !!
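To see this concretely, here is a quick check of the claim: foldl' sums a million elements without blowing the stack, because it forces the accumulator at each step.

```haskell
import Data.List (foldl')

-- foldl' evaluates the accumulator strictly on every step, so the
-- sum of a large list runs in constant stack space.
bigSum :: Integer
bigSum = foldl' (+) 0 [1..1000000]   -- 500000500000
```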
Let us take a look at how exactly Haskell evaluates an expression lazily. Instead of evaluating a value directly, Haskell uses a mechanism called a *thunk* to abstract the computation of the expression. A thunk contains the instructions to compute the value, which are activated only when the evaluation is forced. Let us consider the following snippet, which picks every third element from the input list ..
thirds [] = []
thirds [_] = []
thirds [_, _] = []
thirds (_:_:x:xs) = x : thirds xs
Consider the interesting recursive case above. Haskell produces a thunk that contains a list cons cell. The cons cell contains the element value (x) and another, yet unevaluated, thunk for the rest of the list. The function thirds is lazy and will consume only that part of the input which is required to produce the result. Hence if we request the first element of the result list, it comes back instantly without evaluating the rest of the list: computing

head $ thirds [1..10000000]

gives us the first element, 3, in constant time. Another important advantage of lazy evaluation is that Haskell can handle infinite data structures seamlessly. In the above example we can change the invocation to

head $ thirds [1..]

and will still get back 3 instantly and in constant time.
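A quick sanity check of both claims. The definition is reproduced here to keep the snippet self-contained, with a base case for a length-2 remainder added so the patterns are exhaustive:

```haskell
thirds :: [a] -> [a]
thirds []         = []
thirds [_]        = []
thirds [_, _]     = []   -- added: avoids a pattern-match failure
thirds (_:_:x:xs) = x : thirds xs

-- thirds [1..10]       == [3, 6, 9]
-- head (thirds [1..])  == 3, returned without traversing the list
```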
Now consider what happens if we implement the same function using tail recursion ..
thirds_tc :: [Int] -> [Int] -> [Int]
thirds_tc acc [] = reverse acc
thirds_tc acc [_] = reverse acc
thirds_tc acc [_, _] = reverse acc
thirds_tc acc (_:_:x:xs) = thirds_tc (x:acc) xs
Nice and tail recursive .. but to get even the first element the function takes O(n) time, since it must traverse the whole list and reverse the accumulator before returning anything. Try

head $ thirds_tc [] [1..10000000]

and compare the difference in time with the earlier implementation.
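The two implementations should at least agree on finite inputs. A self-contained check, with both definitions reproduced and the length-2 base cases added so the patterns are exhaustive:

```haskell
-- Lazy version: produces elements incrementally.
thirds :: [Int] -> [Int]
thirds []         = []
thirds [_]        = []
thirds [_, _]     = []
thirds (_:_:x:xs) = x : thirds xs

-- Tail-recursive version: accumulates, then reverses at the end.
thirds_tc :: [Int] -> [Int] -> [Int]
thirds_tc acc []         = reverse acc
thirds_tc acc [_]        = reverse acc
thirds_tc acc [_, _]     = reverse acc
thirds_tc acc (_:_:x:xs) = thirds_tc (x:acc) xs

-- thirds_tc [] [1..10] == thirds [1..10] == [3, 6, 9]
```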
So, as we find, tail recursion is not the be-all and end-all of optimization, even in non-strict languages like Haskell. Use combinators like foldl' or foldr depending on your application requirements. Combinators abstract away many of the language semantics and can make lots of high level optimizations that may not be obvious while programming with bare recursion.
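One reason foldr keeps earning its place: because it nests to the right, it can produce output lazily and therefore works on infinite input, which no variant of foldl can. A sketch (mapR is my own name for illustration):

```haskell
-- map written with foldr: each cons cell of the result is produced
-- lazily, so a consumer can take a prefix of an infinite result.
mapR :: (a -> b) -> [a] -> [b]
mapR f = foldr (\x acc -> f x : acc) []

-- take 3 (mapR (*2) [1..]) == [2, 4, 6]
```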
2 comments:
Try this:
sumList acc [] = acc
sumList acc (x:xs) = (sumList $! acc+x) xs
Thank you. Nice observation. I'll actually strengthen your conclusion to say that the notion of proper tail recursion is completely irrelevant to Haskell, here:
http://muaddibspace.blogspot.com/2008/08/tail-call-optimization-doesnt-exist-in.html