Monday, January 26, 2009

To Tail Recurse or Not - Part 2 (A follow up for Haskell)

In an earlier post, I talked about tail recursion and tail call optimization, and we saw that it really takes hard benchmarking before deciding to transform a body recursive call into a tail recursive one. That entire discussion assumed strict evaluation, where the arguments to a function are evaluated completely before the function is applied. Languages like Haskell offer laziness as the default order of evaluation. Under this non-strict evaluation, the arguments to a function are not evaluated unless they are actually used in the evaluation of the function body. In this post, we will see some of the non-obvious impacts that lazy evaluation can have on tail recursive functions. And we conclude by validating an earlier suggestion I made about preferring combinators to bare recursion.

Here are the implementations of foldr and foldl in Haskell ..

foldr f z []     = z                  
foldr f z (x:xs) = f x (foldr f z xs)  

foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs



foldl is tail recursive, while foldr is not. But throw Haskell's non-strict evaluation strategy into the mix and we find some non-obvious consequences of using the supposedly optimized foldl instead of foldr.

In the case of foldl, Haskell defers the evaluation of the accumulated expression till the end of the list is reached, so the accumulator builds up as a chain of unevaluated thunks. Forcing that chain leads to a stack overflow if the final expression is big enough. Hence foldr, despite not being tail recursive, is a better option in many cases. Haskell also offers a strict version of foldl (foldl', exported by Data.List) that forces the evaluation of the accumulator before making the recursive call.
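To see why, trace the evaluation by hand. With foldl, nothing forces the accumulator till the whole list has been consumed ..

foldl (+) 0 [1,2,3]
= foldl (+) (0+1) [2,3]
= foldl (+) ((0+1)+2) [3]
= foldl (+) (((0+1)+2)+3) []
= ((0+1)+2)+3
-- one suspended (+) per element; forcing this chain at the end
-- needs stack space proportional to the length of the list

foldr, on the other hand, hands control over to f right away, so a lazy f can short-circuit. For example, foldr (\x r -> x == 3 || r) False [1..] returns True even on the infinite list, because (||) never demands the recursive result once it sees True.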

Here is an example of summing over a list that uses accumulator based summing and tail recursion ..

sumList acc [] = acc
sumList acc (x:xs) = sumList (acc+x) xs


When I try ..

Main> print . sumList 0 . take 1000000 $ [1..]
*** Exception: stack overflow


It's tail recursive, but lazy evaluation never forces the accumulator; the additions keep piling up as thunks till the end of the list is reached, and forcing them ultimately results in a stack overflow.

Instead try the foldl' combinator from Data.List, which evaluates the accumulator at each step ..

import Data.List (foldl')

sumList' xs = foldl' (+) 0 xs


and you will be happy !!

Let us take a look at how exactly Haskell evaluates an expression lazily. Instead of evaluating a value directly, Haskell builds a *thunk* that abstracts the computation of the expression. A thunk contains the instructions to compute the value, and they run only when the evaluation is forced. Let us consider the following snippet, which picks out every third element of the input list ..

thirds []         = []
thirds [_]        = []
thirds [_,_]      = []   -- without this equation the match is partial
thirds (_:_:x:xs) = x : thirds xs


Consider the interesting recursive case above. Haskell produces a thunk that contains a list cons cell. The cons cell holds the element value (x) and another, yet unevaluated, thunk for the rest of the list. thirds is lazy and consumes only that part of the input which is required to produce the result. If we ask for the first element of the result list, it comes back instantly, without the rest of the list being evaluated. Hence if we compute

head $ thirds [1..10000000]


we get the first element 3 in constant time. Another important advantage of lazy evaluation is that Haskell can handle infinite data structures seamlessly. In the above example, we can change the invocation to

head $ thirds [1..]


and will get back 3 instantly and in constant time.
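GHCi's :sprint command, which prints a value without forcing it, makes these thunks visible. A sample session might look like this (with thirds in scope) ..

*Main> let xs = thirds [1..10] :: [Int]
*Main> :sprint xs
xs = _
*Main> head xs
3
*Main> :sprint xs
xs = 3 : _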

Now consider what happens if we implement the same function using tail recursion ..

thirds_tc :: [Int] -> [Int] -> [Int]
thirds_tc acc []         = reverse acc
thirds_tc acc [_]        = reverse acc
thirds_tc acc [_,_]      = reverse acc
thirds_tc acc (_:_:x:xs) = thirds_tc (x:acc) xs


Nice and tail recursive .. but now the function takes O(n) time to produce even the first element, since the entire input has to be consumed before the accumulator is reversed. Try ..

head $ thirds_tc [] [1..10000000]


and compare the difference in time with the earlier implementation.

So, as we find, tail recursion is not the be-all and end-all of optimization, even in a non-strict language like Haskell. Use combinators like foldl' or foldr depending on your application requirements. Combinators abstract away many of the language's evaluation semantics and can apply high level optimizations that may not be obvious when programming with bare recursion.

Thursday, January 22, 2009

Seeking Scala equivalent for this Haskell snippet

I was going through an old post of Tom Moertel's, where he defines a cute function in Haskell, clusterBy, that clusters an aggregate based on a signature function. Here are some examples of its usage from his post ..

*Main> let antwords = words "the tan ant gets some fat"

*Main> clusterBy length antwords
[["the","tan","ant","fat"],["gets","some"]]

*Main> clusterBy head antwords
[["ant"],["fat"],["gets"],["some"],["the","tan"]]

*Main> clusterBy last antwords
[["the","some"],["tan"],["gets"],["ant","fat"]]


The Haskell implementation uses Arrows and is super elegant and concise .. (f &&& return) maps each element x to the pair (f x, [x]), with return taken from the list monad, and fromListWith (++) then groups the pairs by the signature key ..

import Control.Arrow ((&&&))
import qualified Data.Map as M

clusterBy :: Ord b => (a -> b) -> [a] -> [[a]]
clusterBy f = M.elems . M.map reverse . M.fromListWith (++) . map (f &&& return)


I was trying to implement the same in Scala, but could only proceed to the extent of having separate functions clusterByLength, clusterByHead, clusterByLast etc. ..

import scala.collection.mutable.HashMap

val l: List[String] = List("the tan ant gets some fat".split(" "): _*)

def clusterByLength = {
  val m = new HashMap[Int, List[String]]
  l.map {x => (x.length, x)}.foreach {v =>
    m.put(v._1, v._2 :: m.get(v._1).getOrElse(List[String]()))
  }
  m.keys.toList.sort((e1, e2) => e1 < e2).map {x => m.get(x).get.reverse}
}

def clusterByHead = {
  val m = new HashMap[String, List[String]]
  l.map {x => (x.substring(0, 1), x)}.foreach {v =>
    m.put(v._1, v._2 :: m.get(v._1).getOrElse(List[String]()))
  }
  m.keys.toList.sort((e1, e2) => e1 < e2).map {x => m.get(x).get.reverse}
}

def clusterByLast = {
  val m = new HashMap[String, List[String]]
  l.map {x => (x.substring(x.length - 1), x)}.foreach {v =>
    m.put(v._1, v._2 :: m.get(v._1).getOrElse(List[String]()))
  }
  m.keys.toList.sort((e1, e2) => e1 < e2).map {x => m.get(x).get.reverse}
}


Is there any way that I can raise the level of abstraction for the above implementation, and have a Scala equivalent of the Haskell clusterBy ?
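For the record, here is one direction that might work - a sketch against the groupBy method of the later Scala 2.8 collections (so not a 2.7 solution), assuming an Ordering on the signature key to mirror the sorted Data.Map ..

def clusterBy[A, B](f: A => B)(xs: List[A])(implicit ord: Ordering[B]): List[List[A]] =
  xs.groupBy(f).toList.sortBy(_._1).map(_._2)

clusterBy((s: String) => s.length)(l)
// List(List(the, tan, ant, fat), List(gets, some))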

Sunday, January 18, 2009

Fizzbuzz and the State Monad

When I want to learn something I tend to look for examples that articulate the theme of the problem directly, instead of making it accidentally complex. Simple examples make things easier, and I was searching for such an example for understanding the State monad in Haskell.

The State monad is useful for modeling computations in Haskell that maintain state, the bread and butter of programming in an imperative language. Haskell tutorials for beginners teach us to pass the old value of the state with each call of the function, and then extract the new state from the result. Commonly referred to as "threading the state through the computation", this strategy works, but is tedious .. and Haskellers consider it boilerplate.

The State monad raises the level of abstraction and decouples the state management from the computation itself. I am not going to write yet another tutorial on monads. Instead I would like to share the example problem which made me understand the State monad quite beautifully.

It is fizzbuzz !! Fizzbuzz has been solved in a multitude of languages across various paradigms. The basic problem that fizzbuzz solves is simple enough to be moulded into demonstrating the various idioms of a programming language. Not all of those solutions are efficient or elegant, but they serve well to articulate the idiom at hand - you don't have to stretch yourself to comprehend the problem being solved, and can focus on the solution right from the word go. Haskell, too, has quite a few solutions to the fizzbuzz problem, but I could not find one that makes use of the State monad. The solution below generates the whole fizzbuzz sequence as a side-effect using the State monad. It is not the most elegant of solutions, and does not work for infinite sequences, but I found it simple and useful in understanding the State monad in Haskell ..

Without further ado ..

import Control.Monad.State

-- the entire fizzbuzz list is prepared as a side-effect:
-- collect builds up the list as the State;
-- mapM_ runs collect over the input, discarding the () values

run_fizzbuzz :: [Integer] -> [[Char]]
run_fizzbuzz list =
    state where (_, state) = runState (mapM_ collect list) []

collect :: Integer -> State [[Char]] ()

-- gets the old state and conses the new value onto it
collect n = do 
    f <- get
    put ((fizzbuzz n):f)

fizzbuzz :: Integer -> [Char]
fizzbuzz n 
    | n `rem` 3 == 0 && n `rem` 5 == 0   = "fizzbuzz"
    | n `rem` 3 == 0                     = "fizz"
    | n `rem` 5 == 0                     = "buzz"
    | otherwise                          = show n

main :: IO ()
main = do putStr $ show $ reverse $ run_fizzbuzz [1..100]
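As an aside, Control.Monad.State also exports execState, which returns just the final state and would let us drop the tuple destructuring in run_fizzbuzz ..

run_fizzbuzz' :: [Integer] -> [[Char]]
run_fizzbuzz' list = execState (mapM_ collect list) []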

Generic Repository and DDD - Revisited

Greg Young talks about the generic repository pattern and how to reduce the architectural seam of the contract between the domain layer and the persistence layer. The Repository is the domain layer's contract with the persistence layer - hence it makes sense to keep that contract as close to the domain as possible. Instead of a contract as opaque as Repository.FindAllMatching(QueryObject o), it is always recommended that the domain layer sees something as self-revealing as CustomerRepository.getCustomerByName(String name), which explicitly spells out the participating entities of the domain. +1 on all his suggestions.

However, he suggests using composition, instead of inheritance, to encourage reuse along with encapsulation of the implementation details within the repository itself .. something like the following (Java-ized) ..

public class CustomerRepository implements ICustomerRepository {
  private Repository<Customer> internalGenericRepository;

  public List<Customer> getCustomersWithFirstNameOf(String name) {
    return internalGenericRepository.fetchByQueryObject(
      new CustomerFirstNameOfQuery(name)); // could be hql or whatever
  }
}



Quite some time ago, I wrote a series of posts on DDD, JPA and how to use generic repositories as an implementation artifact. I suggested using the Bridge pattern to allow independent evolution of the interface and the implementation hierarchies. The interface side of the bridge models the domain aspect of the repository and ultimately terminates at the contracts that the domain layer uses. The implementation side of the bridge allows for multiple implementations of the generic repository, e.g. JPA, native Hibernate or even, with some tweaking, other storage technologies like CouchDB or the file system. After all, the premise of the Repository is to offer a transparent storage and retrieval engine, so that the domain layer always feels it is operating on an in-memory collection.

// root of the repository interface
public interface IRepository<T> {
  List<T> read(String query, Object[] params);
}

public class Repository<T> implements IRepository<T> {

  private RepositoryImpl repositoryImpl;

  public List<T> read(String query, Object[] params) {
    return repositoryImpl.read(query, params);
  }

  //..
}



Base class of the implementation side of the Bridge ..

public abstract class RepositoryImpl {
  public abstract <T> List<T> read(String query, Object[] params);
}


One concrete implementation using JPA ..

public class JpaRepository extends RepositoryImpl {

  // to be injected through DI in Spring
  private EntityManagerFactory factory;

  @Override
  public <T> List<T> read(String query, Object[] params) {
    // .. JPA based implementation
  }
}


Another implementation using Hibernate. We can have similar implementations for a file system based repository as well ..

public class HibernateRepository extends RepositoryImpl {
  @Override
  public <T> List<T> read(String query, Object[] params) {
    // .. hibernate based implementation
  }
}


Domain contract for the repository of the entity Restaurant. It is neither opaque nor narrow; it uses the Ubiquitous Language and is self-revealing to the domain user ..

public interface IRestaurantRepository {
  List<Restaurant> restaurantsByName(final String name);
  //..
}


A concrete implementation of the above interface, expressed in terms of the implementation artifacts of the Bridge pattern. At the same time, the implementation is not hardwired to any specific concrete repository engine (e.g. JPA or the file system); that wiring is done at runtime using dependency injection.

public class RestaurantRepository extends Repository<Restaurant>
  implements IRestaurantRepository {

  public List<Restaurant> restaurantsByEntreeName(String entreeName) {
    Object[] params = new Object[1];
    params[0] = entreeName;
    return read(
      "select r from Restaurant r where r.entrees.name like ?1",
      params);
  }
  // .. other methods implemented
}


One argument could be that the query string passed to the read() method is dependent on the specific engine used. But it can very easily be abstracted using a factory that returns the appropriate metadata required for the query (e.g. named queries for JPA), as sketched below.
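A minimal sketch of what such a factory might look like (the names here are hypothetical, not from the original series) ..

// one implementation per persistence engine
public interface QueryFactory {
  String restaurantsByEntreeNameQuery();
}

public class JpaQueryFactory implements QueryFactory {
  public String restaurantsByEntreeNameQuery() {
    // for JPA this could equally be a named query identifier
    return "select r from Restaurant r where r.entrees.name like ?1";
  }
}

Inject the appropriate QueryFactory along with the repository engine, and the domain layer never sees engine specific query syntax.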

How does this compare with Greg Young's solution ?

Some of the niceties of the above Bridge based solution are ..

  • The architecture seam exposed to the domain layer is NOT opaque or narrow. The domain layer works with IRestaurantRepository, which is intention revealing enough. The actual implementation is injected using Dependency Injection.

  • The specific implementation engine is abstracted away and once again injected using DI. So, in the event of using alternative repository engines, the domain layer is NOT impacted.

  • Greg Young suggests using composition instead of inheritance. The above design also uses composition to encapsulate the implementation within the abstract base class Repository.


However, in case you do not want the complexity or the flexibility of switching implementations, one leg of the Bridge can be removed and the design simplified.

Sunday, January 11, 2009

Subsuming the Template Method pattern

It all started with this Kevin Rutherford post, where he talks about his unnatural urge to use the Template Method pattern. He mentions that whenever he sees a duplicate algorithm, his natural tendency is to push up the skeleton into a superclass. This way he creates too many inheritance hierarchies, which prove quite brittle in the face of changing algorithms.

The Template Method pattern mandates that the *template method* model the invariant part of the algorithm, while the concrete classes implement the variabilities that are called back from the template method. While the pattern encourages implementation inheritance, which may lead to unnecessary coupling and brittle hierarchies, it has been used quite effectively in various frameworks developed over time.

As the GoF book mentions, the template method pattern leads to an inverted control structure, sometimes referred to as the Hollywood Principle, and finds its best use in frameworks and libraries to decouple client extensions / implementations from the reusable, invariant part of the algorithm.

Outside the OO world, the template method pattern has been put to great use in many programming languages and frameworks. In many of those applications, as with many other design patterns, it has been subsumed within the language itself. Hence its usage has transcended into the idioms and best practices of the language, without much ceremony or fanfare.

Consider this code snippet in Scala ..

def doForAll[A, B](l: List[A], f: A => B): List[B] = l match {
  case x :: xs => f(x) :: doForAll(xs, f)
  case Nil => Nil
}


The code processes the input list and invokes the user supplied function (f) on every element. Here the invariant part of the algorithm is the traversal of the list, while the hook is provided by the user supplied function. This follows the template method pattern, and is more popularly known as "map" in functional languages ..

List(1, 2, 3, 4).map {x => x * 2}


In languages that support higher order functions, template methods are ubiquitous, and used as a common idiom to promote framework style reusability. Languages like Haskell allow you to do more advanced stuff in the base abstraction through function composition and monadic sequencing, thereby making the invariant part more robust in the face of implementation variabilities. Even with IoC frameworks like Spring, template methods promote implementing monadic combinators (to the extent you can have in OO without HOF) that result in autowiring and sequencing operations.

Higher Order Template Methods through Message Passing

The Template Method pattern has two distinct aspects that combine to form the wholeness of the pattern - the commonality of the algorithm described by the base class, and the concrete variabilities injected by the derived implementations. Replace class with module and we have some great examples in the Erlang / OTP framework. Here is a fun example from Joe Armstrong's Programming Erlang ..

-module(server5).
-export([start/0, rpc/2]).

start() -> spawn(fun() -> wait() end).

wait() ->
    receive
        {become, F} -> F()
    end.

rpc(Pid, Q) ->
    Pid ! {self(), Q},
    receive
        {Pid, Reply} -> Reply
    end.


Erlang implements the variability through message passing, dynamic typing and dynamic hot swapping of code. The server skeleton above really does nothing but wait for an impersonation request. On receiving the message {become, F}, the server starts behaving as F. F models the variability, while the above module is the invariant server logic. Feed it a factorial function and the server starts behaving as a factorial server. Isn't this the template method on steroids ? But the Erlang people never call it by this name, because it is one of the most common idioms of Erlang programming and needs no formal announcement in the name of a "design pattern". Of course, as I mentioned before, this does not invalidate the necessity of design patterns - the only difference with OO is that in some languages the pattern gets melded into the language's syntax and idioms.
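To see the impersonation at work, here is a companion module along the lines of the factorial example from the book (a sketch reproduced from memory, so treat the details as illustrative) ..

-module(fac_server).
-export([loop/0]).

loop() ->
    receive
        {From, {fac, N}} ->
            From ! {self(), fac(N)},
            loop()
    end.

fac(0) -> 1;
fac(N) -> N * fac(N-1).

And at the Erlang shell ..

1> Pid = server5:start().
2> Pid ! {become, fun fac_server:loop/0}.
3> server5:rpc(Pid, {fac, 30}).
265252859812191058636308480000000

Note that the become message is sent raw with !, since wait/0 pattern matches on {become, F} directly; everything after that goes through rpc/2.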

Sunday, January 04, 2009

Higher Order abstractions in Scala with Type Constructor Polymorphism

Abstractions at a higher level through type constructor polymorphism. Good type systems are expressive enough to conceal the implementation complexity and expose *only* what matters to the developer. People often cringe at the complexity of Scala's type system and how it serves as a barrier to entry into mainstream programming. As Michael Feathers recently noted on Twitter, the unfortunate fact is that people often jump at the esoteric parts of a language before looking at the simpler subset, which they will be using 90% of the time. And I think Scala has that sweet spot, where you need not mess around too much with variances, implicits and existentials, and can yet come up with a nice, concise and functional codebase.

In this post, I discuss the intimidating phrase "Type Constructor Polymorphism" from the title through a series of code snippets, ranging from toys to some real-world stuff. The idea, once again, is not to evangelize type theory intricacies, but to share some experiences of how this feature of Scala's type system helps you write idiomatic code, while staying away from the complexities of its underlying implementation.

Jump on ..

We have a list of Option[String] values that we need to abstract over and compute some value from. Say, to keep the example simple, we will calculate the sum of the lengths of all the strings present ..

val employees: List[Option[String]] =
  List(Some("dave"), None, Some("john"), Some("sam"))

val n: Int =
  employees.map { x =>
    x match {
      case Some(name) => name.length
      case None => 0
    }
  }.elements.reduceLeft[Int](_ + _)
println(n)


Let us take another problem that needs to abstract over a different List structure, a List of Lists of Strings, and compute the same result as before, i.e. the sum of the lengths of all the strings in the collection ..

val brs: List[List[String]] =
  List(List("dave", "john", "sam"), List("peter", "robin", "david"), List("pras", "srim"))

val m: Int =
  brs.flatMap {x => x.map {_.length}}
     .elements.reduceLeft[Int](_ + _)
println(m)


Do you see any commonality in the solution structure in the above snippets ? After all, the problem space has a common generic structure ..

  1. we have a List with some String values abstracted in different forms

  2. need to iterate over the list

  3. do some stuff with elements in the list and compute an Int value


Unfortunately the actual solution structures look quite different, and each has to do a lot of digging into the underlying representation of the collection itself. And this is because we cannot abstract over the type constructor (the List in this case) that takes another type constructor as an argument (Option[String] in the first case and List[String] in the second).

Enter type constructor polymorphism.

Sounds intimidating ? Maybe, but ask the Haskellers .. they have been using typeclasses successfully for ages, in comprehensions, parser combinators and embedded DSLs, programming at an altogether different level of abstraction.

Scala has supported type constructor polymorphism since 2.5, and the details are discussed in a recent paper by Adriaan Moors et al in OOPSLA 2008.

Here is a snippet of the Scala code that works seamlessly for both of the above cases ..

val l: List[Int] = employees.flatMapTo[Int, List]{_.map{_.length}}
val sum: Int = l.elements.reduceLeft[Int](_ + _)
println(sum)


The above code abstracts over the List through higher order parametric polymorphism, i.e. independently of whether the List's element type is an Option[] or another List[]. Incidentally both of them (List and Option) are monads, and flatMapTo abstracts a monadic computation, hiding all the details of type constructor polymorphism from the developer.
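As an aside, here is a sketch against the later Scala 2.8 collections (not the 2.7 API used in this post), where the implicit Option-to-Iterable conversion lets plain flatMap and sum do the same job ..

val sum: Int = employees.flatMap(_.map(_.length)).sum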

Now here is a real life example (elided for simplicity) ..

Here are the simple domain models for Customer, Instrument and Trade, used for modeling a use case where a Customer can order for the Trade of an Instrument in an exchange.

case class Customer(id: Int, name: String)
case object nomura extends Customer(1, "NOMURA")
case object meryll extends Customer(2, "MERYLL")
case object lehman extends Customer(3, "LEHMAN")

case class Instrument(id: Int)
case object ibm extends Instrument(1)
case object sun extends Instrument(2)
case object google extends Instrument(3)

case class Trade(ref: Int, ins: Instrument, qty: Int, price: Int)


And we fetch the following list through a query from the database. It is a List of tuples, where each tuple consists of a Customer and, optionally, a Trade executed against the order placed at the exchange (None when no trade took place). And here is the snippet that computes the sum total of the values of all trades executed in the day for all customers.

val trades: List[(Customer, Option[Trade])] =
  List((nomura, Some(Trade(100, ibm, 20, 12))),
       (meryll, None), (lehman, Some(Trade(200, google, 10, 10))))

val ts: List[Option[Trade]] = trades.map(_._2)
val t: List[Int] = ts.flatMapTo[Int, List]{_.map{x => x.qty * x.price}}
val value = t.elements.reduceLeft[Int](_ + _)
println(value)


Really not much different from the simple toy examples above - is it ? The structure of the solution remains the same irrespective of the complexity of the data stored within the collections. The iteration is done at a much higher level of abstraction, independent of the types stored within the container. And as I mentioned above, flatMapTo is the secret sauce of the solution structure; it abstracts the building of the new container, hiding the inner details. To get more into the innards of flatMapTo and similar higher order abstractions, including the new form of Iterable[+T], have a look at the OOPSLA paper referenced above.

Postscript: In all the snippets above, I have been explicit about all type signatures, just for the sake of easier understanding of my passionate reader. In reality, almost all of these will be inferred by Scala.
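As an example, the very first snippet can shrink to the following with inference doing the work (a sketch; the explicit match becomes a case sequence) ..

val n = employees.map {
  case Some(name) => name.length
  case None => 0
}.reduceLeft(_ + _)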