This is a good thing, IMO, because it pushes the envelope of what you can do with the language. Being a multi-paradigm language, Scala is a wonderful mix. It has a very clean object model powered by its type system that helps programmers design scalable component abstractions. Scala also has rich support for functional programming, which, though not as clean as Haskell's, complements its OO capabilities very well. From this perspective Scala offers equal opportunity to developers coming from either the OO or the FP paradigm.
In this post I will discuss one issue that can be solved elegantly in Scala using both of its paradigms. The OO version of the solution uses the elegance of mixins, abstract vals and the Cake pattern, while the functional version uses currying and partial application.
Interface & Implementation
One of the recommended practices that we as software developers follow while designing domain models is to separate the interface from the implementation. I am calling them interface and implementation; replace them with the terminology of your favorite language - contract, protocol, class, type, etc. But you know what I mean to say - distinguish between the nature of coupling of the generic and the specific parts of your abstraction.
Many people call this Dependency Injection, where the actual implementation dependency is injected into your abstraction at runtime, resulting in reduced coupling. BTW I am not talking about DI frameworks, which can feel bolted on, both in languages with a sane type system and in those without one.
The OO way in Scala
Let's first consider how we can do dependency injection in Scala using the power of its object system. It has been covered in gory detail by Jonas Bonér in one of his Real World Scala blog posts. The example that I give here follows the same design that he used ..
Consider two abstractions that we use in our domain model for getting data out to the external world ..
// a repository for interacting with the underlying data store
trait TradeRepository {
  def fetch(refNo: String): Trade
  def write(t: Trade): Unit
}

// a domain service
trait TradeService {
  def fetchTrade(refNo: String): Trade
  def writeTrade(trade: Trade): Unit
}
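A quick aside: the post never shows the Trade class itself. For the snippets to compile, something minimal like the following would do - the fields are purely illustrative assumptions ..

// a minimal Trade model assumed throughout the examples
case class Trade(refNo: String, instrument: String, quantity: Int)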
And now the components for each of the above abstractions, which contain the default implementations ..
trait TradeRepositoryComponent {
  val tradeRepo: TradeRepository

  class TradeRepositoryImpl extends TradeRepository {
    def fetch(refNo: String): Trade = //..
    def write(t: Trade): Unit = //..
  }
}

trait TradeServiceComponent { this: TradeRepositoryComponent => // self type annotation that indicates the dependency
  val tradeService: TradeService

  class TradeServiceImpl extends TradeService {
    def fetchTrade(refNo: String) = tradeRepo.fetch(refNo)
    def writeTrade(trade: Trade) = tradeRepo.write(trade)
  }
}
Note how the self-type annotation is used in TradeServiceComponent to indicate a dependency on TradeRepositoryComponent. But we are still talking in terms of traits, without committing our final assembly to any specific object implementations. The abstract vals tradeRepo and tradeService have still not been materialized in terms of any concrete implementations. Thus we delay coupling to any implementation till the time we absolutely need it. And that is when we make the final assembly ..

// we are wiring up the components
object TradeServiceAssembly extends TradeRepositoryComponent with TradeServiceComponent {
  val tradeRepo = new TradeRepositoryImpl       // impl
  val tradeService = new TradeServiceImpl       // impl
}
Now we have the final object which encapsulates all implementation details and which we can use thus ..
// usage
import TradeServiceAssembly._
val t = tradeService.fetchTrade("r-123")
tradeService.writeTrade(t)
If we want to use different implementations (e.g. mocks for testing), we can either create another assembly module like the one above and supply different implementation classes to the abstract vals, or directly mix in the traits while we instantiate our assembly object ..
val assembly = new TradeRepositoryComponent with TradeServiceComponent {
  val tradeRepo = new TradeRepositoryMock
  val tradeService = new TradeServiceMock
}
import assembly._
val t = tradeService.fetchTrade("r-123")
tradeService.writeTrade(t)
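In case you are wondering what the mocks might look like, here is a minimal sketch - the in-memory Map backing is an assumption for illustration, not something from the original assembly ..

// a hypothetical in-memory mock of the repository, handy for tests
class TradeRepositoryMock extends TradeRepository {
  private var trades = Map.empty[String, Trade]
  def fetch(refNo: String): Trade = trades(refNo)
  def write(t: Trade): Unit = trades += (t.refNo -> t)
}

// a hypothetical mock service that delegates to the mock repository
class TradeServiceMock extends TradeService {
  private val repo = new TradeRepositoryMock
  def fetchTrade(refNo: String): Trade = repo.fetch(refNo)
  def writeTrade(trade: Trade): Unit = repo.write(trade)
}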
So that was the implementation of DI using *only* the type system of Scala. All dependencies are indicated through self-type annotations and realized through concrete implementations specified in abstract vals, right down to the object that you create as the final assembly. It shows the power of Scala's object model implemented over its type system.
The FP way in Scala
Now let's try to look at the same idiom through the functional lens of Scala.
We will still have the repository abstractions, but we implement the service contracts directly as functions.
trait TradeRepository {
  def fetch(refNo: String): Trade
  def update(trade: Trade): Trade
  def write(trade: Trade): Unit
}
// service functions
trait TradeService {
  val fetchTrade: TradeRepository => String => Trade = { repo => refNo => repo.fetch(refNo) }
  val updateTrade: TradeRepository => Trade => Trade = { repo => trade =>
    //..
  }
  val writeTrade: TradeRepository => Trade => Unit = { repo => trade => repo.write(trade) }
}
Now let's say we would like to work with a Redis based implementation of our TradeRepository. So somewhere we need to indicate the actual TradeRepository implementation class that the service functions need to use. We can define partial applications of each of the above functions for a Redis based repository and put them in a separate module ..

object TradeServiceRedisContext extends TradeService {
  val fetchTrade_c = fetchTrade(new TradeRepositoryRedis)
  val updateTrade_c = updateTrade(new TradeRepositoryRedis)
  val writeTrade_c = writeTrade(new TradeRepositoryRedis)
}
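TradeRepositoryRedis is assumed but not shown in the post; a bare skeleton could look like the following, with the actual Redis client interaction elided ..

// hypothetical skeleton of the Redis backed repository used above;
// the real Redis client calls are deliberately left out
class TradeRepositoryRedis extends TradeRepository {
  def fetch(refNo: String): Trade = sys.error("TODO: look up the trade by refNo in Redis")
  def update(trade: Trade): Trade = sys.error("TODO: update the stored trade in Redis")
  def write(trade: Trade): Unit = sys.error("TODO: persist the trade to Redis")
}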
So fetchTrade_c is now a function of type String => Trade - we have successfully abstracted away the knowledge of the TradeRepository implementation class through currying of the first parameter. These modules are somewhat like a Spring ApplicationContext that can be swapped in and out and replaced with alternate implementations for other kinds of underlying storage. As with the OO implementation, you can plug in a mock implementation for testing.

We can now continue to use the curried versions of the service functions, completely oblivious of the fact that a Redis based TradeRepository implementation has been sucked into them ..

import TradeServiceRedisContext._
val t = fetchTrade_c("ref-123")
writeTrade_c(t)
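Swapping in an alternate backend is then just a matter of defining another such context module. Here is a sketch wired to a hypothetical in-memory repository - TradeRepositoryInMemory and the context object are illustrative names, not from the post ..

// a hypothetical in-memory repository, handy for tests
class TradeRepositoryInMemory extends TradeRepository {
  private var trades = Map.empty[String, Trade]
  def fetch(refNo: String): Trade = trades(refNo)
  def update(trade: Trade): Trade = { trades += (trade.refNo -> trade); trade }
  def write(trade: Trade): Unit = trades += (trade.refNo -> trade)
}

// an alternate context module, analogous to swapping a Spring ApplicationContext
object TradeServiceInMemoryContext extends TradeService {
  private val repo = new TradeRepositoryInMemory   // one shared instance across the functions
  val fetchTrade_c = fetchTrade(repo)
  val updateTrade_c = updateTrade(repo)
  val writeTrade_c = writeTrade(repo)
}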
One of the potential advantages that you get with functional abstractions is the power of composability, which is much better than what you get with objects. FP defines many abstraction models that compose in the mathematical sense of the term. If you can design your domain abstractions in compliance with these structures, then you can also get your models to compose as beautifully.
Instead of currying individual functions as above, we can curry a composed function ..
val withTrade = for {
  t <- fetchTrade
  u <- updateTrade
} yield (t map u)
withTrade is now a function of type TradeRepository => String => Trade. In order to make this work, you will need scalaz, which defines higher order abstractions that make operations like bind (flatMap) available to a much larger class of abstractions than those provided by the Scala standard library. In our case we are using the function itself as a monad. We can now inject the Redis based implementation directly into this composition ..

val withTradeRedis = withTrade(new TradeRepositoryRedis)
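If you would rather not pull in scalaz, a rough hand-rolled equivalent of the same composition is possible by threading the repository through explicitly - a sketch, assuming the service functions are in scope ..

// hand-rolled version of the composition, without scalaz:
// fetch the trade for a refNo and run the update on it, against one repository
val withTradePlain: TradeRepository => String => Trade =
  repo => refNo => updateTrade(repo)(fetchTrade(repo)(refNo))

val withTradeRedisPlain = withTradePlain(new TradeRepositoryRedis)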
I find both variants quite elegant to implement in Scala. I use the version that goes better with my overall application design. If it's primarily an OO model, I go with the Cake based serving; when I am doing FP and scalaz, I use the functional approach. One advantage that I find with the functional approach is increased composability, since free standing functions only need to agree on types in order to compose, while with objects you need to cross over another layer of indirection, which may not be that easy in some cases.
14 comments:
I think we can get even better composability (in both approaches) if the write method also returns a Trade instead of Unit.
This is especially useful because you can get a Trade object with a "fresh" identifier and directly call another function on that object.
This is why I came to consider that a method returning Unit should get a warning!
Eric -
Agreed. The main focus of the article was not composability. I wanted to discuss the virtues of Scala as a multi-paradigm language - OO and functional. And how a typical problem can be solved using either OO or FP in the language.
But you are correct on the Unit part. Here I was considering "write" only as a side-effect. In case you missed it, I discuss composability of domain models in another post which I wrote some time back .. http://debasishg.blogspot.com/2010/12/composable-domain-models-using-scalaz.html
This is a nice comparison of the two approaches, Debasish - thanks for posting it :-)
Very nice. Last September we had Jason Zaugg present scalaz at New York Scala Enthusiasts. I had not taken the leap far enough into FP to start using scalaz, but now I see a very practical reason to do so.
Thanks for the great write up, I've been exploring Scala composition patterns for the last few weeks and I'm excited by how powerful the language is.
In addition to the patterns you describe I've found that the lift web framework is using the service locator pattern as an alternative to dependency injection. The service locator pattern allows for dynamic interchange of implementations at runtime.
The lift implementation is described here:
http://simply.liftweb.net/index-8.2.html
Hi Justin -
In case it helps, I have done some blog posts on scalaz and its practical implications in domain modeling. Feel free to have a look at http://debasishg.blogspot.com/search/label/scalaz ..
Thanks for the great posts (and all the others, especially on scalaz).
However, regarding dependency injection in the OO way, I do not really get what the advantage of the Cake pattern is compared to plain constructor injection.
Why do you have components? Why not just class TradeServiceImpl(tradeRepository: TradeRepository) extends TradeService {..} and then inject the dependency via the constructor like you would do in Java? For the OO style that seems much simpler and more direct, with less code.
The functional approach is interesting. The only thing I don't like is the funny names for the curried functions, with "_c" at the end. I wish it were possible to overwrite/"reuse" the original name:
val fetchTrade = fetchTrade(new TradeRepositoryRedis)
Two points:
a) If you use ordinary constructor based DI in the OO style, then you immediately see the similarity to the FP style:
class TradeServiceImpl(val tradeRepository: TradeRepository) { .. }

val redisRep = new RedisTradeRepositoryImpl(....)
val fetchTrade_c = new TradeServiceImpl(redisRep).fetchTrade _
val updateTrade_c = new TradeServiceImpl(redisRep).updateTrade _
The similarity would be even clearer if Scala supported full, straightforward currying.
b) A service layer seldom has just three methods/functions like in the example. It generally has tens or even hundreds, and then the FP approach used here becomes very typing-intensive (and hard to maintain). Depending on the FP language, there are different nice solutions to that.
Generally I can only recommend that anyone who wants to do "more functional" Scala take a deeper look at Lisp (Clojure), Haskell or an ML dialect (e.g. on the JVM, the Yeti language). When you are restricted to purely functional programming, you just learn better where it is good and how things can be done. And knowing this will help a lot in using the powerful features of Scala much better, instead of getting lost in them.
I found this post very interesting, especially the Dependency Injection part. It's very elegant and useful, but a container is still needed IMO, because with that approach you won't be able to inject dependencies at load time, with a configuration file, for example.
Thanks Debasish, very informative as usual.
@onof: I personally consider wiring in dependencies at load time (a la Spring XML) extremely harmful. The larger a Spring app grows, the larger the context(s) become, and the longer it takes for the app to load. When there's something in the XML wiring that causes a runtime error, it is a pain to hunt down. On the other hand, with a method like this, or even using a container like Guice, the dependencies are statically known, and therefore proven at compile time. Huge win.
The only potential issue I see with a container-less approach like this is that you don't get lifecycle control over the dependencies.
Thanks for making that comparison
Excellent post
Hello,
I was doing research on exactly the same subject not long ago (see e.g. http://www.warski.org/blog/?p=291) looking for a good way to replace the DI as it is known from Java.
Unfortunately both solutions have problems:
Cake pattern:
* not possible to parametrize my system with a component implementation (in case there are a lot of components, creating two assemblies can lead to lots of code duplication)
* self-types are not inherited; if I extend a component trait, e.g. to configure it (if it provides some abstract vals), the self-types need to be repeated
* no control over initialization order (which can lead to NPEs during assembly construction)
Only functions:
* no auto-wiring - the functions have to be applied to the implementations - as can be seen in your example, you have to provide the repository impl three times
* every function needs to have all dependencies enumerated - instead of specifying the dependencies once
* the dependencies are expressed in the interface, not the implementation
I guess the second approach is better (judging by the list of problems, for example), however it doesn't quite seem to be the "Scala" way.
Do you maybe have some experiences in resolving the above problems?
Regards,
Adam Warski
(Since you are also a CouchDB user).
What do you think - would it be profitable for Scala if CouchDB were implemented in Scala? Erlang is purely functional and Scala has very powerful support for functional programming.