One statement that resonates a lot with my thought is

*"DDD encourages understanding of the domain, but don't implement the models"*. DDD does a great job of encouraging developers to understand the underlying domain model and of ensuring a uniform vocabulary throughout the lifecycle of design and implementation. Design patterns do the same thing: they give you a vocabulary that you can freely exchange with your fellow developers without committing to any particular implementation of the underlying pattern.

On the flip side, trying to implement DDD concepts using standard OO techniques of joined state and behavior often gives you a muddled, mutable model. The model may look rich in the sense that all concepts related to the particular domain abstraction are baked into the class you are modeling. But it also makes the class fragile, since the abstraction becomes more locally focused and loses the global perspective of reusability and composability. As a result, when you try to compose multiple abstractions within the domain service layer, the layer gets polluted with glue code that resolves the impedance mismatch between class boundaries.

So when Dean claims

*"Models should be anemic"*, I think he means to avoid this bundling of state and behavior within the domain object, which gives you a false sense of security about the richness of the model. He encourages the practice of building domain objects that carry only state, while behaviors are modeled as standalone functions.

Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.

— John Carmack (@ID_AA_Carmack) March 31, 2011

One other strawman argument that I come across very frequently is that bundling state and behavior by modeling the latter as methods of the class increases encapsulation. If you still belong to this school of thought, have a look at Scott Meyers' excellent article, which he wrote as early as 2000. He challenges the view that a class is the right level of modularization and argues that more powerful module systems make better containers for your domain behaviors.

As a continuation of my series on functional domain modeling, we continue with the example from the earlier posts and explore the theme that Dean discusses ..

Here's the anemic domain model of the Order abstraction ..

```scala
case class Order(
  orderNo:       String,
  orderDate:     Date,
  customer:      Customer,
  lineItems:     Vector[LineItem],
  shipTo:        ShipTo,
  netOrderValue: Option[BigDecimal] = None,
  status:        OrderStatus        = Placed
)
```
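For reference, the supporting types the aggregate refers to might look like the following sketch. Only the type names come from the `Order` definition above; every field here is my assumption for illustration, not the actual definitions from the repo.

```scala
// Hypothetical sketches of the types Order refers to. The type names
// (Customer, ShipTo, LineItem, OrderStatus, Placed) appear in the Order
// definition; the fields chosen here are assumptions.
case class Customer(custId: String, name: String)
case class ShipTo(street: String, city: String, zip: String)
case class LineItem(itemId: String, quantity: BigDecimal,
                    value: Option[BigDecimal] = None,
                    discount: Option[BigDecimal] = None)

sealed trait OrderStatus
case object Placed    extends OrderStatus
case object Validated extends OrderStatus
```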

In the earlier posts we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model.
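As a quick refresher on the Lens idea from the earlier posts, here's a minimal sketch in plain Scala. The posts use the scalaz `Lens`; this simplified version, with hypothetical `Person`/`Address` types, is only an illustration of the functional-update idea.

```scala
// A Lens pairs a getter with a functional setter: set returns a new
// copy of the record instead of mutating the original in place.
case class Lens[S, A](get: S => A, set: (S, A) => S)

case class Address(city: String)
case class Person(name: String, address: Address)

val addressL = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))

val before = Person("Maria", Address("Pune"))
val after  = addressL.set(before, Address("Kolkata"))
// before is untouched; after carries the new Address
```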

The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only the states that can be mutated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.

Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an `Order`. Note that it returns a `Kleisli`, which essentially gives us composition over monadic functions. So instead of composing `a -> b` and `b -> c`, which we do with normal function composition, we can compose `a -> m b` and `b -> m c`, where `m` is a monad - composition with effects, if you may say so.

```scala
def valueOrder = Kleisli[ProcessingStatus, Order, Order] { order =>
  val o = orderLineItems.set(order, setLineItemValues(order.lineItems))
  o.lineItems.map(_.value).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing value for items")
  }
}
```
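To make the "composition with effects" idea concrete, here's a minimal plain-Scala sketch of a Kleisli over `Either` (standing in for scalaz's `\/`). This is not the scalaz API - the type-parameter order and the `parse`/`recip` functions are simplified assumptions for illustration.

```scala
// A minimal Kleisli: wraps a function A => Either[String, B] and
// composes it with another effectful function via flatMap, so a
// failure anywhere short-circuits the whole chain.
type Status[A] = Either[String, A]

final case class Kleisli[A, B](run: A => Status[B]) {
  def andThen[C](that: Kleisli[B, C]): Kleisli[A, C] =
    Kleisli(a => run(a).flatMap(that.run))
}

val parse = Kleisli[String, Int](s => s.toIntOption.toRight("not a number: " + s))
val recip = Kleisli[Int, Double](n =>
  if (n == 0) Left("division by zero") else Right(1.0 / n))

// No explicit error handling at the composition site.
val pipeline = parse andThen recip
```

Running `pipeline` on `"4"` threads the value through both steps, while a bad input fails at the first step that rejects it, exactly the behavior the domain functions below rely on.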

But what does that buy us? What exactly do we gain from these functional patterns? It's the power to abstract over families of similar abstractions like applicatives and monads. That may sound a bit rhetorical, and justifying their use needs a separate post. Stated simply, they encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the `process` function below - it's a composition of monadic functions in action. All the machinery that processes effects and side-effects is abstracted within the `Kleisli` itself, so the user level implementation stays simple and concise.

Every domain behavior has a chance of failure, which we model using the `Either` monad - here `ProcessingStatus` is just a type alias for it: `type ProcessingStatus[S] = \/[String, S]`. With `Kleisli`, we don't have to write any code for handling failures. As you will see below, the composition reads just like that of normal functions - the pattern takes care of the alternate flows.

Once the `Order` is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same implementation pattern as `valueOrder`.

```scala
def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] { order =>
  val o = orderLineItems.set(order, setLineItemValues(order.lineItems))
  o.lineItems.map(_.discount).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing discount for items")
  }
}
```
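The `sequenceU` call in the two functions above performs the `A[B[_]] => B[A[_]]` flip. Here's a minimal plain-Scala sketch of that operation, specialized to `Vector` and `Option` (no scalaz assumed), to show why a single missing value fails the whole check.

```scala
// Sketch of what sequenceU does in valueOrder/applyDiscounts: turn a
// Vector[Option[A]] into an Option[Vector[A]], yielding None as soon
// as any element is missing.
def sequence[A](xs: Vector[Option[A]]): Option[Vector[A]] =
  xs.foldRight(Option(Vector.empty[A])) { (x, acc) =>
    for { v <- x; vs <- acc } yield v +: vs
  }
```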

Finally we check out the `Order` ..

```scala
def checkOut = Kleisli[ProcessingStatus, Order, Order] { order =>
  val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) { (s, i) =>
    s |+| (i.value |+| i.discount.map(d =>
      Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
  }
  right(orderNetValue.set(order, netOrderValue))
}
```
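The fold in `checkOut` leans on scalaz's `Option` monoid (`|+|`) and multiplication tags. Here's a plain-Scala sketch of the same arithmetic under those assumptions - value minus discount, accumulated across line items, with `Item` standing in for `LineItem`.

```scala
// Mirrors scalaz's Option monoid: add when both sides are present,
// otherwise keep whichever side is defined.
def combine(a: Option[BigDecimal], b: Option[BigDecimal]): Option[BigDecimal] =
  (a, b) match {
    case (Some(x), Some(y)) => Some(x + y)
    case (x, None)          => x
    case (None, y)          => y
  }

// Stand-in for LineItem with its optional value and discount.
case class Item(value: Option[BigDecimal], discount: Option[BigDecimal])

// Net value = sum over items of (value - discount); multiplying the
// discount by -1 plays the role of Tags.Multiplication in the original.
def netValue(items: Vector[Item]): Option[BigDecimal] =
  items.foldLeft(Option(BigDecimal(0))) { (s, i) =>
    combine(s, combine(i.value, i.discount.map(BigDecimal(-1) * _)))
  }
```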

And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.

```scala
def process(order: Order) =
  (valueOrder andThen applyDiscounts andThen checkOut) =<< right(orderStatus.set(order, Validated))
```

In case you are interested in the full source code of this small example, feel free to take a peek at my github repo.

## 3 comments:

I could not have said it better myself... and I didn't ;)

I value the insight that even though OO has an affinity to DDD, coupling, state and the drawbacks of encapsulation have to be mitigated with FP.

Thank you for writing this article and pointing to the presentation that triggered your train of thoughts.

I was wondering if the version you created would actually make the code more 'readable and succinct' (as you say). So I thought I'd make a non-anemic version.

When I forked your source code and started reading, I was quite shocked. The code was very hard to understand! This is probably caused by the fact that I have little experience with functional programming and little to no experience with ScalaZ.

I do understand the concepts of Monoid, Monad, Monad Transformers, Type Classes, Lenses and sequence (A[B[_]] => B[A[_]]). I do not know a lot about their implementation in ScalaZ, but I do know the disjunction type.

So my first step was to refactor the code so that I would be able to understand it more easily. You can find the result of that refactoring here: https://github.com/eecolor/scala-snippets/blob/master/src/main/scala/aggregate.scala

I have placed comments in the code and I wonder if you agree with changes I made. If not I am eager to hear your opinion.

The second step was a non-anemic version of the example, that one can be found here: https://github.com/EECOLOR/scala-snippets/blob/master/src/main/scala/aggregateNonAnemic.scala

I wonder what you think about the non-anemic version and I hope you can shoot holes in it. My goal is to learn more and I am probably missing some cases where the non-anemic version would fail.

English is not my native language, so I might come across the wrong way. I understand that the code you wrote is a simplified example and that some choices might look too complicated as a result of that. I am not trying to bash your code or opinions, just trying to find the value in your approach.
