Monday, July 31, 2006

Inside the New ConcurrentMap in Mustang

Tiger offered a large number of killer goodies for Java developers. Some of them have enjoyed major focus in the community, like generics, the enhanced for-loop, autoboxing, varargs, type-safe enums etc. But none has had as sweeping an impact as the new java.util.concurrent package. Thanks to Doug Lea, Java now boasts the best library support for concurrent programming in the industry. Martin Fowler relates an interesting anecdote in his report on OOPSLA 2005:
While I'm on the topic of concurrency I should mention my far too brief chat with Doug Lea. He commented that multi-threaded Java these days far outperforms C, due to the memory management and a garbage collector. If I recall correctly he said "only 12 times faster than C means you haven't started optimizing".

Indeed, the concurrency model in Tiger has brought the implementation of non-blocking algorithms and data structures, based on the all-important CAS primitive, into mainstream programming. For a general introduction to CAS and non-blocking algorithms in Java 5, along with examples and implementations, refer to the Look Ma, no locks! article by Brian Goetz.

Lock-Free Data Structures

The most common way to synchronize concurrent access to shared objects is to use mutual exclusion locks. While Java has long offered locking at various levels as its synchronization primitive, Tiger brings us non-blocking data structures and algorithms based on the compare-and-set (CAS) primitive, available in all modern processors. CAS is a processor primitive which takes three arguments - the address of a memory location, an expected value and a new value. If the memory location holds the expected value, it is assigned the new value atomically. Unlike lock-based approaches, where a delayed lock-holder thread can degrade performance, lock-free implementations guarantee that of all threads trying to perform operations on a shared object, at least one will complete within a finite number of steps, irrespective of the other threads' actions.
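
To make the semantics concrete, here is an illustrative sketch of CAS written as ordinary synchronized Java - keep in mind that the real primitive is a single atomic hardware instruction, with no lock anywhere:

// Illustrative only: models the semantics of CAS using a lock. Real CAS
// is one atomic hardware instruction - no locking involved.
public class SimulatedCAS {
  private int value;

  public synchronized int get() {
    return value;
  }

  public synchronized boolean compareAndSet(int expected, int newValue) {
    if (value == expected) {
      value = newValue;   // assign only if the expectation still holds
      return true;
    }
    return false;         // a competing thread got in first; caller retries
  }
}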

java.util.concurrent provides ample implementations of lock-free algorithms and data structures in Tiger. All of these are covered extensively in Brian Goetz's excellent book Java Concurrency In Practice, released at JavaOne this year - go get it, if you haven't yet.


I must admit that I am not a big fan of the management of Sun Microsystems, and the confused state of mind that Schwartz and his folks portray to the community. Innovation happens elsewhere - this has never been more true of the way the Java community has been moving. And this is what has kept Sun moving - the vibrant Java community has been the real lifeblood behind Java's undisputed leadership in the enterprise software market today (Ruby community - are you listening?). The entire Java community is still working tirelessly to improve Java as the computing platform. Lots of research is still going on to increase the performance of memory allocation and deallocation in the JVM (see this). Lots of heads are burning out over implementing generational garbage collection, thread-local allocation blocks and escape analysis in Java. Doug Lea is still working on how to make concurrent programming easier for us mere mortals. This, I think, is the main strength of the Java community - any other platform that promises more productivity has to walk (or rail) many more miles to come up with something similar.

In this post, I will discuss one such innovation that has been bundled into Mustang. I discovered it only recently while grappling with the Mustang source drop, and thought that this exceptional piece of brilliance deserves a column of its own.

The New ConcurrentMap in Mustang

In Tiger we had ConcurrentHashMap as an implementation of the ConcurrentMap interface. Mustang comes with another variant of the map - the contract for ConcurrentNavigableMap and a brilliant implementation in ConcurrentSkipListMap. Have a look at the source code for this beast - you will be thankful that data structures are there to encapsulate the guts and provide easy-to-use interfaces to application developers.
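
As a quick taste of what the new contract buys you, here is a minimal usage sketch - a sorted, navigable map that stays safe under concurrent access (the class and method names are from the Mustang API; the data is illustrative):

import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListMapTaster {
  public static void main(String[] args) {
    ConcurrentNavigableMap<Integer, String> map =
        new ConcurrentSkipListMap<Integer, String>();
    map.put(10, "ten");
    map.put(20, "twenty");
    map.put(30, "thirty");

    // navigation methods come for free, even under concurrent mutation
    System.out.println(map.firstKey());      // 10
    System.out.println(map.ceilingKey(15));  // 20
    System.out.println(map.headMap(30));     // {10=ten, 20=twenty}
  }
}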

Concurrent programming has never been easy, and lock-free concurrency implementation is definitely not for lesser mortals. We are blessed to have people like Doug Lea take care of these innards and expose easy-to-use interfaces to us, the user community. Despite the fact that research on lock-free data structures has been going on for more than a decade, the first efficient and correct lock-free list-based set algorithm (CAS based) that is compatible with lock-free memory management methods came out only in 2002. Lea's implementation of ConcurrentSkipListMap is based on this algorithm, although it uses a slightly different strategy for handling deletion of nodes.

Why SkipList?

The most common data structure for implementing sorted collections is some form of balanced tree. The current implementation of ConcurrentSkipListMap departs from this route and uses a probabilistic alternative - skip lists. As Bill Pugh says:

Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.


The verdict is not as clear as Bill says, but the main reason for using skip lists in the current implementation is that there are no known efficient lock-free insertion and deletion algorithms for search trees (refer to the JavaDoc for the class). The class uses a two-dimensionally linked skip list implementation in which the base list nodes (holding key and data) form a separate level from the index nodes.

Lock-Free Using CAS Magic

Any non-blocking implementation has a core loop, since the compareAndSet() method relies on the fact that one of the threads trying to access the shared resource will complete. Here is the snippet from Brian Goetz's article (look at the increment() method of the counter):

import java.util.concurrent.atomic.AtomicInteger;

public class NonblockingCounter {
  // needs initialization, else the first get() would throw a NullPointerException
  private final AtomicInteger value = new AtomicInteger(0);

  public int getValue() {
    return value.get();
  }

  public int increment() {
    int v;
    do {
      v = value.get();                          // read the current value
    } while (!value.compareAndSet(v, v + 1));   // retry until our CAS wins
    return v + 1;
  }
}


Similarly, the implementation methods in ConcurrentSkipListMap all have a basic loop in order to ensure a consistent snapshot of the three-node structure (node, predecessor and successor). Repeated traversal is required here because the 3-node snapshot may have been rendered inconsistent by some other thread, either through deletion of the node itself or through removal of one of its adjacent nodes. This is typical CAS coding and can be found in the implementation methods doPut(), doRemove(), findNode() etc.

Handling Deletion

The original designers of these algorithms for list-based sets used mark bits and lazy removal for deletion of nodes. Doug Lea made a clever improvisation here to use a marker node (with a directly CAS'able next pointer) instead, which works faster in a garbage-collected environment. He, however, retains the key technique of marking the next pointer of a deleted node in order to prevent a concurrent insertion. Here is the sequence of actions that take place in a delete (a simplified sketch in code follows the list):

  1. Locate the node (n)

  2. CAS n's value to null

  3. CAS n's next pointer to point to a marker node

  4. CAS n's predecessor's next pointer over n and the marker

  5. Adjust index nodes and head index level
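
Here is a loose sketch of steps 2 through 4 in code - emphatically not the actual ConcurrentSkipListMap source, just the shape of the idea on a bare-bones node with CAS'able fields (step 5, the index-level cleanup, is omitted):

import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// Bare-bones base-level node; a null key denotes a marker node.
class Node<V> {
  final Object key;
  volatile Object value;   // nulled on logical deletion (step 2)
  volatile Node<V> next;

  Node(Object key, Object value, Node<V> next) {
    this.key = key;
    this.value = value;
    this.next = next;
  }

  static final AtomicReferenceFieldUpdater<Node, Object> VALUE =
      AtomicReferenceFieldUpdater.newUpdater(Node.class, Object.class, "value");
  static final AtomicReferenceFieldUpdater<Node, Node> NEXT =
      AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "next");

  // Attempts to delete this node given its predecessor; returns false if
  // we lost a race and the caller must re-traverse the list.
  boolean delete(Node<V> pred) {
    Object v = value;
    if (v == null || !VALUE.compareAndSet(this, v, null))   // step 2
      return false;
    Node<V> succ = next;
    Node<V> marker = new Node<V>(null, null, succ);         // step 3
    if (NEXT.compareAndSet(this, succ, marker))
      NEXT.compareAndSet(pred, this, succ);                 // step 4
    // if either CAS fails, another traversing thread helps finish the job
    return true;
  }
}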


Any failure can lead to either of the following consequences:

  1. Simple retry when the current thread has lost a race with a competing thread

  2. Some other thread traversing the list hits upon the null value and helps out with the marking / unlinking part


The interesting point is that in either case we have progress, which is the basic claim of the CAS-based non-blocking approach. Harris and Maged Michael have all the gory details of this technique documented here and here.

Postscript

The code for the implementation of ConcurrentSkipListMap is indeed very complex. First, it deals with a multilevel probabilistic data structure (the skip list), and second, it makes that piece concurrent using the lock-free techniques of CAS. But, on the whole, for anyone who enjoys learning data structure implementations, this will definitely be a very good learning experience. The devil is in the details - it could not have been more true than in this exquisite piece from Doug Lea!

Monday, July 24, 2006

From Java to Ruby? Now? Naah ..

Bruce Tate has recently brought out his From Java to Ruby. This is another in the series of publications which profess Ruby as the successor of Java in the enterprise. The book is targeted at technical managers who can stand by the decision of their enlightened programmers to make the royal switch and take this decision upwards within the organization (without getting fired!). I have not yet read Tate's book, but I thoroughly enjoyed reading his Beyond Java. This entry is the blueprint of my thoughts on the subject - will Java be replaced by Ruby in the enterprise today?

Cedric has already voiced his opinion on this subject in one of his usual opinionated (Rails is also opinionated - right?) posts. I think I have a couple of points to add to his list ..

Scaling Up with Ruby

One of the major areas of my concern with Ruby going mainstream is the skillset scalability of the enterprise. The programming force, at large, is now baked in the realm of the supposedly (im)pure OO paradigms of Java. Call it the Perils of Java Schools, the lack of appreciation for the elegance of functional programming, or whatever - the fact is that zillions of developers today are used to programming with assignments and iterators, as they are idiomatic in Java and C++. It will take quite a beating to infuse into them Why FP Matters.

Ruby is elegant, Ruby blocks are cool, Ruby has continuations and Ruby offers a coroutine-based solution for the same-fringe problem. But, again, there ain't no such thing as a free lunch! You have to develop your workforce to take good care of this elegance in developing enterprise-scale applications. The following is an example of the paradigm shift, shamelessly ripped from Bruce Tate's Beyond Java:

The developers in my workforce are used to writing JDBC-style access in Spring using anonymous inner classes:

JdbcTemplate template = new JdbcTemplate(dataSource);
final List names = new LinkedList();

template.query("select user.name from user",
    new RowCallbackHandler() {
      public void processRow(ResultSet rs) throws SQLException {
        names.add(rs.getString(1));
      }
    }
);


Here is a Ruby snippet implementing similar functionality through "blocks":

dbh.select_all("select name, category from animal") do |row|
    names << row[0]
end


A real gem - but developers have to get used to this entirely new paradigm. It is not only a syntactic change; it implies a new thought process on the part of the developer. Remember, one of the reasons why Java could smartly rip apart the C++ community was that it was a look-alike language with a cleaner memory model and a closer affiliation to the Internet. At one point in time, we all thought that Smalltalk had an equal chance of gobbling up the C++ programming fraternity. Smalltalk is a much purer OO language, but proved to be too elegant to be adopted en masse.

Martin Fowler and Bruce Tate have been evangelizing Ruby, and DHH has presented us with a masterfully elegant framework (RoR). But we need more resources to scale up - more books, more tutorials, more evangelism on the idioms of Ruby that have gone into the mastery of RoR.

The Art of Ruby Metaprogramming

Metaprogramming is the second habit of Ruby programmers (possibly after Ruby blocks). Many of the problems that we face today due to the lack of formal AOP in Ruby can be addressed by metaprogramming principles. In fact, metaprogramming offers much more "raw" power than AOP, as is well illustrated by the following method from Rails validation:



def validates_presence_of(*attr_names)
  configuration = { :message => ActiveRecord::Errors.default_error_messages[:blank], :on => :save }
  configuration.update(attr_names.pop) if attr_names.last.is_a?(Hash)

  # can't use validates_each here, because it cannot cope with nonexistent attributes,
  # while errors.add_on_empty can
  attr_names.each do |attr_name|
    send(validation_method(configuration[:on])) do |record|
      unless configuration[:if] and not evaluate_condition(configuration[:if], record)
        record.errors.add_on_blank(attr_name, configuration[:message])
      end
    end
  end
end



But this also reflects my earlier concern - programmers have to be developed to cope with this kind of semantics in their programming. Many of the metaprogramming techniques have become idioms in Ruby - we need more preaching, professing their use and best practices to the programming community. Otherwise Ruby metaprogramming will remain black magic forever.

Final Thoughts

Rails may be the killer app, metaprogramming may be the killer technique, but we all need to be more pragmatic about Ruby's chances in the enterprise. There are performance concerns for Rails; the model that it adopts for ORM is divergent from the one we use in Java, and definitely not one that can back up a solid object-oriented domain model. It is debatable whether this will be a better fit for enterprise applications - but the community needs to tune the framework constantly if it is to compete with the age-old competencies of Java. With Java 5, we have a JVM that has been tuned for the last 10 years, we have killer libraries for concurrency (I heard they are capable of competing with raw C!) and we have oodles of goodies to make Java programming compete with the best-of-breed performant systems. We have Mustang and Dolphin ready to make their impact on the enterprise world. It is definitely worth watching whether the elegance of Ruby can scale up to the realities and give Sun (and the entire Java community) a run for their money.

Sunday, July 16, 2006

DAOs on Steroids - Fun Unlimited with Generic DAOs in Spring 2.0

I have already blogged a lot on DAOs in general and generic DAOs in particular (see here, here and here). All those entries are enough to prove that generic DAOs provide better engineering than boilerplate DAOs, resulting in substantially less code. This entry rounds up all of my thoughts on generic DAOs and how deploying them with the Spring 2.0 container results in seamless injection into the domain model of an application. Without much ado, buckle up for the fun-unlimited roller coaster ride of the DAO world.

Generic DAO Abstraction

At the risk of repetition, let me recall the generic abstraction for a DAO - the DAO is parameterized on the class DomainBase, the base class for all domain objects. In case you want to use DTOs, the parameterization can be done on DTOBase as well. For now, let us assume that we will use the rich domain model without the behaviourless DTOs.

public abstract class DAOBase<T extends DomainBase> {
  // The underlying implementation.
  private DAOImplBase<T> daoImpl;
  ...
}


The above abstraction uses the pimpl idiom and the Bridge pattern to decouple the abstraction from the implementation. The implementation can be based on JDBC, Hibernate or JPA and can be switched flexibly, without an iota of impact on the client codebase. Cool!

The concrete DAO extends the abstract DAO:

public class EmployeeDAO<T extends DomainBase> extends DAOBase<T> {
  ...
}


The concrete DAO can be kept generic as well, in order to allow the user to instantiate it with multiple domain abstractions. If you want to keep it simple, you can make the concrete DAO a non-generic implementation instead ..

public class EmployeeDAO extends DAOBase<Employee> {
  ...
}


The Bridge

The abstraction DAOBase<T> delegates all DAO methods to the implementation, which is concretized based on the implementation platform - Hibernate, JPA, or vanilla JDBC.

public abstract class DAOBase<T extends DomainBase> {
  ...
  // sample finder - delegates to the platform-specific implementation
  public final <Context> List<T> find(
        Context ctx,
        ICriteria cri,
        Class<T> clazz) {

    return daoImpl.find(ctx,
        getTableName(),
        AndCriteria.getInstance(getJoinCondition(), cri),
        clazz);
  }
  ...
}


The implementation leg of the bridge provides a concrete implementation for each specific platform ..

// JDBC-based implementation
public class DAOJDBCImpl<T extends DomainBase>
    extends DAOImplBase<T> {
  ...

  @Override
  public <Context> List<T> find(Context ctx,
        String tableName,
        ICriteria cri,
        Class<T> clazz) {

    try {
      Connection conn = (Connection) ctx;

      SelectStatement stmt = new SelectStatement()
          .setFromClause(tableName)
          .setSelectClause(" * ")
          .setWhereClause(cri);

      return QueryBuilderUtils.query(conn, stmt.toSelectString(), clazz, 0);
    } catch (SQLException e) {
      throw new DataAccessException(e);
    }
  }
  ...
}


Similarly for the JPA-based implementation ..

// JPA based implementation
public class DAOJPAImpl<T extends DomainBase>
      extends DAOImplBase<T> {
  ...
}


Concrete DAOs

Now that the base abstraction DAOBase provides the contract and the implementation hierarchy provides generic implementations of all the DAO methods, the concrete DAOs only have to provide the table-specific information that will be used by the typed interfaces at runtime.

Here's a minimalist implementation ..



public class EmployeeDAO<T extends DomainBase>
      extends DAOBase<T> {

  public enum ColumnNames {
    // Enum for column names.
  }

  /**
   * Returns a list of the column names.
   * @return list of column names.
   */
  protected List<String> getColumnNames() {
    ...
  }

  /**
   * Subclasses must override and provide the TABLE_NAME
   * that the bean is associated with.
   *
   * @return the table name.
   */
  public String getTableName() {
    return "EMPLOYEE";
  }

  /**
   * {@inheritDoc}.
   */
  protected ICriteria getPrimaryKeyWhereClause(T employee) {
    ...
  }
}
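
To see the typed interface at work, here is a hypothetical client-side snippet - the JDBC Connection serves as the Context, while EqualsCriteria and dataSource are assumed names of my own, in the spirit of the AndCriteria factory used earlier:

// Hypothetical usage sketch; EqualsCriteria and dataSource are illustrative.
Connection conn = dataSource.getConnection();

EmployeeDAO<Employee> dao = new EmployeeDAO<Employee>();
List<Employee> engineers =
    dao.find(conn,
             EqualsCriteria.getInstance("DEPT", "ENGINEERING"),
             Employee.class);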



Deploy in Spring

The creation of DAOs can be controlled through the Spring IoC container - all DAOs will be singleton beans, lazily loaded ..

<bean id="empdao"
  class="org.dg.gen.EmployeeDAO"
  lazy-init="true"/>


Quite straightforward - uh!

In the case of generic concrete DAOs, the EmployeeDAO<Employee> will have to be instantiated through a static factory method ..

public class EmployeeDAO<T extends DomainBase> extends DAOBase<T> {
  ...
  public static EmployeeDAO<Employee> makeDAO() {
    return new EmployeeDAO<Employee>();
  }
}


No problem .. add the factory method to the configuration and Spring's static factory-method instantiation magic takes care of the rest.

<bean id="empdao"
  class="org.dg.gen.EmployeeDAO"
  lazy-init="true"
  factory-method="makeDAO">
</bean>


The DAO needs to be wired into the domain object, since the rich domain object may need to query the database using the DAO. Spring 2.0 to the rescue - Spruce Up Your Domain Model and inject the DAO into it through 2.0 magic ..

@Configurable
public class Employee extends DomainBase {

  private EmployeeDAO<Employee> dao;

  public void setDao(EmployeeDAO<Employee> dao) {
    this.dao = dao;
  }
  ...
}


Remember, the Employee domain object will not be instantiated by Spring - yet the setter injection works, courtesy of the @Configurable annotation. And of course the following addition to the configuration ..

<aop:spring-configured/>
<bean class="org.dg.gen.Employee"
  singleton="false">
  <property name="dao"><ref bean="empdao"/></property>
</bean>


This is the Spring 2.0 dessert course - elaborated more here.

Excitement with AOP Introductions

The Spring AOP Introductions magic helps you add functionality to an existing DAO by wrapping it in a proxy and defining new interfaces to be implemented. This article describes how introductions are used to add a number of custom finder methods to DAO implementations specific to each domain object. This is a real treat that Spring AOP offers to customize your DAOs individually on top of generic abstractions. The custom functionality can be implemented as separate classes / interfaces and injected specifically into selected DAOs as you need them. The best part is that all of this machinery works non-invasively - your individual concrete DAOs can still be generated through your MOJOs (as Maven plugins), yet you can add specific functionality through custom interfaces injected via the magic of Spring AOP.

As the developerWorks paper suggests, all you have to do to integrate this AOP machinery into your application's configuration file is:


  1. Define a bean for a custom org.springframework.aop.IntroductionAdvisor for handling the additional methods that you would like to introduce to your DAO

  2. Define a bean for the target of interception, which the proxy will wrap. Make it abstract to enable reuse of the definition in specific DAOs

  3. Define the proxy of the class org.springframework.aop.framework.ProxyFactoryBean, which will wrap the target of the above step

  4. Finally, add the bean for the specific DAO along with the proxy interfaces that it needs to implement



For the full and final elaboration, have a look at the excellent treatment of this subject in the paper mentioned above.
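
For a flavour of the Java side of such an introduction, here is a hedged sketch - the IEmployeeFinder interface and the mixin are illustrative names of my own, while DelegatingIntroductionInterceptor and DefaultIntroductionAdvisor are the actual Spring support classes:

import java.util.Collections;
import java.util.List;
import org.springframework.aop.support.DefaultIntroductionAdvisor;
import org.springframework.aop.support.DelegatingIntroductionInterceptor;

// The custom finder contract to be introduced into the DAO proxy.
interface IEmployeeFinder {
  List<Employee> findByDepartment(String dept);
}

// The mixin: the interceptor base class routes invocations of the
// introduced interface on the proxy to this object.
class EmployeeFinderMixin extends DelegatingIntroductionInterceptor
    implements IEmployeeFinder {

  public List<Employee> findByDepartment(String dept) {
    // in real life, delegate to the generic DAO; placeholder here
    return Collections.emptyList();
  }
}

// The advisor that the ProxyFactoryBean definition picks up.
class EmployeeFinderAdvisor extends DefaultIntroductionAdvisor {
  public EmployeeFinderAdvisor() {
    super(new EmployeeFinderMixin(), IEmployeeFinder.class);
  }
}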

Monday, July 10, 2006

Spring Web Flow - A Declarative Web Controller Architecture

In the war of the web application frameworks, the buzzword (or buzzphrase) is state management for conversational applications. Soon after JBoss released Seam, I blogged about its state management capabilities using contextual components. At that point in time I had not looked at the corresponding Spring offering, and Keith was quick to point me to Spring Web Flow, which offers similar functionality for modeling conversational Web applications.

I must admit that I am an ardent Spring fan - not because it is the most popular IoC container out there. The feature of the Spring application framework that appeals to me most is the non-intrusive / non-invasive approach it evangelizes, encouraging developers to declare injection points externally into pre-built components, rather than forcing components to adopt framework idioms. When I started digging into Spring Web Flow, I was not surprised to find the same theme applied in this controller architecture as well. In fact, it is quite possible to implement a Spring Web Flow application without any Spring Web Flow specific Java code - it can be all POJOs, injected via DI into user-defined scoped flows and totally agnostic of the web stack above.

Flow is the King

When I started fiddling with Spring Web Flow, the first thing that struck me was the level of abstraction at which a Flow has been modeled. In a controller architecture, a flow has to be the central abstraction, and all other functionality revolves around it. Keith has expressed it well in his blog:

Since a Flow is a first-class object (effectively a mini application module), you can interact with it, message it, store it, observe it, intercept it, etc. This is very much a contrast to traditional controller frameworks where there is simply no first-class concept of a "Flow"; rather, there are only a bunch of independent actions (or pages) whose executions are driven by the browser.


Technology Agnostic Abstractions

The abstractions of Spring Web Flow are not coupled to specific protocols like the Http Servlet APIs. Instead, there is an extra level of indirection in the abstraction layer, which makes integration with other technologies (like Struts, JSF and Tapestry) quite straightforward.

public interface FlowExecution {
  public ViewDescriptor start(Event startingEvent);
  public ViewDescriptor signalEvent(Event event);
}

public interface Action {
  public Event execute(RequestContext context);
}


In the above interfaces, ViewDescriptor, Event and RequestContext are top-level abstractions which can be subclassed appropriately to provide specific implementations. This is a big deviation from JBoss Seam's philosophy, which ties the latter to EJB 3.0 and JSF - no wonder Gavin's initial intuition was that the two models simply don't make sense together.
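
Given the Action contract above, plugging your own behaviour into a flow can be as small as the following sketch - the event id "success" follows SWF convention and gets mapped to a transition in the flow definition; treat the details as illustrative:

// A minimal custom Action against the contract shown above (imports omitted).
public class AuditAction implements Action {
  public Event execute(RequestContext context) {
    // real business logic goes here - lookups, validations, etc.
    System.out.println("executing audit action");
    return new Event(this, "success");
  }
}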

Continuation-based Flow Control

SWF offers continuation-based flow control for conversational applications. This implies uniform handling of browser navigation buttons, simultaneous multiple conversations and cool, transparent, Strategy-based flow execution storage - all out of the box! Again, what I like about this continuation support is that Spring provides the correct level of framework abstraction. The purpose of Spring is not to provide a full-blown continuation framework (like RIFE, which has a different purpose in life) - SWF provides continuations for DSL-based flow execution, whereby a snapshot can be saved and restored at any point. This is a clear decision that avoids over-engineering, or what Bruce Eckel would call Framework-itis.

OTOH, conversational state in Seam is managed using stateful session beans and what Gavin calls Subversion of Control (Bijection = Injection + Outjection). It is an alternative approach to what SWF offers. However, I find the SWF model more intuitive for addressing the crosscutting concern that conversations present.

Other Cool Features

Apart from the above specials, there is lots of cool stuff up for grabs in SWF. Here are a few samplers which caught my eye:


  • Conditional Transitions as part of the flow control DSL, along with a pluggable TransitionCriteria abstraction.

  • Solid support for wizard style flow cases.

  • Flow-By-Contract - support for enhanced flow lifecycle callbacks through an Observer based listener interface. You can drop-in pre-conditions and post-conditions with invariants for the flow to pass through that state.

  • Reusable sub flows as a separate abstraction.

  • Bean method binding capability while modeling your flow action.

  • Transactional Web Flows - no more token hell to prevent duplicate form submission. And the best part is that it works transparently for flow conversation snapshots as well. All you have to do is turn on the transactional property in the web flow declaration in your DSL.

  • Strategy-based state exception handlers which you can plug into your web flow declaration.



The distribution of SWF comes with a rich set of examples, which amply demonstrate all of the above features. The sellItem example is the richest one and demonstrates all the killer features. I fiddled with the distribution for a week (not as part of my regular day job, though) and came across the above cool stuff. On the whole it looks to be a regular Spring delivery - framework-based, patterns-enriched, easy to use and, above all, non-invasive (to me, the coolest aspect of any Spring component).

Sunday, July 02, 2006

Spring 2.0 AOP - Spruce Up Your Domain Model

I just started playing around with Spring 2.0 and poked at some of the new features over the weekend. The coolest of them looked to be the ability to attach post-instantiation processors to beans which have NOT been instantiated by the Spring container. This means a lot to me when I think about how this feature can add value to the current scheme of things in a POJO-based, Spring-configured application.

Have a look at the following domain class from our current application:

// models a security trade
public class Trade {
  // state
  // getters and setters
  // domain behavior
}

A typical domain object is instantiated in the application either from the ORM layer (read Hibernate, through persistence services) or by the user using factories. It is never instantiated by the Spring container. Hence there is no way we can use Spring DI to inject services into the domain model. Any service injection that is done will be through hardwired code:

public class Trade {
  // ...
  public BigDecimal calculateAccruedInterest(...) {
    BigDecimal interest =
      InterestRateDao.getInstance().find(...);
    // ...
  }
  // ...
}

The above domain class now has a hardwired dependency on the class InterestRateDao, which brings in all sorts of unwanted side-effects:


  1. The domain layer is no longer persistence agnostic

  2. The domain classes are no longer unit-testable without the service layer

  3. Business logic proliferates into the controllers



Let me explain a bit on the third point ..

Since I cannot have transparent DAO injection in my domain model, I cannot keep the calculateAccruedInterest() method with my domain object. The inevitable impact is to move the logic down to the controller, whose lifecycle can be configured using Spring. Now I have a controller class which computes the accrued interest of the Trade once the object has been instantiated by the persistence layer. The result? My domain logic has started infiltrating the controller layer, which ideally should be a facade only, and strictly a *thin* glue between the domain layer and the presentation layer.

// Controller class for trade entry use case
public class TradeService {
  // ...
  public void enterTrade(...) {
    Trade tr = TradeFactory.create(...);
    // accrued interest needs to be calculated only for bond trades
    if (tr.isBondTrade()) {
      tr.setAccruedInterest(
        AccruedInterestCalculator.getInstance()
        .calculateAccruedInterest(tr));
    }
  }
  // ...
}

Design Smell!! The domain model becomes anemic and the controller layer becomes fleshy.

Enter Spring 2.0

The new AOP extensions in Spring 2.0 allow dependency injection of any object even if it has been created outside the control of the container. Our domain objects fit nicely in this category. Service objects can now be injected into domain objects so that the domain model can be enriched with domain behavior ensuring proper separation of concerns across the layers of the application architecture. The enriched domain behavior can now interact with the domain state in a more object-oriented way than the erstwhile anemic model.

Spring 2.0 offers annotation-driven aspects as the recommended approach to dependency injection into the domain model. Let us see how the Trade class changes:

@Configurable("trade")
public class Trade {
  // ...
  public BigDecimal calculateAccruedInterest(...) {
    BigDecimal interest =
      dao.find(...);
    // ...
  }
  // ...

  // injected DAO
  private InterestRateDao dao;
  public void setDao(InterestRateDao dao) {
    this.dao = dao;
  }
}

And the usual stuff in the XML application context:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
...>

  <aop:spring-configured/>

  <bean id="interestRateDao"
    class="com.xxx.dao.InterestRateDaoImpl"/>

  <bean id="trade"
    class="com.xxx.Trade"
    lazy-init="true">
    <property name="dao" ref="interestRateDao" />
  </bean>
</beans>
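
The net effect, assuming the annotated types are woven with the AspectJ weaver (build-time or load-time), is sketched below - Spring never creates the object, yet it comes out fully wired:

// Illustrative client code - Trade is instantiated outside the container.
Trade trade = new Trade();

// The @Configurable aspect has already injected the InterestRateDao bean,
// so domain behavior that needs persistence services just works.
BigDecimal interest = trade.calculateAccruedInterest(/* ... */);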


Takeaways


  • The anemic domain model is history. Fine-grained DI using AOP helps the domain model regain smart behavior.

  • No compromise on unit testability. For the above annotation to kick in, the annotated types must be woven with the AspectJ weaver, either through a build-time Ant or Maven task or through load-time weaving. Do away with the AspectJ weaving, replace the property references with mock objects in the XML, and you are all set to test-drive your domain model.

  • All crosscutting infrastructure services can be injected transparently into the domain model from a single point of contact.

  • In case you are allergic to annotations (can't believe many people are!), you may also use the AnnotationBeanConfigurerAspect in spring-aspects.jar (distributed with Spring) to implement the same behavior.

  • Service Objects, Factories and Repositories are typical examples of artifacts that can be injected into the domain model.