Tuesday, October 31, 2006

Domain Classes or Interfaces ?

A few days back I initiated a thread on the Domain Driven Design mailing list regarding the usage of pure Java interfaces as the contract for domain objects. The discussion turned out to be quite interesting, with Sergio Bossa pitching in as the main supporter of using pure interfaces in the domain model. Personally I am not a big follower of the pure-interfaces-as-intention-revealing camp - however, I enjoyed the discussion with Sergio and the other participants of the group. Sergio has posted the story with his thoughts on the subject. The current entry is a view from the opposite camp, and not really a Java pure-interface love affair.

The entire premise of Domain Driven Design is based upon evolving a domain model as the cornerstone of the design activity. A domain model consists of domain level abstractions, which build upon intention revealing interfaces drawn from the Ubiquitous Language. And when we talk about abstractions, we talk about data and the associated behavior. The entire purpose behind DDD is to manage the complexity in the modeling of these abstractions, so that we have a supple design that can be carefully extended by the implementers and easily used by other clients. In the process of extension, the designer needs to ensure that the basic assumptions or behavioral constraints are never violated and the abstractions' published interfaces always honor the basic contractual framework (pre-conditions, post-conditions and invariants). Eric Evans never meant Java interfaces when he talked about intention-revealing-interfaces - what he meant was more in terms of contract or behavior, to be modeled with the most appropriate artifact available in the language of implementation.

Are Java interfaces sufficiently intention-revealing ?

The only scope that the designer has to reveal the intention is through the naming of the interface and its participating methods. Unfortunately Java interfaces are not rich enough to model any constraints or aspects that can be associated with the published apis (see here for some similar stuff in C#). Without resorting to some of the non-native techniques, it is never possible to express the basic constraints that must be honored by every implementation of the interface. Let us take an example from the capital market domain :


interface IValueDateCalculator {
  Date calculateValueDate(final Date tradeDate)
      throws InvalidValueDateException;
}



The above interface complies with all the criteria of an intention-revealing-interface. But does it provide all the constraints that an implementor needs to be aware of ? How do I specify that the calculated value-date should be a business date after the trade date, and must be at least three business days ahead of the input trade-date ? Pure Java interfaces do not allow me to specify any such criteria. Annotations cannot be of any help either, since annotations on an interface do not get inherited by the implementations.
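That last limitation is easy to demonstrate (the @Contract annotation below is hypothetical, invented purely for illustration): even an annotation declared @Inherited propagates only down the superclass chain, never from an implemented interface.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A hypothetical marker annotation, even declared @Inherited
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Contract {
}

@Contract
interface Calculator {
}

// Implementing the annotated interface: the annotation does NOT carry over
class CalculatorImpl implements Calculator {
}

@Contract
class Base {
}

// Extending the annotated class: @Inherited DOES carry the annotation over
class Derived extends Base {
}
```

So whatever metadata you attach to the interface stays with the interface - implementations remain free of it.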

Make this an abstract class with all constraints and a suitable hook for the implementation :


abstract class ValueDateCalculator {
  public final Date calculateValueDate(final Date tradeDate)
      throws InvalidValueDateException {
    Date valueDate = doCalculateValueDate(tradeDate);
    if (DateUtils.before(valueDate, tradeDate)) {
      throw new InvalidValueDateException("...");
    }
    if (DateUtils.dateDifference(valueDate, tradeDate) < 3) {
      throw new InvalidValueDateException("...");
    }
    // check other post conditions
    return valueDate;
  }

  // hook to be implemented by subclasses
  protected abstract Date doCalculateValueDate(final Date tradeDate)
      throws InvalidValueDateException;
}



The above model checks all the constraints that need to be satisfied once the implementation calculates the value-date by overriding the template method. On the contrary, with pure interfaces (the first model above), the following alternatives are available in order to honor all constraints :

  • Have an abstract class implementing the interface, which will have the constraints enforced. This results in an unnecessary indirection without any value addition to the model. The implementers are supposed to extend the abstract class (which anyway makes the interface redundant), but, hey, you cannot force them. Some adventurous soul may prefer to implement the interface directly, and send all your constraints for a toss!

  • Allow multiple implementations to proliferate each having their own versions of constraints implementations - a clear violation of DRY.

  • Leave everything to the implementers, document all constraints in Javadoc and hope for the best.
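The first alternative can be sketched in a few lines (the class names below are mine, not from the original model; checked exceptions are replaced with unchecked ones for brevity) - the abstract skeleton enforces the constraints, but nothing stops an adventurous soul from implementing the interface directly:

```java
import java.util.Date;

// The published contract - same shape as IValueDateCalculator above
interface IValueDateCalculator {
  Date calculateValueDate(Date tradeDate);
}

// Alternative 1: an abstract skeleton that enforces the constraints.
// Implementers are *supposed* to extend this - but cannot be forced to.
abstract class AbstractValueDateCalculator implements IValueDateCalculator {
  public final Date calculateValueDate(Date tradeDate) {
    Date valueDate = doCalculateValueDate(tradeDate);
    if (valueDate.before(tradeDate)) {
      throw new IllegalStateException("value date before trade date");
    }
    return valueDate;
  }

  protected abstract Date doCalculateValueDate(Date tradeDate);
}

// The adventurous soul: implements the interface directly,
// and all the constraints go for a toss
class RogueCalculator implements IValueDateCalculator {
  public Date calculateValueDate(Date tradeDate) {
    // violates the invariant, yet satisfies the compiler
    return new Date(tradeDate.getTime() - 1000L);
  }
}
```

The compiler is perfectly happy with RogueCalculator - the interface alone gives it no reason not to be.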



Evolving Your Domain Model

Abstract classes provide for easy evolution of the domain model. The process of domain modeling is iterative and evolutionary. Hence, once you publish your apis, you need to honor their immutability, since all published apis will potentially be used by various clients. Various schools of thought adopt different techniques towards achieving this immutability. The Eclipse development team uses extension of interfaces (the Extension Object design pattern) and evolves its design by naming extended interfaces suffixed by a number - the I*2 pattern of interface evolution. Have a look at this excellent interview with Erich Gamma for details on this scheme of evolution. While effective in some situations where you need to implement multiple inheritance, I am not a big fan of this technique for evolving my domain abstractions - firstly, this technique does not scale, and secondly, it requires an instanceof check in client code, which is a code-smell, as the gurus say.
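A minimal sketch of that I*2 style of evolution (the interface names here are made up for illustration) shows where the instanceof check creeps into client code:

```java
// The originally published (and now frozen) contract
interface IPriceCalculator {
  double calculate(double qty, double unitPrice);
}

// New capability goes into an extended interface - the I*2 pattern
interface IPriceCalculator2 extends IPriceCalculator {
  double calculateDiscounted(double qty, double unitPrice, double discount);
}

class PricingClient {
  // Client code is forced into the instanceof check - the code-smell
  static double price(IPriceCalculator calc) {
    if (calc instanceof IPriceCalculator2) {
      return ((IPriceCalculator2) calc).calculateDiscounted(10, 5.0, 0.5);
    }
    return calc.calculate(10, 5.0);
  }
}
```

Every new generation of the interface adds another branch to every such client.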

Once again, to support smooth evolution of your domain apis, you need to back your interfaces with an abstract class implementation and have the implementers program to the abstract class, and not the interface. Then what good is the interface for ?


Are Interfaces Useless in DDD ?

Certainly not. I will use pure interfaces to support the following cases :

  • Multiple inheritance, particularly mixin implementations

  • SPIs, since they will always have multiple implementations and fairly disjoint ones too. The service layer is one which is a definite candidate for interfaces. This layer needs easy mocking for testability, and interfaces fit this context like a charm.


Some of the proponents of interfaces claim testability as a criterion for interface-based design, because of the ease of mocking. Firstly, I am not sure if domain objects can be tested effectively using mocking. Mocking is most suitable for the services and SPIs, and I am a strong supporter of using interfaces towards that end. And even with concrete classes, EasyMock supports mocking through CGLIB-generated proxies.

Finally ...

I think abstract classes provide a much more complete vehicle for implementation of behavior rich domain abstractions. I prefer to use interfaces for the SPIs and other service layers which tend to have multiple implementations and need easy mocking and for situations where I need multiple inheritance and mixin implementations. I would love to hear what the experts have to say on this ..

Monday, October 23, 2006

Why OOP Alone in Java is Not Enough

Object-oriented languages have taught us to think in terms of objects (or nouns) and Java is yet another example of the incarnation of the noun land. When was the last time you saw an elegant piece of Swing code ? Steve Yegge is merciless when he rants about it .. and rightly so ..
Building UIs in Swing is this huge, festering gob of object instantiations and method calls. It's OOP at its absolute worst.

There are ways of making OOP smart - we have been talking about fluent interfaces, OO design patterns, AOP and higher levels of abstraction similar to those of DSLs. But the real word is *productivity*, and the language needs to make its user elegantly productive. Unfortunately in Java, we often find people generating reams of boilerplate (aka getters and setters) that looks like pure copy-paste stuff. Java abstractions thrive on the evil loop of the 3 C's - create-construct-call - along with liberal litterings of getters and setters. You create a class, declare 5 read-write attributes, and you have a pageful of code before you throw in a single piece of actual functionality. Object orientation discourages public attributes and restricts the visibility of implementation details, but none of that prevents a language from providing elegant constructs to handle the boilerplate. Ruby does this, and does it with elan.
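To make the complaint concrete, here is the sort of class the paragraph describes (a made-up example) - three read-write attributes and a screenful of ceremony before any behavior appears, where Ruby would need a single attr_accessor line:

```java
// Pure boilerplate: three read-write attributes, zero domain behavior
class Instrument {
  private String name;
  private String currency;
  private double price;

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }

  public String getCurrency() { return currency; }
  public void setCurrency(String currency) { this.currency = currency; }

  public double getPrice() { return price; }
  public void setPrice(double price) { this.price = price; }
}
```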


Java is not Orthogonal

Paul Graham in On Lisp defines orthogonality of a language as follows :
An orthogonal language is one in which you can express a lot by combining a small number of operators in a lot of different ways.

He goes on to explain how the complement function in Lisp has got rid of half of the *if_not* functions from pairs like [remove-if, remove-if-not], [subst-if, subst-if-not] etc. Similarly in Ruby we can have the following orthogonal usage of the "*" operator across data types :


"Seconds/day: #{24*60*60}" will give Seconds/day: 86400
"#{'Ho! '*3}Merry Christmas!" will give Ho! Ho! Ho! Merry Christmas!


C++ supports operator overloading, which is also a minimalistic way to extend your operator usage.

In order to bring some amount of orthogonality to Java we have lots of frameworks and libraries. This is yet another problem of dealing with an impoverished language - you have a proliferation of libraries and frameworks which add unnecessary layers to your codebase and tend to collapse under their own weight.

Consider the following code in Java to find a matching sub-collection based on a predicate :


class Song {
  private String name;
  ...
  ...
}

// ...
// ...
Collection<Song> songs = new ArrayList<Song>();
// ...
// populate songs
// ...
String title = ...;
Collection<Song> sub = new ArrayList<Song>();
for(Song song : songs) {
  if (song.getName().equals(title)) {
    sub.add(song);
  }
}



The Jakarta Commons Collections framework adds orthogonality by defining abstractions like Predicate, Closure, Transformer etc., along with lots of helper methods like find(), forAllDo() and select() that operate on them, which help users do away with boilerplate iterators and for-loops. For the above example, the equivalent would be :


Collection sub = CollectionUtils.select(songs,
    PredicateUtils.transformedPredicate(
        TransformerUtils.invokerTransformer("getName"),
        PredicateUtils.equalPredicate(title)));



Yuck !! We have got rid of the for-loop, but at the expense of ugly syntax, loads of statics and the loss of the type-safety we take pride in with Java. Of course, in Ruby we can do this with much more elegance and less code :


@songs.select { |song| title == song.name }


and this same syntax and structure will work for all sorts of collections and arrays which can be iterated. This is orthogonality.
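As a sketch of what a step towards such orthogonality could look like in Java 5 itself (the names Predicate and select below are my own, not from any library), a single generic helper can serve every Iterable - though the call site still pays the anonymous-class tax:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Collections2 {
  interface Predicate<T> {
    boolean evaluate(T item);
  }

  // Works uniformly over anything Iterable - collections, custom iterables
  static <T> List<T> select(Iterable<T> items, Predicate<T> p) {
    List<T> result = new ArrayList<T>();
    for (T item : items) {
      if (p.evaluate(item)) {
        result.add(item);
      }
    }
    return result;
  }
}
```

Even so, the client still has to spell out an anonymous Predicate - nothing like the Ruby one-liner.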

Another classic example of non-orthogonality in Java is the treatment of arrays as compared to other collections. You can initialize an array as :


String[] animals = new String[] {"elephant", "tiger", "cat", "dog"};


while for Collections you have to fall back to the ugliness of explicit method calls :


Collection<String> animals = new ArrayList<String>();
animals.add("elephant");
animals.add("tiger");
animals.add("cat");
animals.add("dog");



Besides, arrays have always been second class citizens in the Java OO land - they support covariant subtyping (which is unsafe, hence every array store has to be checked at runtime), cannot be subclassed, and are not extensible unlike the other collection classes. A classic example of non-orthogonality.
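That runtime check is easy to demonstrate - the assignment below compiles cleanly and fails only when the JVM checks the store at runtime:

```java
class CovariantArrays {
  // Arrays are covariant: a String[] is-an Object[], so this compiles.
  // The type hole is plugged only by a runtime check on every store.
  static boolean storeIsRejectedAtRuntime() {
    Object[] objs = new String[2];
    try {
      objs[0] = Integer.valueOf(42); // compiles fine, fails here
      return false;
    } catch (ArrayStoreException expected) {
      return true;
    }
  }
}
```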

The ugliness of initialization syntax and the lack of support for literals have been among the major failings of Java - Steve Yegge has documented it right down to the last bit.

Java and Extensibility

Being an OO language, Java supports extension of classes through inheritance. But once you define a class, there is no scope for extensibility at runtime - you cannot define additional methods or properties. AOP has been in style of late, and has proved quite effective as an extension tool for Java abstractions. But, once again, it is NOT part of the language and hence does not enrich the Java language semantics. There is no meta-programming support that could make Java friendlier for DSL adoption. Look at this excellent example from a recent blogpost :

Creating some test data for building a tree, the Java way :


Tree a = new Tree("a");

Tree b = new Tree("b");
Tree c = new Tree("c");
a.addChild(b);
a.addChild(c);

Tree d = new Tree("d");
Tree e = new Tree("e");
b.addChild(d);
b.addChild(e);

Tree f = new Tree("f");
Tree g = new Tree("g");
Tree h = new Tree("h");
c.addChild(f);
c.addChild(g);
c.addChild(h);



and the Ruby way :


tree = a {
      b { d e }
      c { f g h }
    }



It is really this simple - of course you have the meta-programming engine backing you for creating this DSL. What this implies is that, with Ruby you can extend the language to define your own DSL and make it usable for your specific problem at hand.

Java Needs More Syntactic Sugars

Any Turing complete programming language allows programmers to implement similar functionality. Java is a Turing complete language, but it still does not do enough to boost programmer productivity. Brevity is an important feature of a language, and modern languages like Ruby and Scala offer a lot in that respect. Syntactic sugar is just as important in letting programmers express an implementation concisely. Over the last year or so, we have seen lots of syntactic sugar being added to C# in the form of Anonymous Methods, Lambdas, Expression Trees and Extension Methods. I think Java is lagging behind a lot in this respect. The enhanced for-loop is a step in the right direction. But Sun would do the Java community a world of good by offering other syntactic sugar like automatic accessors, closures and lambdas.
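The enhanced for-loop shows how much even a small piece of sugar buys - compare the explicit-iterator version with its Java 5 equivalent:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class ForLoopSugar {
  // Before the enhanced for-loop: explicit iterator boilerplate
  static int sumOld(List<Integer> xs) {
    int sum = 0;
    for (Iterator<Integer> it = xs.iterator(); it.hasNext();) {
      sum += it.next();
    }
    return sum;
  }

  // Java 5: the same loop with the syntactic sugar
  static int sumNew(List<Integer> xs) {
    int sum = 0;
    for (int x : xs) {
      sum += x;
    }
    return sum;
  }
}
```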

Proliferation of Libraries

In order to combat Java's shortcomings at complexity management, over the last five years or so we have seen the proliferation of a large number of libraries and frameworks that claim to improve programmer productivity. I gave an example above, which proves that there is no substitute for language elegance. These so-called productivity enhancing tools are layers added on top of the language core, and have mostly been delivered as generic solutions to generic problems. There you are .. a definite case of Frameworkitis. Boy, I need to solve this particular problem - why should I incur the overhead of all the generic implementations ? Think DSL - my language should allow me to carve out a domain specific solution using a domain specific language. This is where Paul Graham positions Lisp as a programmable programming language. I am not saying all Java libraries are crap - believe me, some of them really rock; java.util.concurrent is one of the most significant value additions to Java ever, and AOP is the closest approximation to meta-programming in Java. Still, I feel many of them would not have been there, had Java been more extensible.

Is it Really Static Typing ?

I have been thinking really hard about this issue of lack of programmer productivity with Java - is static typing the main issue ? Or is it the lack of meta-programming features and the ability that languages like Ruby and Lisp offer to treat code and data interchangeably ? I think it is a combination of both - besides, Java does not support first-class functions, does not have closures as yet, and lacks some of the other productivity tools like parallel assignment, multiple return values, user-defined operators and continuations that make a programmer happy. Look at Scala today - it definitely has all of them, and supports static typing as well.


In one of the enterprise Java projects that we are executing, the Maven repository has reams of third party jars (mostly open source) that claim to do a better job of complexity management. I know Ruby is not enterprise ready, Lisp never claimed to deliver performance in a typical enterprise business application, and Java does the best under the current circumstances. And the strongest point of Java is the JVM, possibly the best under the Sun. Initiatives like Rhino integration, JRuby and Jython are definitely in the right direction - we would all love to see the JVM evolve into a friendly nest for the dynamic languages. The other day, I was listening to Gilad Bracha's session on "Dynamically Typed Languages on the Java Platform" delivered at Lang .NET 2006. He discussed the invokedynamic bytecode and hotswapping, to be implemented in the near future on the JVM. Possibly this is the future of the Java computing platform - it's the JVM that holds more promise for the future than the core Java programming language.

Monday, October 09, 2006

AOP : Writing Expressive Pointcuts

Aha .. yet another rant on AOP and pointcuts, this time expressing some of the concerns with the most important aspect of aspects - the pointcut descriptors. For aspects to be first class citizens of the domain modeling community, pointcut descriptors will have to be much more expressive than they are today in AspectJ. Taking an example from one of the threads in the "aspectj-users" forum, AOP expert Dean Wampler himself mixed up call( @MyAnnotation *.new(..) ) and call( (@MyAnnotation *).new(..) ) while answering a query from another user. The former pointcut matches all constructors annotated with @MyAnnotation, while the latter matches constructors in classes where the class itself carries the annotation.

This is, at best, confusing - the syntax is not expressive, and the liberal sprinkling of position-dependent wildcard characters poses a real challenge to beginners in AspectJ. Dean has some suggestions in his blog for making pointcut languages more expressive - as he points out, the solution is to move towards a flexible DSL for writing pointcuts in AspectJ.

What we write today as :


execution(public !static * *(..))


can be expressed more effectively as :


execution(
  method()
  .access($public)
  .specifier(!$static)
)



The experts need to work out the complete DSL to make life easier for the beginners.


Pointcuts can be Intrusive

If not properly designed, pointcuts can directly bite into the implementation of abstractions. Consider the following example from the classic An Overview of AspectJ paper by Kiczales et. al. :


interface FigureElement {
  void incrXY(int x, int y);
}

class Point implements FigureElement {
  int x, y;
  // ...
}



Now consider the following two pointcuts :


get(int Point.x) || get(int Point.y)


and


get(* FigureElement+.*)


Both the above pointcuts match the same set of join points. But the first one intrudes directly into the implementation of the abstraction by naming its fields, while the latter is based only on the interface. While both of them have the same effect in the current implementation, the first one violates the principle of "programming to the interface" and hence is neither modular nor scalable. While pointcuts have the raw power to cut into any level of abstraction and inject advice transparently, care should be taken to make these pointcuts honor the age-old abstraction principles of the object oriented paradigm. As Bertrand Meyer has noted about OO contracts, pointcuts should also be pushed up the inheritance hierarchy in order to ensure maximal reusability.

Jonas Boner, while talking about invasive pointcuts has expressed this succinctly in his blog :
A pointcut can be seen as an implicit contract between the target code and the artifact that is using the pointcut (could be an aspect or an interceptor).
One problem with using patterns like this is that we are basing the implicit contract on implementation details, details are likely to change during the lifetime of the application. This is becoming an even bigger problem with the popularity of agile software development methodologies (like XP, Scrum, TDD etc.), with a high focus on refactoring and responsiveness to customer ever-changing requirements.


Metadata for Expressiveness

Ramnivas Laddad has talked about metadata as a multidimensional signature and has described annotations as a vehicle to prevent signature tangling and express any data associated with your code's crosscutting concerns. While annotations make code much more readable, they are a compromise on one of the most professed principles of AOP - obliviousness. Annotations (and other mechanisms) can also be used to constrain advice execution on classes and interfaces. There have also been suggestions to have classes and interfaces explicitly restrict aspects or publish pointcuts. All of these, while publishing much more powerful interfaces for abstractions, will inherently limit the obliviousness property of AOP. See here for more details.

Use metadata to enhance the artifact being annotated, but the enhancement should be horizontal and NOT orthogonal, e.g. a domain model should always be annotated with domain level metadata and, as Jonas has rightly pointed out, it is equally important to use the Ubiquitous Language for annotating domain artifacts. Taking a cue from the example Ramnivas has cited :


@Transactional
@Authorized
public void credit(float amount);



If the method credit() belongs to the domain model, it should never be annotated with service level annotations like @Transactional and @Authorized. These annotations go into the service layer abstractions - the domain layer should contain only domain level metadata. Accordingly, the pointcut processing for the domain layer should not contain service layer functionality.
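A hedged sketch of that layering (the @BusinessRule annotation below is hypothetical, invented for illustration): the domain method carries only domain level metadata, while the infrastructure concerns live in the service layer that wraps it.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical domain-level metadata, phrased in the Ubiquitous Language
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface BusinessRule {
  String value();
}

class Account {
  private float balance;

  // Domain layer: only domain metadata here
  @BusinessRule("credit amount must be positive")
  public void credit(float amount) {
    if (amount <= 0) {
      throw new IllegalArgumentException("credit amount must be positive");
    }
    balance += amount;
  }
}

// Service layer: this is where @Transactional / @Authorized would live,
// wrapping the domain call rather than annotating it
class AccountService {
  private final Account account = new Account();

  public void credit(float amount) {
    // begin transaction, check authorization ... (infrastructure concerns)
    account.credit(amount);
    // commit ...
  }
}
```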

Tuesday, October 03, 2006

Agile Blast

Steve Yegge @ Google has blasted Agile, and InfoQ has carried a significant post on it. Rants like these sell like hot cakes and, no wonder, the last time I checked Steve's blog, I found 161 comments posted against it. Martin Fowler has posted a quiet, but convincing dictum in his bliki regarding the agile practices at ThoughtWorks. Of course, Martin's post contains no reference to Steve's Google Agile practices - but the timing is significant.

Any practice, done the wrong way is bad, and Agile is no exception. The Agile Manifesto never talks about imposition, never dictates any forceful action from the upper management - it talks about individuals, interactions and collaborations. It's never an enforcement of RigorousAgile.

We have been practicing many of the principles of agile methodology in Anshinsoft in our offshore software development model in India. To a large extent we have been quite satisfied with the results. We do *not* do pair programming, but we follow principles like customer collaboration, short iterative model of development, merciless refactoring, early builds and short release cycles. Developing in collaboration with the client team, 10,000 miles and 12 hour timezones away, these have worked out great for us.

Steve has mentioned many of the Google practices. We need to understand that Google hires its staff after a very thorough and careful screening process, has a completely different business model, and does not have to think about the red-faced, fuming client hammering late at night about the red dots on the project dashboard. So whatever Google Agile is, it cannot be applied to houses that deliver run-of-the-mill project solutions at a nickel-a-line-of-code price point.

Here are some of the other Yegge rants ..

- there are managers, sort of, but most of them code at least half-time, making them more like tech leads.


In a typical project, the project manager has to do external client management and keep all stakeholders updated on the project dashboard. Managers coding half of the time simply does not work in a large enterprise-scale development project. Well, once again it may be a Google specialization, but for people working further down the intellectual curve, it's all bricks and mortar - managers need to work collaboratively in a very strong client-facing role.

- developers can switch teams and/or projects any time they want, no questions asked; just say the word and the movers will show up the next day to put you in your new office with your new team.


A real joke when you are delivering a time critical project to your client. Again it's Google Agile, but definitely not applicable to the business model in which the lesser mortals thrive on.

- there aren't Gantt charts or date-task-owner spreadsheets or any other visible project-management artifacts in evidence, not that I've ever seen.


When you don't have the deadlines and the client manager sniffing at your project dashboard, you can indulge in creativity-unlimited - sorry folks, no place for this one too in my delivery model.

The agile methodology does not force you to use (or not use) Gantt charts or excel sheets for project management. It's all about adding flexibility, making your process easier to manage, and drifting teams away from reams of useless documentation. Agility, practiced badly, is Bad Agile - but one model does not fit all, and Google Agile is not for the general mass to follow.

Aspect Days are Here Again ..

and it's raining pointcuts in Spring. That Spring 2.0 will support AspectJ 5 is one of the most discussed topics in the blogs and forums. I have been tinkering with aspects for some time in the recent past and have also applied AOP in production environments. I have narrated that experience in InfoQ, where I had used AOP to implement application-level failover over database and MOM infrastructures. This post will be all about the afterthoughts of my continuing experiments with AOP as a first class modeling artifact in OO design.

First Tryst with Domain Aspects

I am tired of looking at aspects for logging, tracing, auditing and profiling an application. With the new AspectJ 5 and Spring integration, you can do all sorts of DI and wiring on aspects using the most popular IoC container. Spring's own @Transactional and @Configurable are great examples of AOP under the hood. However, I have always kept asking, Show me the Domain Aspects - since I always thought that in order to make aspects first class citizens in modeling enterprise applications, they have to participate in the domain model.

In one of our applications, we had a strategy for price calculation which worked with the usual model of injecting the implementation through the Spring container.


<beans>
  <bean id="defaultStrategy" class="org.dg.domain.DefaultPricing"/>

  <bean id="priceCalculation" class="org.dg.domain.PriceCalculation">
    <property name="strategy">
      <ref bean="defaultStrategy"/>
    </property>
  </bean>
</beans>


Things worked like a charm, till in one of the deployments the client came back demanding strategy failover. The default implementation continues to work as the base case, while in the event of failures we need to iterate over a collection of strategies till one comes back with a valid result. This being a one-off request, we decided NOT to change the base class and the base logic of strategy selection. Instead we chose a non-invasive way of handling the client request, by implementing the pricing strategy alternatives through a domain-level aspect.


public aspect CalculationStrategySelector {

  private List<ICalculationStrategy> strategies;

  public void setStrategies(List<ICalculationStrategy> strategies) {
    this.strategies = strategies;
  }

  pointcut inCalculate(PriceCalculation calc)
    : execution(* PriceCalculation.calculate(..)) && this(calc);

  Object around(PriceCalculation calc) : inCalculate(calc) {
    int i = 0;
    int maxRetryCount = strategies.size();
    while (true) {
      try {
        return proceed(calc);
      } catch (Exception ex) {
        if (i < maxRetryCount) {
          // failover: try the next alternative strategy
          calc.setStrategy(getAlternativeStrategy(i++));
        } else {
          // all alternatives exhausted - rethrow instead of looping forever
          throw new RuntimeException(ex);
        }
      }
    }
  }

  private ICalculationStrategy getAlternativeStrategy(int index) {
    return strategies.get(index);
  }
}



And the options for the selector were configured in the configuration xml of Spring ..


<beans>
  <bean id="strategySelector"
    class="org.dg.domain.CalculationStrategySelector"
    factory-method="aspectOf">
    <property name="strategies">
      <list>
        <ref bean="customStrategy1"/>
        <ref bean="customStrategy2"/>
      </list>
    </property>
  </bean>

  <bean id="customStrategy1" class="org.dg.domain.CustomCalculationStrategy1"/>
  <bean id="customStrategy2" class="org.dg.domain.CustomCalculationStrategy2"/>
</beans>


The custom selectors kicked in only when the default strategy fails. Thanks to AOP, we could handle this problem completely non-invasively without any impact on existing codebase.

And you can hide your complexities too ..

Aspects provide a great vehicle for encapsulating many complexities away from your development team. While going through Brian Goetz's Java Concurrency In Practice, I found a snippet that can be used to test your code for concurrency. My development team has just moved up to Java 5, and not all of them are conversant with the nuances of java.util.concurrent. The best way I could expose the services of this new utility was through an aspect.

The following snippet is a class TestHarness and is replicated shamelessly from JCIP ..


package org.dg.domain.concurrent;

import java.util.concurrent.CountDownLatch;

public class TestHarness {
  public long timeTasks(int nThreads, final Runnable task)
    throws InterruptedException {
    final CountDownLatch startGate = new CountDownLatch(1);
    final CountDownLatch endGate = new CountDownLatch(nThreads);

    for(int i = 0; i < nThreads; ++i) {
      Thread t = new Thread() {
        public void run() {
          try {
            startGate.await();
            try {
              task.run();
            } finally {
              endGate.countDown();
            }
          } catch (InterruptedException ignored) {}
        }
      };
      t.start();
    }

    long start = System.nanoTime();
    startGate.countDown();
    endGate.await();
    long end = System.nanoTime();
    return end - start;
  }
}


My target was to allow my developers to write concurrency test codes as follows ..


public class Task implements Closure {

  @Parallel(5) public void execute(Object arg0) {
    // logic
    // ..
  }
}



The annotation @Parallel(5) indicates that this method needs to be run concurrently in 5 threads. The implementation of the annotation is trivial ..


import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Parallel {
  int value();
}


The interesting part is the main aspect, which implements the processing of the annotation in AspectJ 5. Note the join point matching based on the annotation, and the context exposure used to get the number of threads for processing.


public aspect Concurrency {
  pointcut parallelExecutionJoinPoint(final Parallel par) :
    execution(@Parallel public void *.execute(..)) && @annotation(par);

  void around(final Parallel par) : parallelExecutionJoinPoint(par) {

    try {
      long elapsed =
        new TestHarness().timeTasks(par.value(),
        new Runnable() {
          public void run() {
            proceed(par);
          }
        } );
      System.out.println("elapsed time = " + elapsed);
    } catch (InterruptedException ex) {
      // ...
    }
  }
}


The above example shows how you can build some nifty tools which your developers will love to use. You can shield them from all the complexities of the implementation and let them drive the feature from annotations in their client code. Under the hood, of course, it is AspectJ doing all the heavy lifting.

In some of the future postings, I will bring out many of my encounters with aspects. I think we are in for an aspect awakening and the very fact that it is being backed by Spring, will make it a double whammy !