Wednesday, January 30, 2008

Hitting the Sweet Spot

Do you have to be a better X (for all X mainstream) to be a successful mainstream programming language ? Smalltalk lost out to C++ back in the 80s even though Smalltalk had a purer object model (objects all the way down) with lots of powerful abstractions, espoused the virtues of garbage collection, byte codes and JIT compilation (only later to be hijacked by Java) and provided a solid refactoring-browser based IDE. C++, on the other hand, was positioned as a better C, playing the familiarity card of the curly brace syntax and syntactic compatibility with C, but with better type-safety. Even today, Smalltalk, both as a language and as a platform, leaves enough of an imprint on the market. Ruby is strongly influenced by Smalltalk - many dynamic language gurus even feel that Ruby should be made to run on the highly optimized Strongtalk VM rather than labour through carving out its own, or trying to run on the JVM through JRuby. Gemstone's object server runs Smalltalk and provides a state-of-the-art platform for developing, deploying, and managing scalable, high-performance, multi-tier applications based on business objects. And the recently announced Web programming environment from Sun Labs, the Lively Kernel, was inspired in part by the success of the Squeak Smalltalk programming environment.

Why did Smalltalk lose out to C++ ?

Eventually Java hit the sweetest of spots as a better and easier-to-use C++. Java adapted the Smalltalk VM and roped in the very features that people had rejected with Smalltalk in the 80s. The difference that Java made was that its community focused on building a strong ecosystem to support the average programmer better than any of its predecessors. This included richer libraries and frameworks, great tooling, a uniform runtime environment, JITs that generated efficient code and, of course, a very warm and supportive community. Lawrence Kesteloot makes a strong point when he emphasizes that helping the average programmer creates the strength and durability of ecosystem that a language needs to thrive.

Enterprise projects thrive on the ecosystem.

No matter how elegant your language is, unless you have a strong ecosystem that lives up to the demand / supply economics of developing enterprise software, it will not move into the ranks of BigCo projects. Even the most beautiful piece of code that you may write has a shelf life directly proportional to the skillset of the programmer who will maintain it. Of course there are plenty of good programmers working on BigCo enterprise projects, but it is the machinery assuring a copious supply of average programmers that keeps the economics ticking.

And only one language has so far been able to create this ecosystem !

Monday, January 21, 2008

Scala can make your Java Objects look smarter

It's really a strange situation out there. On one hand, people are advocating polyglot programming over the JVM or CLR, and on the other hand, the same set of advocates are busy protecting their own aficionados with great care, wisdom and frequent aggression, in the race for leadership as the dominant linguistic force in tomorrow's application development stack. Groovy or JRuby, Grails or Rails, Java or Scala - the blogosphere today has started a combinatorial explosion of mutual vilification programs. The most surprising part of this series is that almost everyone is talking about X or Y, either in the stable layer or in the DSL layer. My question is: why not X AND Y ?

Ola Bini talks about a layered architecture, with a stable, typesafe, performant kernel, providing the foundation for a malleable DSL-based end-user API. To me this makes a lot of sense, at least in the context of the large codebase of existing Java applications. Language adoption in the mainstream has traditionally been evolutionary and for quite some time we will see lots of layers and wrappers being written and exposed as flexible APIs in the newer languages, over today's mainstream codebases acting as the so-called kernel.

Java is still the most dominant of these forces in application development, particularly in the enterprise segment. And given the degree of penetration that Java, the language, has enjoyed over the last decade or so, coupled with the strong ecosystem that it has been able to create, it will possibly remain the most potent candidate for what Ola Bini calls the kernel layer.

With more and more minds converging towards new languages on the JVM, albeit in blogs or toy applications, I am sure many developers have already started thinking about polyglot programming paradigms. In an existing application, with the basic abstractions implemented in Java, can we use one of its newer cousins to design more dynamic APIs ? With the toolsets and libraries still in a nascent state of evolution, it will be quite some time (if ever at all) before some BigCo decides to develop a complete enterprise application in JRuby or Scala or Groovy. After all, the BigCos are least excited by Ruby evals, Groovy builders or Scala type inferencing. It is the ecosystem that matters to them, it is the availability of programmers that counts for them, and it is the comfort factor of their IT managers that calls the shots and decides on the development platform.

Anyway, this is not an enterprise application. I tried to build a layer of Scala APIs on top of some of the utilitarian Java classes in a medium sized application. It was fun, but more importantly, it showed how better APIs evolve easily once you can work with more powerful interoperable languages on a common runtime.

Consider this simplified example Java class, which has worked so faithfully for my client over the last couple of years ..


// Account.java
public class Account {
  private List<String> names;
  private String number;
  private List<Address> addresses;
  private BigDecimal interest;
  private Status status;
  private AccountType accountType;

  public Account(String name, String number, Address address) {
    //..
  }

  //.. standard getters, along with addName() / addAddress() mutators

  public void calculate(BigDecimal precision, Calculator c) {
    interest = c.calculate(this, precision);
  }

  public boolean isOpen() {
    return status.equals(Status.OPEN);
  }
}



Nothing complex, usual verbose stuff, encapsulating the domain model that our client has been using for years ..

I thought it may be time for a facelift with some smarter APIs for the clients of this class, keeping the core logic untouched.

Let us have a Scala class which will act as an adapter to the existing Java class. Later we will find out how some of the magic of Scala *implicits* enables us to use the adaptee seamlessly with the adaptor.

// scala class: RichAccount.scala
// takes the Java object to construct the rich Scala object
class RichAccount(value: Account) {
  //..
}


Scala collections are much richer than Java ones - hence it makes sense to expose the collection members as Scala Lists. Later we will find out how the client can use these richer data structures to cook up some more dynamic functionalities.


// BufferWrapper comes from scala.collection.jcl (Scala 2.7)
import scala.collection.jcl.BufferWrapper

class RichAccount(value: Account) {
  //..
  def names =
    (new BufferWrapper[String] {
      def underlying = value.getNames()
    }).toList

  def addresses =
    (new BufferWrapper[Address] {
      def underlying = value.getAddresses()
    }).toList

  def interests =
    (new BufferWrapper[java.math.BigDecimal] {
      def underlying = value.getInterests()
    }).toList
}
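As an aside, BufferWrapper lives in the old scala.collection.jcl package of Scala 2.7; on later releases (2.8 onwards) the same Java-to-Scala conversion is done with JavaConverters. Here is a minimal self-contained sketch under that assumption - the Account below is a stub standing in for the real Java class, kept just rich enough for the demo ..

```scala
object RichAccountDemo {
  import scala.collection.JavaConverters._

  // stub standing in for the Java Account class (hypothetical, for illustration)
  class Account(ns: java.util.List[String]) {
    def getNames(): java.util.List[String] = ns
  }

  class RichAccount(value: Account) {
    // asScala wraps the Java list; toList copies it into an immutable Scala List
    def names: List[String] = value.getNames().asScala.toList
  }

  def main(args: Array[String]): Unit = {
    val acc = new Account(java.util.Arrays.asList("debasish", "maulindu"))
    println(new RichAccount(acc).names)  // prints List(debasish, maulindu)
  }
}
```

The net effect is the same as the BufferWrapper version above: the Java collection stays where it is, and the Scala side gets a rich immutable List view of it.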



Now with the properties exposed as Scala lists, we can cook up some more elegant APIs, using the added power of Scala collections and comprehensions.


class RichAccount(value: Account) {
  //..as above

  // check if the account belongs to a particular name
  def belongsTo(name: String): Boolean = {
    names exists (s => s == name)
  }
}



Then the client code can use these APIs as higher order functions ..


// filter out all accounts belonging to debasish
accounts filter (_ belongsTo "debasish") foreach (a => println(a.getNames()))



or for a functional variant of computing the sum of all non-zero accrued interest over all accounts ..


accounts.filter(_ belongsTo "debasish")
        .map(_.calculateInterest(java.math.BigDecimal.valueOf(0.25)))
        .filter(_.signum != 0)
        .foldLeft(java.math.BigDecimal.ZERO)(_.add(_))



This style of programming, despite the chaining of function calls, is very intuitive to the reader (expression oriented programming), since it represents the way we think about the computation within our mind. I am sure your clients will love it, and the best part is that you still have your tried-and-tested Java objects doing all the heavy-lifting at the backend.

Similarly, we can write some friendly methods in the Scala class that act as builder APIs ..


class RichAccount(value: Account) {
  //..as above

  def <<(address: Address) = {
    value.addAddress(address)
    this
  }

  def <<(name: String) = {
    value.addName(name)
    this
  }
}



and which allows client code of the following form to build up the name and address list of an account ..

acc << "shubhasis" <<
    new Address(13, "street_1n", "700098") <<
    "ashis"


Scala Implicits for more concise API

Now that we have some of the above APIs, how can we ensure that the Scala class really serves as a seamless extension (or adaptation) of the Java class ? The answer is the *implicit* feature of the Scala language. Implicits offer seamless conversions between types and make it easy to extend third party libraries and frameworks. Martin Odersky has a nice writeup on this feature in his blog. In this example we use the same feature to provide an implicit conversion function from the Java class to the Scala class ..

implicit def enrichAccount(acc: Account): RichAccount =
    new RichAccount(acc)


This defines the implicit conversion function which the compiler transparently uses to convert Account to RichAccount. So the client can now write ..


// instantiate using Java class
val myAccount = new Account("debasish", "100", new Address(12, "street_1", "700097"))



and watch (or feel) the compiler transparently converting the Java instance to a Scala instance. The client can now use all the rich APIs that the Scala class has cooked up ..

// use the functional API of Scala on this object
myAccount.names.reverse foreach(println)


We can also use Scala implicits more effectively and more idiomatically to make our APIs smarter and extensible. Have a look at the API for interest calculation in the Java class Account :

public void calculate(BigDecimal precision, Calculator c) {
  interest = c.calculate(this, precision);
}


Here Calculator is a Java interface, whose implementation will possibly be injected by a DI container like Spring or Guice. We can make this a smarter API in Scala and possibly do away with the requirement of using a DI container.


class RichAccount(value: Account) {
  //..as above

  def calculateInterest(precision: java.math.BigDecimal)
           (implicit calc: Calculator): java.math.BigDecimal = {
    value.calculate(precision, calc)
    value.getInterest()
  }
}



Here we use the nice curry syntax of Scala, so that the client code can look nice, concise and intuitive ..


val accounts = List[Account](..)
accounts filter (_ belongsTo "debasish") foreach (a =>
    println(a.calculateInterest(java.math.BigDecimal.valueOf(0.25))))



Note that the Calculator parameter in calculateInterest() has been declared implicit. Hence we can do away with specifying it explicitly, so long as we have an implicit definition in scope. The client code just has to provide the following declaration, some place accessible to its APIs ..

implicit val calc = new DefaultCalculator


and we need no DI magic for this. It's all part of the language.
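To see the whole mechanism end to end, here is a minimal self-contained sketch of implicit-parameter based injection. The Calculator, DefaultCalculator and the simplified account below are stand-ins for the real classes, reduced to one method each for illustration ..

```scala
object ImplicitCalcDemo {
  // stand-in for the Java Calculator interface
  trait Calculator {
    def calculate(principal: java.math.BigDecimal,
                  precision: java.math.BigDecimal): java.math.BigDecimal
  }

  class DefaultCalculator extends Calculator {
    def calculate(principal: java.math.BigDecimal,
                  precision: java.math.BigDecimal): java.math.BigDecimal =
      principal.multiply(precision)
  }

  class RichAccount(principal: java.math.BigDecimal) {
    // the Calculator argument is implicit: the compiler fills it in
    // from any implicit value of type Calculator in scope
    def calculateInterest(precision: java.math.BigDecimal)
                         (implicit calc: Calculator): java.math.BigDecimal =
      calc.calculate(principal, precision)
  }

  // the one-line "wiring" - no container needed
  implicit val calc: Calculator = new DefaultCalculator

  def main(args: Array[String]): Unit = {
    val acc = new RichAccount(java.math.BigDecimal.valueOf(1000))
    // no second argument list: calc is picked up implicitly
    println(acc.calculateInterest(java.math.BigDecimal.valueOf(0.25)))  // prints 250.00
  }
}
```

Swapping DefaultCalculator for another implementation is just a matter of changing the single implicit val - which is exactly the role the DI container's XML used to play.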

Spice up the class with some control abstractions

Finally we can use higher order functions of Scala to define some nice control abstractions for your Java objects ..


class RichAccount(value: Account) {
  //..as above

  def withAccount(accountType: AccountType)(operation: => Unit) = {
    if (!value.isOpen())
      throw new Exception("account not open")

    // other validations

    if (value.getAccountType().equals(accountType))
      operation
  }
}



The method is an equivalent of the Template Method pattern of OO languages. But using functional Scala, we have been able to templatize the variable part of the algorithm as a higher order function without the additional complexity of creating one more subclass. The client code can be as intuitive as ..


a1.withAccount(AccountType.SAVINGS) {
  println(a1.calculateInterest(java.math.BigDecimal.valueOf(0.25)))
}
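The reason no subclass is needed is the by-name parameter. Stripped of the account details, the mechanism looks like this (the names here are made up for illustration, not part of the API above) ..

```scala
object ControlAbstractionDemo {
  // 'operation' is a by-name parameter: the block at the call site is not
  // evaluated when the method is called, only when (and if) the body uses it.
  // Returns true if the operation actually ran.
  def whenOpen(isOpen: Boolean)(operation: => Unit): Boolean =
    if (isOpen) { operation; true } else false

  def main(args: Array[String]): Unit = {
    var ran = 0
    whenOpen(isOpen = true)  { ran += 1 }   // block runs
    whenOpen(isOpen = false) { ran += 1 }   // block is never evaluated
    println(ran)  // prints 1
  }
}
```

In Java, templatizing the variable step like this would mean an abstract method plus a subclass (or an anonymous inner class) per variation; here the closure passed in braces is the whole "subclass".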



As I mentioned in the beginning, I think this layering approach is going to be the normal course of evolution in mainstream adoption of other JVM languages. Java is so dominant today, not without reason. Java commands the most dominant ecosystem today - the toolsets, libraries, IDEs and the much-familiar curly brace syntax have contributed to the terrific growth of Java as a language. And this domination of the Java language has also led to the evolution of Java as a platform. Smalltalk may have been a great platform, but it lost out mainly because developers didn't accept Smalltalk as a mainstream programming language. Now that we have seen the evolution of so many JVM languages, it is about time we experiment with polyglot paradigms and find our way to the next big language set (yes, it need not be a single language) of application development. For so long we have been using Java as the be-all and end-all language in application development. Now that we have options, we can think about using the better features of other JVM languages to offer a smarter interface to our clients.

Monday, January 14, 2008

Better Application Configuration using Spring Meta-Annotations

One of the common allegations against Spring is that it promotes XML abuse. Since 2.0, Spring has been putting features in the DI container that give developers options to minimize XML configuration and promote the use of Java 5 annotations. Configuration parameters that are truly external and need to change across deployments - viz. driver pathname, URL base, file names etc. - are definite candidates for property files or XML, while mappings that deal with wiring of application components are better handled through a mechanism that enforces more type-safety than string literals in XML. With the recent releases of Spring, we are getting more and more options towards better configuration of the DI container.

This post talks about my experience with the evolution of better configuration options in Spring 2.5 in one real life application. The Spring folks have now realized that programmatic configuration, backed by the type-safety of the Java language, has definite advantages in encouraging code modularity, unit testability and refactorability. Modularization is one of the key focal areas in application construction and integration today, and modules based on a type safe programming language yield a better end result than ones based on XML files.

In one of my applications, I have the following implementation of a Strategy pattern that models the interest calculation algorithm for various types of accounts. The algorithm varies depending upon some of the attributes of the account. Here is a simplified view of things ..


public interface InterestCalculation {
  BigDecimal calculate(final Account account);
}

public class NormalCheckingInterestCalculation
    implements InterestCalculation {
  //..
}

public class PriorityCheckingInterestCalculation
    implements InterestCalculation {
  //..
}

public class NormalSavingsInterestCalculation
    implements InterestCalculation {
  //..
}

public class PrioritySavingsInterestCalculation
    implements InterestCalculation {
  //..
}



One of the issues with earlier versions of Spring was that the best (or the most recommended) practices espoused writing XML not only to declare the beans for every concrete implementation, but also for every collaboration that each takes part in. Hence my XML goes on increasing in addition to the Java code. One available option to reduce the growing chunks of XML is autowiring which, being implemented at a very coarse level of granularity in earlier versions of Spring, seemed too magical to me to use in production code.

Spring offers options for modularizing XML configurations and implementing hierarchical application contexts. But still I think the recent thoughts towards modularization using typesafe metadata holds more promise. It is good that initiatives like Spring JavaConfig are gaining in importance. When we speak of modularization of application components, one of the very important aspects in it is composition of modules. And XML cannot be a viable option towards scalable module composition. Guice has been doing some promising stuff with modules and composition, but that is another story for another post.

Meanwhile in the Spring land ..

Spring 2.5 implements autowiring at a much finer level of granularity that allows developers to provide explicit controls on the matching closure. Using annotations like @Qualifier, I can control the selection of candidates amongst the multiple matches. However, the default usage of @Qualifier allows only specification of bean names, which once again is in the form of string literals and hence susceptible to typos, usual type-unsafety kludges and refactoring unfriendliness. The real treat, however, is the feature that enables you to use @Qualifier as a meta annotation to implement your own custom qualifiers.

Consider how the new autowiring features in Spring 2.5 improve upon the above example and allow explicit declarative configuration by using fully type-safe annotations in place of XML.

Autowiring based on Annotations

Spring 2.5 allows you to define your own annotations that can act as classifiers for matching candidates in autowiring. For the example above, the various strategies for interest calculation are based on 2 axes of variation - the account type and the customer type. With the earlier variant of using XML as the configuration metadata, these business rules were buried deep within the implementing classes only, and dependency injection was done using explicit bean names as string literals. Here is an annotation that defines the classifiers in Spring 2.5 ..


@Target({ElementType.TYPE, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface AccountQualifier {
  AccountType accountType() default AccountType.CHECKING;
  CustomerType customerType() default CustomerType.NORMAL;
}



Note the usage of @Qualifier as the meta-annotation to define custom domain specific annotations.

Now annotate all injected instances of the strategy interface with the annotation @AccountQualifier. The wiring will be done by Spring based on the matching qualifiers.


public class InterestCalculator {
  @Autowired
  @AccountQualifier(
      accountType=AccountType.CHECKING,
      customerType=CustomerType.NORMAL
  )
  private InterestCalculation normalChecking;

  @Autowired
  @AccountQualifier(
      accountType=AccountType.SAVINGS,
      customerType=CustomerType.PRIORITY
  )
  private InterestCalculation prioritySavings;

  //..
}



In the above snippet, the instance normalChecking will be wired with an instance of NormalCheckingInterestCalculation, while prioritySavings will get an instance of PrioritySavingsInterestCalculation. With fine grained autowiring in Spring 2.5, the explicit, declarative, business-rule-based configuration comes as part of the metadata - for free!

Now the configuration XML does not have to specify the collaborators. The size goes down, though still we need to specify each of the custom qualifiers associated with the concrete implementation classes of the strategy.


<beans>
    <context:annotation-config/>

    <bean class="org.dg.biz.NormalCheckingInterestCalculation">
        <qualifier type="AccountQualifier">
            <attribute key="accountType" value="CHECKING"/>
            <attribute key="customerType" value="NORMAL"/>
        </qualifier>
    </bean>

    <bean class="org.dg.biz.NormalSavingsInterestCalculation">
        <qualifier type="AccountQualifier">
            <attribute key="accountType" value="SAVINGS"/>
            <attribute key="customerType" value="NORMAL"/>
        </qualifier>
    </bean>

    <bean class="org.dg.biz.PriorityCheckingInterestCalculation">
        <qualifier type="AccountQualifier">
            <attribute key="accountType" value="CHECKING"/>
            <attribute key="customerType" value="PRIORITY"/>
        </qualifier>
    </bean>

    <bean class="org.dg.biz.PrioritySavingsInterestCalculation">
        <qualifier type="AccountQualifier">
            <attribute key="accountType" value="SAVINGS"/>
            <attribute key="customerType" value="PRIORITY"/>
        </qualifier>
    </bean>
    
    <bean id="interestCalculator" class="org.dg.biz.InterestCalculator" />
</beans>



Classpath AutoScan for Managed Components - Even less XML

Spring 2.5 makes it possible to do away with explicit declarations of the concrete implementation classes in the XML file by using the auto-scan feature. This allows the DI container to auto-scan the classpath and detect all wiring candidates based on user specified filters. In our case, we already have the metadata as the selection criteria - hence we can use the same annotation @AccountQualifier as the basis for autoscanning of managed components. So annotate the implementation classes with the appropriate metadata ..


@AccountQualifier(accountType=AccountType.CHECKING, customerType=CustomerType.NORMAL)
public class NormalCheckingInterestCalculation
    implements InterestCalculation {
  //..
}

@AccountQualifier(accountType=AccountType.CHECKING, customerType=CustomerType.PRIORITY)
public class PriorityCheckingInterestCalculation
    implements InterestCalculation {
  //..
}

// similarly the other classes



and include the final snippet in XML that does the last bit of trick to auto-scan the classpath to find out matching entries based on the annotations. The entire erstwhile blob of XML turns into the following snippet of a few lines ..


<context:annotation-config/>

<bean id="interestCalculator" 
      class="org.dg.biz.InterestCalculator"/>

<context:component-scan base-package="org.dg.biz" use-default-filters="false">
  <context:include-filter type="annotation" expression="org.dg.biz.AccountQualifier"/>
</context:component-scan>



That's it! We do not need to declare any of the above beans explicitly for autowiring.

One last point .. in the above example we have used @Qualifier as a meta-annotation. Hence our custom domain specific annotation @AccountQualifier class still has a Spring import of @Qualifier. In keeping with Spring's usual promise of non-intrusiveness, there is a way out of it. If you do not want a Spring dependency in your custom annotation @AccountQualifier, you can do that as well, by adding the following snippet to the above 5 lines of XML ..


<bean id="customAutowireConfigurer" 
      class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer">
  <property name="customQualifierTypes">
    <set>
      <value>org.dg.biz.AccountQualifier</value>
    </set>
  </property>
</bean>






P.S. Using the same annotation for autowiring and autodetection of managed components does not work in Spring 2.5 because of a bug. Mark Fisher of SpringSource pointed this out to me and verified that it works in Spring 2.5.1. I upgraded my application and things worked fine.

Monday, January 07, 2008

Language Explorations on the JVM - An Application Developer's perspective

Sometime ago I had reported on our first experience of using Rhino scripting in a Java EE application for a large client. It was exactly the kind of thing Ola Bini suggests in his post on language explorations. Some modules of the application needed the dynamism, were required to be hot swappable and customizable by the domain users, and the compilation cycle was getting in the way of meeting these requirements through the standard server side language. We went for Rhino scripting for all such controllers, using the new scripting engine available for executing JavaScript within the JVM.

Since that application has been successfully deployed, we have been fiddling around with some more options towards polyglotism. This post is a brief summary of some of the languages / language bridges we explored in the process. All of what we did so far has been on the JVM as the underlying Polyglot platform - we have not yet explored anything on the .NET world.

Web controllers are areas which may need a lot of dynamism, since they deal with user interactions, page flows, stateful storage across requests and many other control flow structures for realizing a complex use case. Spring Web Flow provides one viable option for modeling this. Another option from the scripting world is Rhino in Spring, which integrates the Mozilla Rhino JavaScript interpreter with the Spring Framework. The value add is to offer the user the flexibility of a dynamic language to model the dynamic parts of the application on the Java platform, while integrating with the dependency injection principles of the Spring framework. Spring also offers nice support for plugging in managed script based controllers in multiple languages - this will surely provide more options towards the evolution of polyglot programming in today's applications.

Another area where we explored the possible usage of an expressive language is the configuration of an application. Applications today mostly use XML based configurations, which feel too noisy for human consumption. SISC offers a lightweight Scheme scripting engine atop the JVM and comes bundled with a small footprint of around 230 KB. I had blogged before on using Scheme as an executable XML :
In SISC bridging is accomplished by a Java API for executing Scheme code and evaluating Scheme expressions, and a module that provides Scheme-level access to Java objects and implementation of Java interfaces in Scheme.

Talking about what Ola Bini calls the "stable layer", I fully agree that static type safety helps here, since the entire application infrastructure will be built upon this layer. Till today Java is my #1 choice as the language and Spring is my only choice as the framework for this layer. I have talked on this a number of times before, but I guess it is worth repeating that I love the non-intrusiveness of Spring as far as declarative programming on the JVM is concerned. As it stands now, I will not forego Spring if I am developing on the JVM platform.

It will be really interesting to see how Scala shapes up as a potential candidate for this layer. Scala is a feature rich language with an advanced type system, nice syntax, less verbosity and more elegance than Java. Where Scala lags is in tooling, documentation and industry patronage, all of which can improve as more and more users join the community.

In the domain layer, most applications rely on pure Java to model business rules. As Ola has mentioned, this layer is a strong candidate for DSL based implementation. Irrespective of what language(s) you use to implement your DSL, the business application rules should always be based on the DSL only. My feeling is that in today's scenario, Java is not really an ideal language to design a DSL. Hence we tend to find almost all applications implementing the domain layer at lower levels of abstraction. This makes the domain layer of today more verbose and less maintainable.

Powerful and expressive languages with conciseness of syntax are better fit for designing DSLs. While JRuby and Scala make suitable candidates for designing DSLs for the domain layer, I think the static typing of Scala makes it a better fit here. I may be biased, but when I am thinking of reusable API design to be used by big teams, somehow static typing (possibly done better than Java) makes me more comfortable. However, considering the state of enterprise software development today, there is a significant entry barrier for average programmers to both Scala and JRuby. Idiomatic Scala or Ruby is primarily based on functional paradigms, something which is still not intuitive to a Java programmer today. With most of today's new generation languages embracing FP, this may be the single most deciding factor that will determine the amount of acceptability that polyglot programming will find within the behemoth called enterprise software development. But there is no doubt that a well designed DSL using languages like Scala or JRuby will find tomorrow's domain model at a much higher level of abstraction than what it is today.

Tuesday, January 01, 2008

Hello 2008! Will Java Strike Back ?

Wish I could start the new year blogging on a more positive note!

Two recent articles by InfoWorld don't look inspiring if you are a Java programmer. Particularly if you are one of the Java-only programmers constructing enterprise software for one of the BigCos, these lines do not sound like music.

Notwithstanding the fact that Java is syntactically verbose and lacks the elegance of Ruby or the conciseness of Haskell, Java has so far delivered on its promise for developing performant enterprise applications, where you look for a statically typed, fast, stable language with a strong community contributing to zillions of open source libraries and frameworks.

But since the release of Java 6, language design in Java has clearly lacked strong leadership - whiteboards and community voting help in taking the pulse, but you need a strong leader to drive the best features into future versions of the language.

Developers who seek joy in programming have been moving away from Java to the rails of Ruby, Erlang and Haskell. The bottom line is that we need to make Java more productive, and the only way to do so is to add features that reduce the verbosity of Java code and encourage developers to design more powerful abstractions in the language. On the contrary, the recent trend in the Java development ecosystem has been towards adding frameworks to the development stack to make the round peg fit the square hole.

Some of the experts of the Java language suggest moving to other languages on the JVM (a la Scala, JRuby) if you want more powerful control abstractions. In that case, what will happen to the tonnes of Java code ruling today's enterprise ?

Will Java be the new COBOL ?