"This may be a mathematician's bias, but Category Theory is somewhat underwhelming because you can't really do anything with it. It's actually too abstract--you can model basically every branch of mathematics with Categories, but you can't actually prove very much about them from this perspective. All you get is a different vocabulary to talk about results that you already knew. It's also less approachable because its objects of study are other branches of mathematics, whereas most other branches have number, vectors, and functions as standard objects."

Now let me digress a bit towards personal experience. I don't have a strong math background and have only started learning category theory. I don't get most of the mathy stuff that category theorists talk about, since I cannot relate it to my programming world. But the subset that Pierce talks about, and which I can relate to when I write code, gives me an alternate perspective for thinking about functional abstractions. And, as I mentioned in my last post, category theory focuses more on the morphisms than on the objects. When I am programming in a functional language, this has a strong parallel in emphasizing how abstractions compose. Specifically, point-free style composition draws its inspiration from category theory and the way it focuses on manipulating arrows across objects.

# Focusing on Generic Abstractions

Category theory gives us Functors, Applicatives, Monads, Semigroups, Monoids and a host of other abstractions which we use extensively in our everyday programming. Or at least I try to. These have taught me to think in terms of generic abstractions. I will give you a practical example from my personal experience that served as a great eye opener towards appreciating the value of generic abstractions.

I program mostly in Scala and sometimes in Haskell. And one thing I need to do frequently (and it's not at all something unique to me) is apply a function `f` to a collection of elements (say of type `Foo`) with the caveat that the function may sometimes return no result. In Scala the obvious way to model this is for the function to return an `Option` type. Just for the record, `Option` is the equivalent of `Maybe` in Haskell and can be one of two concrete types, `Some` (indicating the presence of a value) and `None` (indicating no value). So, applying `f: Foo => Option[Bar]` over the elements of a `List[Foo]` gives me `List[Option[Bar]]`. My requirement was to convert the result into an `Option[List[Bar]]`, which would have a value only if all the `Foo`s in the `List` gave me `Some[Bar]`, and `None` otherwise.

Initially I implemented this function as a specialized utility for the `Option` data type only. But then I realized that this function could very well be applicable to many other data types like `Either`, `List` etc. And this aha! moment came courtesy of Scalaz, which has a function `sequence` that does the exact same thing and works for all `Traversable` data structures (not just `List`s) containing anything that's an instance of an Applicative Functor. The Scala standard library doesn't have generic abstractions like `Monad`, `Monoid` or `Semigroup`, and I constantly find myself reinventing them within the real world domain models that I program. No wonder I have Scalaz as a default dependency in almost all of my Scala projects these days.

So much for generic abstractions. Now the question is: do I need to know Category Theory to learn generic abstractions like Applicative Functors or Monads? Possibly no, but that's because some of the benevolent souls in the programming community have translated all those concepts and abstractions into a language that's more approachable to a programmer like me. At the same time, knowing CT certainly gives you a broader perspective. Like the above quote says, you can't pinpoint and say "I am using this from my category theory knowledge". But you can always get a categorical perspective and a common vocabulary that links all of them to the world of mathematical composition.
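To make the example concrete, here is a minimal sketch of what my original specialized utility looked like, before I discovered the generic `sequence`. The name `sequenceOption` is my own for illustration; Scalaz's `sequence` is the generic version that works over any Traversable container of any Applicative.

```scala
// Specialized sequence for Option only: succeeds with Some(values)
// iff every element is Some, otherwise yields None.
def sequenceOption[A](xs: List[Option[A]]): Option[List[A]] =
  xs.foldRight(Option(List.empty[A])) { (x, acc) =>
    // the for-comprehension short-circuits to None if either side is None
    for { v <- x; vs <- acc } yield v :: vs
  }

sequenceOption(List(Some(1), Some(2), Some(3))) // Some(List(1, 2, 3))
sequenceOption(List(Some(1), None, Some(3)))    // None
```

The generic abstraction replaces this whole function with one line, because the fold-and-combine structure depends only on `Option` being an Applicative Functor, not on anything `Option`-specific.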

# Natural Transformation, Polymorphism and Free Theorems

Consider yet another aha! moment that I had some time back when I was trying to grok Natural Transformations, which are really morphisms between Functors. Suppose I have two functors *F* and *G*. Then a natural transformation *η : F → G* is a map that takes an object (in Haskell, a type, say *a*) and returns a morphism from *F(a)* to *G(a)*, i.e.

*η :: forall a. F a → G a*

Additionally, a natural transformation also needs to satisfy the following constraint: for any *f :: A → B*, we must have

*fmap<sub>G</sub> f . η = η . fmap<sub>F</sub> f*

Consider an example from Haskell: the function `reverse` on lists, which is defined as `reverse :: [a] -> [a]`, and which we can state more precisely as `reverse :: forall a. [a] -> [a]`. Comparing `reverse` with our earlier definition of η, we have *F ≡ []* and *G ≡ []*. And since for lists `fmap` = `map`, the other criterion for a natural transformation becomes *map f . reverse = reverse . map f*. And this is the free theorem for `reverse`!
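The naturality condition is easy to check on a concrete example. A quick sketch in Scala (where `List`'s `map` plays the role of `fmap`), with an arbitrary sample function and list:

```scala
// Naturality of reverse: map f . reverse == reverse . map f
val f: Int => String = i => "#" + i
val xs = List(1, 2, 3)

val lhs = xs.map(f).reverse   // map first, then reverse
val rhs = xs.reverse.map(f)   // reverse first, then map
assert(lhs == rhs)            // both are List("#3", "#2", "#1")
```

Of course a test on one sample proves nothing by itself; the point of the free theorem is that this equation holds for *every* `f` and every list, purely because of `reverse`'s polymorphic type.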

Not only this, we can get more insight from the application of natural transformations. Note the *forall* annotation above. It literally means that η (or `reverse`) works for *any* type `a`. That is, the function is not allowed to peek inside its arguments, and its behavior does not depend on the *exact* type of `a`. We can freely substitute one type for another and yet get the same behavior from the function. Hence a natural transformation is all about polymorphic functions.