There is an important distinction to be made between easy and simple code. There are times when code can be both easy and simple; something like:

```scala
val total = numbers.sum
```

is both easier and simpler than:

```scala
var total = 0
for number <- numbers do
  total += number
```
However, they can also be opposing forces. As an example, mutability is often easy: say you want a `PaymentProcessor` that modifies a global variable tracking transaction IDs. This is of course very easy to do, but it leads to tightly coupled, imperative, complex (not simple) code. To understand how transaction IDs are tracked, you have to have a complete understanding of your entire app, rather than having that knowledge contained to an isolated part of it.
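To make that contrast concrete, here's a minimal sketch (all names here, like `TxnState` and `SimplePaymentProcessor`, are hypothetical) of the "easy" mutable-global version next to a "simple" version where the caller owns the state:

```scala
// The "easy" version: processors silently share and mutate global state.
object GlobalState:
  var lastTxnId: Int = 0 // hidden dependency for every processor in the app

class EasyPaymentProcessor:
  def process(amount: BigDecimal): Int =
    GlobalState.lastTxnId += 1 // effect invisible in the signature
    GlobalState.lastTxnId

// The "simple" version: the transaction-ID state is explicit.
final case class TxnState(lastTxnId: Int)

object SimplePaymentProcessor:
  // returns the new state alongside the result; nothing outside is touched
  def process(amount: BigDecimal, state: TxnState): (TxnState, Int) =
    val next = state.lastTxnId + 1
    (TxnState(next), next)
```

In the second version, everything you need to know about how IDs are tracked is visible at the call site, which is exactly the locality this post is about.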
OO (OOP; Object-Oriented Programming) and FP (Functional Programming) programmers alike want to avoid code that is easy at the expense of simplicity. I’ve found that there are massive differences in the way these paradigms go about achieving this goal. The OO style is to rely on patterns and principles, while FP relies on the type system and compiler.
## Restriction and local reasoning
The core of both of these approaches is ultimately restriction.
The dependency inversion principle of OO land states that instead of the “standard” dependency path of high level depending on low level, have both depend on abstractions, and said abstractions should in turn not depend on the details of the high/low level code. This restricts the free use of code, and instead demands the creation of new abstractions that at first don’t seem “necessary”.
Effect systems of FP land dictate that code should not, or in some languages cannot, have side effects. They dictate that effects should be values in and of themselves, not things that happen “on the side”. This restricts the ability of functions to execute any effect they want without disclosing it to the type system or broader codebase, and at first this might also seem unnecessary or cumbersome.
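A toy sketch can show what "effects as values" means (this `Thunk` type is purely illustrative; real effect systems like cats-effect's `IO` do far more): constructing the value performs nothing, and the effect only happens when someone explicitly runs it.

```scala
// A minimal "effect as a value": a description of a side effect,
// inert until unsafeRun() is called.
final case class Thunk[A](unsafeRun: () => A):
  def map[B](f: A => B): Thunk[B] = Thunk(() => f(unsafeRun()))
  def flatMap[B](f: A => Thunk[B]): Thunk[B] =
    Thunk(() => f(unsafeRun()).unsafeRun())

object Thunk:
  def delay[A](a: => A): Thunk[A] = Thunk(() => a)

// Building this value prints nothing; the effect is disclosed in the type.
val program: Thunk[Unit] =
  for
    _ <- Thunk.delay(println("hello"))
    _ <- Thunk.delay(println("world"))
  yield ()
// Only program.unsafeRun() would actually execute the prints.
```

Because `program` is just a value, it can be passed around, composed, and reasoned about before anything happens.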
The key thing that these, and similar, restrictions improve is locality of reasoning: the ability to look at some code and say definitively that some set of invariants will hold true when that code runs.
When you can establish more invariants without having to investigate implementation details, you gain the ability to reason about code without leaving your current context. Your local reasoning improves.
## Paradigm Divergence
Both OO and FP programmers tend to look at the other approach as insufficient. I’ll let my biases be known; I fall definitively into the FP camp. However, I’ll try my best to paint a fair picture of what I think the OO position is (and I have in the past been a staunch advocate of OO programming).
From the OO programmer’s perspective, the lack of rigorous patterns is seen as a massive fault. Obviously we shouldn’t have to reinvent the wheel, and after spending years using patterns it can feel as though the only way to avoid doing so is through the use of said patterns. This is a big part of why I started a series on Obsolete Design Patterns, to go over how specific patterns are addressed in FP, and how a lot of these patterns actually arise because of gaps in expressiveness in older languages.
From the FP programmer’s perspective, the lack of sufficiently advanced type systems (to allow for greater safety and broader abstractions) is seen as a similarly massive fault. The inability to encode certain abstractions (even something as simple and universal as a general `map`) leaves a large enough gap in expressiveness that you end up missing out on a bunch of more advanced combinators, which in turn means you do end up having to “reinvent the wheel” to achieve what these combinators do.
As far as I see it, what FP abstractions allow is for more complexity to be offloaded onto libraries, because the languages are definitively more high level. You can realistically see code that accumulates a `List[IO[Unit]]`, and then either `parSequence` or just `sequence` the effects based on whether you want those effects to be run in parallel. In languages where OO is the norm, effects are not pure, nor are they tracked by the type system. You can emulate the absolute basics of `IO` with functions; in Java you might have `List<Supplier<Void>>`, but you still lack many of the advantages of effect systems, namely automatic resource safety, composability through combinators, thread-safe code without the need for explicit synchronization, robust error handling, etc.
Here’s a showcase of the code I was alluding to in the last paragraph. First, how the Scala would look:

```scala
def saveSeq(users: List[User]) =
  users.traverse_(_.save)

def savePar(users: List[User]) =
  users.parTraverse_(_.save)
```
and how the Java 22 (with preview features) would look:

```java
void saveSeq(List<User> users) {
  users.forEach(User::save);
}

void savePar(List<User> users) throws InterruptedException {
  try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    for (var user : users)
      scope.fork(() -> { user.save(); return null; });
    scope.join();
  }
}
```
Java’s new structured concurrency makes this type of thing astronomically better than it has been in the past, but it’s still not a match for how good it’s been in Scala for years.
In the former function, `saveSeq`, the users are saved sequentially. In the latter function, `savePar`, the users are saved in parallel. Choosing one of these two execution models is a high level concern, and as such we have high level combinators for doing so. You might be tempted to try to write these combinators yourself, and if you reify them enough you can, but to actually implement `traverse` or `parTraverse` fully requires HKTs, which is something Java (and most non-FP languages) lack. Specifically, in this code:
```scala
trait Traverse[F[_]] extends Functor[F]
    with Foldable[F] with UnorderedTraverse[F]:
  def traverse[G[_]: Applicative, A, B](fa: F[A])(f: A => G[B]): G[F[B]]
```
there is no ability to define `F[_]` or `G[_]` (nor to require that `G` implements the typeclass/context bound `Applicative`) in most OO languages. You’d instead need to reimplement `traverse`/`parTraverse` every time you wanted to use them on a new `F`/`G` combination.
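To make that duplication concrete, here is a sketch of what a hand-rolled `traverse` for just one `F`/`G` pairing, `List` and `Option`, might look like without HKTs (the function name is mine; a different pairing like `Vector`/`Either` would need a fresh copy):

```scala
// traverse specialized to F = List, G = Option: apply an Option-producing
// function to every element, succeeding only if every element succeeds.
def traverseListOption[A, B](as: List[A])(f: A => Option[B]): Option[List[B]] =
  as.foldRight(Option(List.empty[B])) { (a, acc) =>
    for
      b  <- f(a)
      bs <- acc
    yield b :: bs
  }
```

With HKTs, the general `traverse` above subsumes this and every sibling like it in one definition.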
An OO programmer might look at the `saveSeq` and `savePar` examples and say something to the tune of “great, but `save` looks like it’s going to do some kind of db/network request to actually save the user somewhere. The `User` shouldn’t depend directly on these resources”. This is almost certainly true of the Java, as what else would it be doing if it’s not returning anything? However, in the Scala the return type tells us a lot of information. I omitted it because there are actually at least a few reasonable return types this could have that would mean different things. Of course, your editor/IDE would be able to tell you the return type even if it wasn’t spelled out explicitly in the source code. Let’s first look at what it would mean if they returned `ConnectionIO[Unit]`:
```scala
def saveSeq(users: List[User]): ConnectionIO[Unit] =
  users.traverse_(_.save)

def savePar(users: List[User]): ConnectionIO[Unit] =
  users.parTraverse_(_.save)
```
This would actually mean that the `save` implementation has no explicit dependency on the database. Instead, it’s a description of a SQL query. Like `IO`, these are just pure descriptions and not side-effectful, so nothing has actually run yet. Whatever code calls `saveSeq`/`savePar` would eventually handle explicitly running these `ConnectionIO`s against a db. This would make it trivial to instead test these queries against an in-memory DB, rather than a real one. Alternatively, we might have this return type:
```scala
def saveSeq(users: List[User]): IO[Unit] =
  users.traverse_(_.save)

def savePar(users: List[User]): IO[Unit] =
  users.parTraverse_(_.save)
```
This would instead not be a query, but an action that could be directly composed with other `IO`s and later run, so it would likely encapsulate whatever db/network logic was needed to make that happen.
In another situation, we might have these return types:
```scala
def saveSeq(users: List[User]): ReaderT[IO, DbRepo, Unit] =
  users.traverse_(_.save)

def savePar(users: List[User]): ReaderT[IO, DbRepo, Unit] =
  users.parTraverse_(_.save)
```
`save` in this case could be just a function that reads a `DbRepo` and then passes itself into some corresponding save function on that abstraction.
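A rough, dependency-free sketch of that shape (using a plain function type in place of the real `ReaderT`, and `Unit` in place of `IO[Unit]` so it runs standalone; `DbRepo` and `saveUser` are hypothetical names):

```scala
// The abstraction the reader provides: some repository that knows how to save.
trait DbRepo:
  def saveUser(u: User): Unit

final case class User(name: String):
  // "reads" the repo from the environment and hands itself over;
  // User itself has no direct dependency on any database.
  def save: DbRepo => Unit = repo => repo.saveUser(this)
```

Testing then just means supplying an in-memory `DbRepo`, which is the same payoff the `ConnectionIO` version gave us.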
The different return types above give us a lot of information about what can be done, and as a result what is probably being done, by `save`. We have far greater locality of reasoning here because of how advanced the type system is.
## So are patterns/principles useless?
I don’t think so; really, that statement is too broad to be meaningful. If we’re talking specifically about OO design principles from decades ago, then it depends on the principle/pattern. What I’ve found, at least with the GoF patterns, is that most behavioral/structural patterns are obsoleted by more expressive languages, and most creational patterns are specifically about the creation of OO objects, so are largely irrelevant in pure FP.
If you’re interested in the alternatives to a specific pattern, check out my other articles.
I think abstract principles more commonly still have value.
- SRP: while historically used to analyze class responsibilities, it can just as easily be used for function responsibilities.
- DIP: while it can still be good to make abstractions similar to traditional OO abstractions (like a record of functions instead of an interface), sometimes the abstractions are also things like higher kinded parametric polymorphism, as in having the ability to abstract over `F[_]`.
- ISP: I think this applies to FP just as much as OO.
- High cohesion/low coupling: this can apply to functions just as easily as objects.
- Polymorphism: in the context of GRASP this referred primarily to subtype/inclusion polymorphism, but ad hoc and parametric polymorphism are used all the time in FP.
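As a small illustration of the "record of functions instead of an interface" point above, here's a hedged sketch (the `Logger` record and `processOrder` are made-up names): the high-level code depends only on the record, and wiring an implementation is just constructing a value.

```scala
// A record of functions standing in for a traditional Logger interface.
final case class Logger(info: String => Unit, error: String => Unit)

// High-level code depends on the abstraction, not on any concrete logger.
def processOrder(orderId: Int, log: Logger): Unit =
  log.info(s"processing order $orderId")

// A concrete "implementation" is just a value.
val consoleLogger = Logger(
  info = s => println(s),
  error = s => Console.err.println(s)
)
```

Swapping in a capturing logger for tests requires no mocking framework, only constructing a different record.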
## Is OO dying?
Languages are learning from each other and the paradigms are getting closer and closer, so maybe the kind of OO that was done in the 90s is dying, but the same could be said for FP from the 90s.
Even something as unabashedly OO as Java has been implementing a ton of FP features in the last two decades: parametric polymorphism (generics), higher order functions, functions as values (not first class, but still), records, pattern matching, etc. The same is true for most other mainstream languages too: C#, Rust, Swift, Kotlin, Ruby, Python, you name it.
Conversely, Haskell, Idris, and other ML family languages have been taking from OO languages. Record syntax, overloaded record fields, and sometimes even function overloads without typeclasses (like in Idris) are much more common than they were in previous decades. It used to be commonplace to have code like `userName user` to retrieve a user’s name; now `user.name` is something you’ll actually see in Haskell code, which is a bigger feat than it seems for a couple of reasons:

- `.` already had meaning in Haskell, and still does, as function composition
- record definitions automatically imply functions for the fields (so duplicate naming was an issue)

These roadbumps were resolved through extensions.
I’d say lenses are also heavily inspired by OO “nested mutation”. While lenses are actually vastly superior IMO (and implemented at a library level), I’m not sure the syntax would’ve been designed in the same way without OO languages. Namely, code like:

```haskell
updateCity person = person & address . city .~ "Grayville"
```

This isn’t mutation; it’s just returning a new person with the city changed. Also, an eta-reduction here is totally acceptable and even encouraged:

```haskell
updateCity = address . city .~ "Grayville"
```

though this would probably make it less familiar to OO programmers.
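To keep this post's examples in one language, here's a minimal hand-rolled lens in Scala mirroring the Haskell above (real code would reach for a library like Monocle; every name here is illustrative):

```scala
// A lens is just a paired getter and copying setter, and lenses compose.
final case class Lens[S, A](get: S => A, set: (S, A) => S):
  def andThen[B](other: Lens[A, B]): Lens[S, B] =
    Lens(
      s => other.get(get(s)),
      (s, b) => set(s, other.set(get(s), b))
    )

final case class Address(city: String)
final case class Person(name: String, address: Address)

val address = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))
val city    = Lens[Address, String](_.city, (a, c) => a.copy(city = c))

// Analogous to `person & address . city .~ "Grayville"`: returns a new Person.
def updateCity(person: Person): Person =
  address.andThen(city).set(person, "Grayville")
```

As in the Haskell, nothing is mutated; the original `Person` is untouched.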
I do think overall, OO languages are taking more from FP languages than the inverse. FP languages are usually at the bleeding edge of language design. I think 15-30 years ago that was higher order functions, higher kinded types, pattern matching, and parametric polymorphism. Most of those have come or are coming in some similar form to OO languages. Now it’s dependent/refinement types, linear types, algebraic effects, and more. Idris and Unison are cool languages in these spaces (the former for dependent types and the latter for algebraic effects).
## FP “in the small” vs “in the large”
FP “in the small” is sort of non-negotiable now. Having to write something like:
```java
List<Integer> doubledNumbersOver5(List<Integer> nums) {
  List<Integer> newNums = new ArrayList<>();
  for (int i = 0; i < nums.size(); i++) {
    Integer doubledNum = nums.get(i) * 2;
    if (doubledNum > 5)
      newNums.add(doubledNum);
  }
  return newNums;
}
```
is seen as much worse than:
```scala
def doubledNumbersOver5(nums: List[Int]): List[Int] =
  nums.map(_ * 2).filter(_ > 5)
```
so the bigger question is “FP in the large”, and it’s inherently harder to answer larger scale questions because we can’t write small code examples that so clearly elucidate the advantages of FP. However, effect systems are incredible and central to FP “in the large”. Daniel Spiewak has a great talk on the case for effect systems that is absolutely worth the watch. While it’s in the context of cats effect in Scala, cats/typelevel in general took a lot of inspiration from Haskell, and is far more similar to Haskell than anything you’d find in Java or other OO languages.
## Alright, you have me convinced, you’re awesome.
Thanks.