Knowing Your Onions

Hexagonal Architecture

At OB Collective, one of our founding principles is that, in various shapes and forms, we always endeavour to write loosely coupled code for our clients. This not only benefits our own client teams in the short term, in that we can deliver more with less; it also continues to benefit our clients long after we’ve left, in that they can quickly and easily test, maintain and extend the applications we deliver and hand over.

In achieving the above, no problem or indeed solution is the same; we don’t only have a hammer, and not everything looks like a nail. That said, one architectural approach we have favoured on more than one occasion in recent times is hexagonal architecture. You may also have heard this described as ports and adapters architecture. First introduced by the American computer scientist Alistair Cockburn in 2005, the core principle of this architectural approach is to separate business logic from the external dependencies of the application. By external dependencies we mean things like databases, user interfaces, message buses, etc. The upshot is that we design systems that are flexible, maintainable and easy to test. The inspiration for the term ‘hexagonal’ was apparently real-world hexagonal structures that offer multiple points of entry.

Traditionally this approach divides an application into three layers, namely the application layer, the infrastructure layer and the interface layer. The responsibility of the application layer is to contain the core business logic and coordinate the other layers. The infrastructure layer concerns itself with the external dependencies of the application, for example database repositories. The interface layer can be thought of as the edges of the hexagon. This layer is responsible for exposing the entry points of the application, such as API endpoints, CLI commands and/or web pages. Its job is to receive requests, hand them off to the application layer and present the responses that come back.
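To make ports and adapters a little more concrete, here is a minimal Kotlin sketch of those three layers; the names (OrderRepository, CheckoutService and so on) are illustrative rather than taken from any real project:

```kotlin
// Application layer: business logic written against a port that it defines.
interface OrderRepository {                               // the port (an edge of the hexagon)
    fun save(orderId: String, total: Long)
}

class CheckoutService(private val orders: OrderRepository) {
    fun checkout(orderId: String, total: Long) {
        require(total > 0) { "Order total must be positive" }
        orders.save(orderId, total)                       // the service never sees the database
    }
}

// Infrastructure layer: an adapter that plugs into the port (in-memory for brevity).
class InMemoryOrderRepository : OrderRepository {
    private val store = mutableMapOf<String, Long>()
    override fun save(orderId: String, total: Long) { store[orderId] = total }
}

// Interface layer: an entry point (a CLI command here) receives a request and hands it off.
fun handleCheckoutCommand(args: List<String>) {
    CheckoutService(InMemoryOrderRepository()).checkout(orderId = args[0], total = args[1].toLong())
    println("Checked out ${args[0]}")
}
```

The application layer owns both the business rule and the port; the infrastructure and interface layers plug into it from the outside.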

As we’ve turned the handle on several projects utilising hexagonal architecture, we have refined how we use the architecture such that our layers are more nuanced than the traditional ones discussed above. We do, however, stick strictly to the core principle of separating the business logic from everything else. We also leverage technology to enforce this overriding principle. An example of this would be in the JVM space, where we represent our layers as separate modules, each with their own Gradle build file. This approach allows us to be explicit about the relationships between the layers in an application and the external dependencies they leverage. By adopting practices such as pair programming and peer reviews, our client teams collaboratively act as gatekeepers, keeping the domain pure and ensuring it does not depend on any of the other layers in our application; they instead depend on it.
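As a sketch of what that looks like in practice (module names are illustrative, not lifted from a client project), the layers become Gradle modules and the build files spell out which layer is allowed to see which:

```kotlin
// settings.gradle.kts – one Gradle module per layer
rootProject.name = "banking-example"
include("domain", "application", "persistence", "infrastructure", "presentation")
```

```kotlin
// application/build.gradle.kts – the application layer may see the domain,
// while the domain module declares no project dependencies at all.
plugins {
    kotlin("jvm")
}

dependencies {
    implementation(project(":domain"))
}
```

Because the domain module declares no project dependencies, any attempt to reach outward from it simply won’t compile, which makes the gatekeeping considerably easier.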

The way we tend to deviate from traditional hexagonal architecture veers towards a variant known as onion architecture. This architectural approach was proposed by Jeffrey Palermo in 2008, a few years after Cockburn’s hexagonal architecture publication. Although similar to traditional hexagonal architecture, this pattern decomposes the application further (into more layers) and relies on inversion of control. The reasons for this deviation are that onion architecture arguably does a better job of separating concerns, and that dependencies flow inward (as the diagram below shows, each layer only knows about the concentric layer directly inside it), which means things should be easier to test in complete isolation.

Let’s look at the layers in the diagram above more closely to understand the architecture better.

Domain

At the core of the onion lives the domain. This layer contains all of your application’s business logic and should not depend on any of the other layers. Isolating the domain in this fashion has two obvious benefits. Firstly, we can write business code, the core logic of our application, completely independently of any infrastructure, persistence or presentation concerns. This allows us to delay decisions around these other concerns until we know enough about our wider requirements to make qualified and confident decisions about these external dependencies and technologies. The second obvious benefit is that we can test the domain in complete isolation, independent of the things that plug into it. This is particularly useful when we make significant changes elsewhere in our application (at another, outer layer of the onion), such as moving to a different type of database solution.
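As a minimal sketch of what a pure domain might look like, using the banking example that appears later in this piece (the names Account, WithdrawalResult and withdraw are illustrative):

```kotlin
// Domain layer: pure business rules, with no persistence, framework or presentation imports.
data class Account(val id: String, val balance: Long)

sealed interface WithdrawalResult {
    data class Approved(val updated: Account) : WithdrawalResult
    data class Rejected(val reason: String) : WithdrawalResult
}

fun Account.withdraw(amount: Long): WithdrawalResult = when {
    amount <= 0      -> WithdrawalResult.Rejected("Amount must be positive")
    amount > balance -> WithdrawalResult.Rejected("Insufficient funds")
    else             -> WithdrawalResult.Approved(copy(balance = balance - amount))
}
```

There are no framework imports here, so a plain unit test can exercise withdraw without spinning up anything else.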

Application

The application layer, which sometimes goes by the name of the service layer, can be thought of as the glue that bonds the domain to the outer layers that handle inputs and outputs. This layer is responsible for orchestration: it leverages domain functions and passes their results to the relevant external dependencies, such as a persistence layer that reflects the result of a domain function in a database record. The way I like to think about this layer and its responsibility is that its job is to represent the use cases of our system. For example, in a banking application it would be responsible for the withdraw use case, and it may orchestrate the calling of several domain functions to achieve this. It will then leverage the persistence layer to persist the updated state of the system off the back of this domain logic being executed.
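Continuing the banking sketch from the domain section above, a hypothetical withdraw use case might look like this; the AccountRepository port and the WithdrawFunds name are illustrative:

```kotlin
// Application layer: the withdraw use case, orchestrating the domain and a persistence port.
interface AccountRepository {                              // port implemented by the persistence layer
    fun find(id: String): Account?
    fun save(account: Account)
}

class WithdrawFunds(private val accounts: AccountRepository) {
    fun execute(accountId: String, amount: Long): WithdrawalResult {
        val account = accounts.find(accountId)
            ?: return WithdrawalResult.Rejected("Unknown account $accountId")
        val result = account.withdraw(amount)              // the domain makes the decision
        if (result is WithdrawalResult.Approved) {
            accounts.save(result.updated)                  // the persistence port records it
        }
        return result
    }
}
```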

Persistence

The role of the persistence layer is pretty much what it says on the tin: to persist the state of our system. Using the banking application as an example again, this is the layer where the repositories live that persist the state of an updated customer account balance, and so on. Logically it sits at a peer level with the infrastructure and presentation layers in terms of which ring of the onion it resides in. In practice, however, it’s often implemented as a layer in its own right as a means of separating concerns more granularly.
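Continuing the same sketch, the persistence layer supplies a concrete implementation of the AccountRepository port defined by the application layer; an in-memory map stands in for a real database here:

```kotlin
// Persistence layer: a concrete AccountRepository. In a real project this might wrap JDBC,
// JPA or a document store; an in-memory map keeps the sketch short.
class InMemoryAccountRepository : AccountRepository {
    private val store = mutableMapOf<String, Account>()
    override fun find(id: String): Account? = store[id]
    override fun save(account: Account) { store[account.id] = account }
}
```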

Infrastructure

In many implementations of onion architecture, persistence concerns live in the infrastructure layer too, and this is absolutely fine. Which approach I favour really depends on the project’s scale and complexity. If I only have one or two infrastructure concerns, say a database and a message bus, I tend to house them both in a single infrastructure layer and separate them at a package level. If the application is more complex and we have several databases, a message bus and third-party APIs that we leverage, I’ll tend to house persistence concerns in a dedicated persistence layer and let the other things live in the infrastructure layer.
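Either choice is straightforward to express at the build level; an illustrative sketch of the two Gradle layouts:

```kotlin
// settings.gradle.kts – smaller project: a single infrastructure module, split by package inside it
include("domain", "application", "infrastructure", "presentation")

// settings.gradle.kts – larger project: persistence promoted to a module of its own
// include("domain", "application", "persistence", "infrastructure", "presentation")
```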

Presentation

Also residing in the outermost layer of the onion, the presentation layer (sometimes called the UI or API layer) handles requests from some client(s), say a mobile device, and responds to them. It can be thought of as the main entry point for most applications. This layer depends on the application layer to indirectly execute domain functions, and builds responses clients can understand from the results the application layer provides it.
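Continuing the banking sketch, a framework-free presentation-layer endpoint might look like the following; the request/response types and the WithdrawEndpoint name are illustrative:

```kotlin
// Presentation layer: translates transport-level requests into application calls and back.
// Shown framework-free for brevity; in practice this might be a Spring controller or a Ktor route.
data class WithdrawRequest(val accountId: String, val amount: Long)
data class WithdrawResponse(val status: String, val newBalance: Long? = null)

class WithdrawEndpoint(private val withdrawFunds: WithdrawFunds) {
    fun handle(request: WithdrawRequest): WithdrawResponse =
        when (val result = withdrawFunds.execute(request.accountId, request.amount)) {
            is WithdrawalResult.Approved -> WithdrawResponse("OK", result.updated.balance)
            is WithdrawalResult.Rejected -> WithdrawResponse("REJECTED: ${result.reason}")
        }
}
```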

Wiring everything together, all the outer layers of the above onion architecture plug into each other in a loosely coupled way by leveraging interfaces. The general rule of thumb is that any outer layer of the onion should know only about the layer directly inside it, and only via the contract that layer provides; it need not know about any concretions of that layer.
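Pulling the banking sketch together, the wiring typically happens in a single composition root; it is the only place that needs to know about concrete implementations:

```kotlin
// Composition root: the only place that knows about concretions; every other layer sees interfaces.
fun main() {
    val accounts = InMemoryAccountRepository()         // persistence concretion
    accounts.save(Account("acc-1", 100))               // seed some state for the sketch
    val withdrawFunds = WithdrawFunds(accounts)        // application sees only the AccountRepository port
    val endpoint = WithdrawEndpoint(withdrawFunds)     // presentation sees only the use case

    println(endpoint.handle(WithdrawRequest("acc-1", 50)))
    // WithdrawResponse(status=OK, newBalance=50)
}
```

Swapping the in-memory repository for a real database implementation is then a change confined to this one place, with no impact on the domain, application or presentation layers.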

Overall, both hexagonal and onion architecture are good choices for building applications that are easy to test and extend, and where concerns are nicely separated. There are, however, drawbacks to consider with both approaches. For anyone new to the concepts there is a learning curve; it is tempting to over-engineer and add more layers of abstraction than necessary, which can make the application code harder to understand; and without good choices in terms of tech and tooling, dependency management can become an overhead as the application grows.