Using RESTful web services to implement a distributed system is an idea that has been around for many years now — Roy Fielding's thesis on the matter was published in 2000 — but over the last few years it has gradually gained popularity and is now almost indispensable. I say this because ASP.NET Web API has entirely replaced WCF, and Microsoft has long played the role of broom wagon in the internet technology race.

As with any new trend, a question that often comes up is "Are we doing it right?".

In response, Leonard Richardson has developed a scale for measuring the maturity, or right(eous)ness, of a web service. Martin Fowler has written a good article explaining the different levels of this scale. According to this measure, the final level of maturity is attained when the web service in question serves resource representations as hypermedia, and clients leverage this to navigate application state. This is the HATEOAS constraint.

One benefit of complying with HATEOAS is that a client needs very little out-of-band information in order to interact with the service: the URIs of the service's entry points, and some knowledge of the semantics of the link relations it references. This comes with a perk: if clients really adhere to this, that is, they don't embed any URIs other than entry points, the service can change all other URIs at will without breaking clients.
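To make this concrete, here is a minimal sketch of a client that hardcodes only the entry-point URI and reaches everything else through link relations. The representations, relation names, and URIs are all hypothetical, and in-memory objects stand in for HTTP responses:

```javascript
// Hypothetical hypermedia representations, keyed by URI, standing in for
// what a real client would retrieve over HTTP.
const representations = {
  "/contacts": {
    // The server chooses this URI; it is opaque to the client and free to change.
    _links: { "contact:bob": { href: "/x/7f3a" } }
  },
  "/x/7f3a": {
    name: "Bob",
    _links: { addresses: { href: "/x/7f3a/addr" } }
  }
};

// The only URI the client knows out of band is the entry point.
const ENTRY_POINT = "/contacts";

// Follow a link relation from a representation; the client never builds URIs itself.
function follow(representation, rel) {
  const link = representation._links[rel];
  if (!link) throw new Error(`No link relation '${rel}'`);
  return representations[link.href];
}

const bob = follow(representations[ENTRY_POINT], "contact:bob");
console.log(bob.name); // "Bob"
```

Because the client navigates by relation name rather than URI template, the server could rename /x/7f3a to anything else without breaking it.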

To take advantage of these benefits, the number of entry points should be kept low. How low? I don't know (big grin). So far, I haven't seen a book or article on the subject that gives any guidelines for this matter.

In any case, HATEOAS helps us get RESTful web services right, and provides some rules for how clients consume them. However, there are still a few grey areas.

The Problem

Web services can be consumed by different types of clients, from a desktop application for a human end-user to an automated system hosted somewhere on the Internet. In particular, the front-end of an application may serve as a façade to several back-end subsystems implemented as microservices. If this front-end is web-based, it needs to define its own namespace of URIs to which it responds. In order to attain the lofty heights of HATEOAS, how should one organise this namespace?

For example, consider an application that manages your contact information. The back-end could be just one web service with the following API:

GET     /contacts                                   Get a list of all contacts
POST    /contacts                                   Create a new contact
GET     /contacts/{id}                              Get the details of a contact
PUT     /contacts/{id}                              Edit the details of a contact
DELETE  /contacts/{id}                              Delete a contact
GET     /contacts/{id}/addresses                    Get a list of addresses for a contact
POST    /contacts/{id}/addresses                    Add a new address to a contact
GET     /contacts/{id}/addresses/{address-type}     Get a specific address for a contact

However, if the full agility of HATEOAS is to be achieved, not all of these URIs should be exposed as entry points. In this example, the /contacts URI could be the single entry point.
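As an illustration, the representation served from the /contacts entry point might look like the following. This is a hypothetical HAL-style sketch; the actual media type, relation names, and URIs are entirely up to the service:

```json
{
  "_links": {
    "self": { "href": "/contacts" }
  },
  "_embedded": {
    "contacts": [
      {
        "name": "Bob",
        "_links": {
          "self": { "href": "/contacts/42" },
          "addresses": { "href": "/contacts/42/addresses" }
        }
      }
    ]
  }
}
```

A client that reaches Bob's addresses by following the addresses relation never needs to know the /contacts/{id}/addresses template.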

So what of the front-end that uses this web service?

I have searched far and wide for answers to this question.

Many of the answers I found mapped front-end URIs such as http://front-end/contacts/Bob/addresses/home to back-end URIs such as http://back-end/contacts/Bob/addresses/home, with the back-end URI template embedded in the source code of the front-end. This disregards the fact that these URIs are not entry points.

So how do we do this right?

After measuring these answers against the Richardson Maturity Model, I have come to the conclusion that three of them comply with HATEOAS, and that the difference between them reduces to choosing the multiplicity of entry points for the façade and for the subsystems.

             Façade entry points   Subsystem entry points
Solution 1   single                single
Solution 2   multiple              single
Solution 3   multiple              multiple

The next sections of this article describe each of these solutions in turn, and provide insights into the tradeoffs that they carry.

Solution 1: single entry points

In this solution, the façade and the subsystems behind it each have a single entry point. Other resources that a subsystem exposes must be accessed by traversing links, using only knowledge of link relation names and the semantics they carry. The façade exhibits the same trait. For example, if we're talking about a user-facing website, it would be a single-page application with no deep links.

We can see the obvious tradeoff here: in the name of HATEOAS, we sacrifice the ability to bookmark a useful subpage for later. For instance, while browsing an e-commerce website, I might want to bookmark a few interesting items; or I might be in the middle of checking out and, unsure whether to complete the purchase, bookmark the URI of the current step so I can come back later and finish the process. Yet none of these scenarios is possible when there is only one entry point.

This tradeoff can be mitigated by adding useful links to the representation of the entry point. In fact, many websites do this: Amazon, for example, allows users to add items to a personal wish list and then access that list from the home page.

Also, this setup mirrors the behavior of most graphical desktop applications. Users just double-click on the application's icon and it opens in a standard state. It is unusual to be able to launch a wizard in the application, fill out some of the pages, then create a link on the desktop that allows you to come back at some other time and finish the remaining steps.

Solution 2: multiple façade entry points, single subsystem entry point

This solution allows users to bookmark any façade URIs, but the façade itself only uses one entry point to access all subsystem resources.

How would this work in the case of a contact manager? For example, if the user bookmarked the URI http://front-end/contacts/Bob/addresses/home and accessed it later, the façade might make the following requests to the back-end:

  • GET the entry point http://back-end/contacts, and find the link for the contact named Bob;
  • GET that contact's URI, and find the link for its addresses;
  • GET the addresses URI, and find the link for the home address;
  • GET the home address URI.

The Traverson JavaScript library simplifies these kinds of traversals.

It has inspired an API provided as part of the Spring HATEOAS library.
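In the same spirit, here is a sketch of the kind of relation-chain traversal such libraries perform, written against in-memory stand-ins rather than real HTTP calls (the relation names and URIs are hypothetical):

```javascript
// In-memory stand-ins for back-end responses, keyed by URI.
const backEnd = {
  "/contacts": { _links: { "contact:Bob": { href: "/c/42" } } },
  "/c/42":     { _links: { addresses:     { href: "/c/42/a" } } },
  "/c/42/a":   { _links: { home:          { href: "/c/42/a/h" } } },
  "/c/42/a/h": { type: "home", street: "1 Main St" }
};

// Start from the entry point and follow a chain of link relations,
// issuing one (simulated) GET per hop; this is where the chattiness comes from.
function traverse(entryPoint, rels) {
  let resource = backEnd[entryPoint]; // GET the entry point
  for (const rel of rels) {
    resource = backEnd[resource._links[rel].href]; // GET the next resource
  }
  return resource;
}

const home = traverse("/contacts", ["contact:Bob", "addresses", "home"]);
console.log(home.street); // "1 Main St"
```

Note that resolving one bookmarked façade URI cost four back-end requests.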

This solution makes the façade very chatty, and therefore trades bandwidth for bookmarkable URIs and fully mature RESTful back-end services.

Solution 3: multiple entry points

But in the end, who cares? HATEOAS is for purists and hypothetical utopian clients. Let's just go back to embedding URIs in clients and make everything easier. (thumbs up)(big grin)

This is not what my third solution is about!

Rather, it consists of exposing some deeper back-end URIs as valid entry points, but not all of them. With HATEOAS, we're allowed to have more than one entry point; we just need to keep it to a few. How few? That's up to you. (wink)

To help with your decision, let's look at another benefit of relying on link relations: their presence or absence carries meaning. It hints that the client can or cannot make that transition. This could be determined by several factors: the state of the resource, the user's permissions, to name a couple. These links can be quite volatile, and thus should not be bookmarked. However, you can probably identify a subset of your URIs that are part of the core domain and are stable. In the rare case that they do change, just be cool and put redirects in place.
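Here is a sketch of how such volatile links might be computed when building a representation. The rules and field names are hypothetical; the point is that presence or absence of a link is decided per request:

```javascript
// Compute the links for a contact representation; the presence or absence
// of a link signals whether the client may make that transition right now.
function linksFor(contact, user) {
  const links = { self: { href: `/contacts/${contact.id}` } };
  links.addresses = { href: `/contacts/${contact.id}/addresses` };
  // Edit and delete transitions depend on resource state and user permissions,
  // so these links are volatile and should never be bookmarked.
  if (user.canEdit && !contact.archived) {
    links.edit = { href: `/contacts/${contact.id}` };
    links.delete = { href: `/contacts/${contact.id}` };
  }
  return links;
}

const bob = { id: 42, archived: false };
console.log(Object.keys(linksFor(bob, { canEdit: true })));  // self, addresses, edit, delete
console.log(Object.keys(linksFor(bob, { canEdit: false }))); // self, addresses
```

The self URI, by contrast, is part of the stable core and is the kind of URI that could safely be promoted to an entry point.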

In our contacts example, the template http://back-end/contacts/{id} could be a useful entry point, but http://back-end/contacts/{id}/addresses/{address-type} would not necessarily be one.

This solution sacrifices some abstraction and agility for less bandwidth usage and bookmarkable URIs, all the while preserving full maturity.

Summary of tradeoffs


             Abstraction & agility   Bandwidth usage   Bookmarking   Maturity
Solution 1   (plus)                  (plus)            (minus)       (thumbs up)(big grin)
Solution 2   (plus)                  (minus)           (plus)        (thumbs up)(big grin)
Solution 3   (minus)                 (plus)            (plus)        (thumbs up)(big grin)

A couple of anti-patterns


I've seen some articles or comments alluding to a solution in which the façade caches URIs for later use. However, this runs the risk of treating every back-end URI as an entry point and would expose the client to the volatility of certain state transitions.


I've also come across a similar solution where the façade embeds back-end URIs as query parameters of its own URIs. This is an anti-pattern for a few reasons:

  • the resulting façade URIs do not comply with the resource constraint, level 1 of the Richardson Maturity Model;
  • these URIs break encapsulation, which circumvents the very purpose of a façade;
  • this scheme risks circumventing entry points.

Final thoughts

This trilemma reflects a conclusion I have come to after digesting the current state of the art. However, I wouldn't be surprised if other solutions emerge in the future.

Also, I currently feel that each solution has equal merit. This may change as more experience is gained.

Finally, the examples given integrate with only one back-end service, but the solutions apply equally to façades that integrate multiple services. In fact, one could quite possibly choose a different solution for communicating with each of them.
