Thursday, November 27, 2014


A Whiskey-Soaked Surprise

A couple of Fridays ago I was hanging out with Drewz, the PM for Cortex, EP's Hypermedia API engine, and we were discussing the topic of REST, as we often do over scotch. Roy Fielding's dissertation came up and we were both surprised to realize that I had never read it. I mean, I've read bits of Chapter 5, but never cover-to-cover. We both agreed this was a wrong that must be righted, so I downloaded it to my tablet and started reading that weekend.

Book Review

Now, Ph.D. dissertations are not generally what I like to read, but Roy's writing engages the reader and doesn't dally much. Perhaps it was the Monty Python reference that started it off. I learned quite a bit, and I finally understand Mike Amundsen now when he says "You keep using that word 'REST'. I do not think it means what you think it means." I, along with many others, have conflated the term "REST" with web APIs.

REST is not an API style, but rather the architectural style of the web. REST is a collection of architectural constraints that apply to all content and interactions on the internet, not just APIs. These constraints are immediately familiar to web designers (or should be) but are rather foreign to API designers. Having cut my teeth on the internet in the 1990s with dynamic web pages via Perl and /cgi-bin, I always intuitively understood these constraints and strove to live within them in the APIs I develop. My never-ending surprise is how non-obvious these constraints are to server-side developers who have never written a client.

Backfilling Reality

REST is Fielding's definition of how the internet works, and should continue to work. As an author of the HTTP 1.1 specification, he drove the standardization of what was a highly chaotic mix of competing interests trying to establish the future of the tremendously popular phenomenon of "logging on" and checking your email. At that time many people used AOL or CompuServe and thought that was the internet. You had to open a special app to actually "browse" the web itself. The internet was still small enough that Yahoo could categorize every web page by hand.

Through the efforts of the W3C, IETF and various other organizations, the WWW became the internet, and dial-up services ended. One big reason for this was that the experience on the WWW was way better than what one experienced on dial-up services. This superior experience can be attributed to REST—self-contained hyperlinked resource representations that can be cached across the internet, and interactions between them that use a simple, uniform interface.

In defining REST, Fielding lays out four constraints:
  • Resource Identification. The unique identifier of a resource. Basically the URL, but with the notion that it should be semantically consistent over time. For instance, one URL points to the "current" blog post. The current post will change over time, and that's OK. However, another URL points to a specific blog post that should stay the same over time. The author of the resource makes this choice and determines the stable URI for each resource. 
  • Resource Representations. A resource can have more than one type of representation. The client and server negotiate what kind of representation (media type) to use. Don't understand HTML? How about XML? How about French HTML? The negotiation of the representation format is governed by a set of rules that try to gracefully degrade to something consumable, but without any back-and-forth between the client and server.
  • Self-Describing Messages. A representation needs no further services once created. You may need special software to render it, like a PDF reader or an HTML renderer, but the content itself is complete.
  • HATEOAS. Representations contain links to related resources. These links are part of the content itself, not a secondary delivery from a link server somewhere. Back In The Day hypermedia systems had a separate link server that you called to find out what links existed for a document. REST representations have them inline, in-context, and self-defining. This means you make up the link yourself without having to create the linked content first.
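To make the last two constraints concrete, a self-describing representation might carry its links inline, as in this hypothetical JSON sketch; the fields, URIs, and link relation names here are all invented for illustration, not taken from the dissertation or any particular API:

```json
{
  "self": "/posts/2014-11-27-whiskey-soaked-surprise",
  "title": "A Whiskey-Soaked Surprise",
  "body": "A couple of Fridays ago...",
  "links": {
    "previous": "/posts/2014-07-23-rest-just-wants-to-be-normal",
    "author": "/people/matt",
    "comments": "/posts/2014-11-27-whiskey-soaked-surprise/comments"
  }
}
```

The message is complete in itself, and the links live in-context inside the content rather than coming from a separate link server.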


Hypermedia as the engine of application state is a one-liner that comes out in the context of describing the stateless nature of REST. Statelessness is an aspect of self-describing messages, where everything needed to understand a request is in the message itself.

Not surprisingly, Roy clearly despises Cookies:
An example of where an inappropriate extension has been made to the protocol to support features that contradict the desired properties of the generic interface is the introduction of site-wide state information in the form of HTTP cookies [73]. Cookie interaction fails to match REST’s model of application state, often resulting in confusion for the typical browser application.

My interpretation of Statelessness is that it really means stateful in one place: the system of record. Changes to state are requested by the client via messages paired with state-changing HTTP verbs, and the server responds with the status of the change. Current state is stored on the client, but with a set of caching rules to ensure that state does not drift into staleness beyond what the system of record will tolerate.


I haven't counted, but I suspect that Roy uses words like "efficient", "performance", and "user-perceived" on every page of his dissertation. The key to REST performance is Cacheability. One must think about how representations can be cached and refreshed over time. I cannot think of any REST API frameworks that make this at all easy. I also think this is one of the biggest missing pieces of REST APIs today.

Last year, I would have said that links in representations were the biggest missing piece, but I have been happy to see the emergence of Hypermedia (née HATEOAS) APIs. Now, cacheability is the biggest missing piece. API developers tend not to think about cacheability because API requests are considered RPC calls; they are not.

An API request is really a representation vending event. The representation should have thoughtfully-considered caching semantics that can yield remarkable performance.

Cacheability is hard because one has to consider every resource and make several decisions:
  • Is this resource shared?
  • Is this resource static?
  • If dynamic, how often does it change?
  • How quickly can a state change be determined?
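As a sketch of what answering those questions might look like in code, here is a hypothetical handler that turns the decisions into standard HTTP caching headers; the function names and the revalidation scheme are my own invention, not from Fielding or any framework:

```python
import hashlib

def cache_headers(representation: bytes, *, shared: bool, static: bool,
                  max_age: int = 0) -> dict:
    """Answer the cacheability questions with standard HTTP headers.

    shared:  may intermediaries cache this, or only the end client?
    static:  does the representation ever change?
    max_age: if dynamic, how long before a cache must revalidate (seconds).
    """
    headers = {}
    if static:
        # Effectively immutable: let any cache keep it for a year.
        headers["Cache-Control"] = "public, max-age=31536000, immutable"
    else:
        scope = "public" if shared else "private"
        headers["Cache-Control"] = f"{scope}, max-age={max_age}"
        # An ETag lets clients revalidate cheaply with If-None-Match,
        # answering "how quickly can a state change be determined?"
        headers["ETag"] = '"' + hashlib.sha256(representation).hexdigest()[:16] + '"'
    return headers

def respond(representation: bytes, if_none_match=None, **kwargs):
    """Return (status, headers), honoring a conditional GET."""
    headers = cache_headers(representation, **kwargs)
    if if_none_match and headers.get("ETag") == if_none_match:
        return 304, headers   # Not Modified: the client's copy is current
    return 200, headers
```

The point is not this particular code but that every resource deserves a deliberate answer to each question before it is vended.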
I am working on this topic right now at my current job and I will blog about it in the new year.

All This and More!

Roy's dissertation covers a lot of ground, far more than I can sufficiently review in a blog post. Some interesting concepts that I am still thinking about are the evaporation of rationale in a realized architecture, how cacheability can lead to Shared Repositories, and how some information, like user identifiers and locale, should never be in the URL.

#ReadFielding For Yourself

I'd like to encourage the reader to take up Fielding's dissertation and read through it with enough persistence to allow your view of REST APIs to shift towards the happy place of the real REST. Share with others and tweet/post/whatever with the #ReadFielding hashtag. I am really interested to see what you learn that I missed.

Wednesday, July 23, 2014

REST Just Wants to be Normal

In my last post I talked about how developing hypermedia APIs has a lot in common with developing SQL databases. The key commonality is database schema normalization, or DRYness.

Like REST's Richardson Maturity Model, normalization has four levels (or 'forms') of achievement. Each form builds on the previous form until the database concisely models a reality. Normalization is thought of as an iterative process, where the data architect revisits each table and considers its irreducibility. If the table contains merged concepts, then that table is split into more related tables. As a result, a normalized database consists of many small tables and many more relationships between them.

In the same way, a good REST API consists of many small, irreducible resources that are linked together. One can follow the same iterative process of examining a resource, deciding whether any of its data should live on its own, and then splitting it out.

Why is normalization important? A normalized database can live a long time, possibly longer than the people who created it. It can grow without modifying previous tables. It is 'normal' in the sense that it accurately presents the normal view of the real domain it models, and that reality is not subject to much modification over time. Additions, yes, but redefinition, not so much.

While creating Elastic Path's Hypermedia API we developed two tests for normalizing a REST API. They are thought experiments that require a clear understanding of the reality of what is being modeled rather than the business goals of the API itself or any concerns with performance.

Mutability Over Time


A simple test of irreducibility is mutability over time. Does a field of the data change over time, or is it consistent over time? If it is mutable, it probably should live in another resource and be linked to it.

An example of this would be an Address resource. I often see Addresses represented like this:

  • Name
  • Street1
  • Street2
  • City
  • State
  • Zip
  • Country
  • Email
  • Telephone

Consider Name. Does Name change over time with respect to the other fields? Name identifies someone who lives or works at this address, which is naturally mutable. One person moves out, another moves in. This may happen every year in an apartment building. Thus, Name cannot be an integral part of an Address, but is a field of a Person or Company resource. A link from Person to Address establishes where someone lives or works.

Email's place in Address is more tenuous. Email is closer to the Person, and can be found either on Person or more correctly linked to Person. Email addresses are shareable between Persons, and a Person often has more than one email address. However, emails do not generally transfer from one person to another over time like an address would.

Email reveals that this Address resource was probably designed from an Account Registration form. The business goal of Address is to capture a customer's address for shipping, but the resource used to accomplish this goal is flawed. The flaw lies in the fact that it serves two purposes--data collection and data retrieval. In our API we decided to create an inbound Registrations resource to submit new registration data. The results of this submission surface across the API in resources like profiles, addresses, emails, telephones, etc. Not to divert the thread here, but this is essentially the CQRS pattern--commands to mutate state have a different model than the queries to view state.

Conceptual Atomicity

Telephone, unlike Email, is not quite as immutable as address. A business telephone will not change that often, if ever. Telephones for people can move around, especially when they are attached to a cell phone. Telephone is clearly not part of Address but a separate linked resource. It is not part of the Person resource because it can be shared by multiple people in a household, or a business.

The atomicity of telephone is challenged by societal changes, however, as people port and move their phone numbers to new providers and plans. It is thus tempting to just make telephone a part of the Person. In reality, though, people don't normally associate a phone number as an integral part of their existence. They will have more than one number, especially a traveller with several SIMs. The phone is a contact point to the person, and that fact drives it out of Person and into a standalone, atomic, immutable Telephone resource. Once you define a telephone number, it won't change. The relationships will change, of course, but not the data itself.
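Putting the two tests together, the normalized model might look like the following sketch. The Python dicts stand in for JSON representations, and every field name and URI here is invented for illustration:

```python
# Each dict stands in for the JSON representation of one resource.
# The data inside each resource is stable; mutable relationships live in links.

telephone = {
    "self": "/telephones/t100",
    "number": "+1-604-555-0199",   # immutable: the number itself never changes
}

address = {
    "self": "/addresses/a42",
    "street1": "123 Main St",
    "city": "Vancouver",
    "country": "CA",
    # No Name, Email, or Telephone: those change independently of the address.
}

person = {
    "self": "/persons/p7",
    "name": "Pat Example",
    "links": {
        "residence": address["self"],    # who lives here changes; edit the link
        "telephone": telephone["self"],  # a contact point, shared and relinkable
        "email": "/emails/e3",
    },
}

def move(person_resource, new_address_uri):
    """Moving house mutates a relationship, not the Address resource itself."""
    person_resource["links"]["residence"] = new_address_uri
    return person_resource
```

Notice that a change in reality (someone moves) touches only a link, never the immutable resources on either end of it.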

Immutability + Atomicity == Long Term Reliability

Once you achieve a normalized API definition, you achieve long-term reliability. By long-term, I mean years, possibly even decades. A normalized API is standardizable too, given that everyone's experience of reality is highly similar (or not?). Your address and my address share the same shape and semantics.

Yeah, but SQL has SELECT

Anyone who has programmed in SQL knows the secret to success in relational databases is 50% modeling and 100% query design. The SELECT statement gives the consumer of the database a way to create optimized, aggregated, denormalized views of the data itself. Hypermedia APIs do not have such a tool in common use. Two concepts have emerged to solve this problem, however:

Zoom


A few APIs have added the ability to expand a GET to include related resources in the result. Elastic Path's Cortex API calls this feature zoom. An example would look like:
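A hypothetical zoom request might look like the following; the URI and the exact parameter syntax are invented here to match the description, and Cortex's real syntax may differ:

```
GET /carts/default?zoom=prices,lineitems:item,lineitems:total
```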


This call retrieves the cart resource and also retrieves specific linked resources: prices and lineitems; lineitems is further followed to retrieve the item and totals associated with each line item. With this one call a consumer can retrieve a denormalized view of the resource to render to the client.

SPARQL


This one is a bit of a stretch, but I have found that the more I develop Hypermedia APIs, the more I refer to RDF concepts. In fact, Brian Sletten's "Resource-Oriented Architecture Patterns..." may be the Gang of Four for Hypermedia API design. SPARQL is an RDF query language that combines data sources, schemas and query statements. It is not hard for me to imagine mature hypermedia APIs providing query capabilities that support SPARQL.
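As a taste of what such a query capability could look like, here is a hypothetical SPARQL query over an imagined RDF vocabulary for the cart example above; the prefix, predicate names, and URIs are all invented:

```sparql
PREFIX shop: <http://example.com/vocab/shop#>

# Denormalize a cart: pull each line item with its item name and total.
SELECT ?lineitem ?itemName ?total
WHERE {
  <http://example.com/carts/default> shop:lineitem ?lineitem .
  ?lineitem shop:item ?item .
  ?item shop:name ?itemName .
  ?lineitem shop:total ?total .
}
```

One query assembles the same denormalized view that zoom produces, but with the full expressive power of a query language.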

The Silver Bullet

Solving the query denormalization requirement is important, but full of unknown unknowns. There is no magic wand here: firing silver bullets requires just as much work and thought as firing regular bullets; the only difference is that silver actually kills werewolves. We don't know yet exactly what kind of bullet will give Hypermedia the query power it needs, but it must be found if Hypermedia is to have a mainstream future.

Friday, May 16, 2014

The Hypermedia Leap of Faith

Søren Kierkegaard, a Danish philosopher of the early 1800s, famously coined the concept of the "leap of faith" to show that belief in God is not the end result of a line of reasoning. The Leap To (not Of, btw) Faith involves giving up on reason and logic and throwing oneself toward the precipice of belief. God will catch you and carry you across the chasm from intellect to belief.

A not-dissimilar leap is required to move from a REST-ish API to the "Glory of REST" that is Level 3, or Hypermedia REST. In the Richardson Maturity Model, the sequential levels of REST are not different kinds of REST, but rather the logical progression of API change that leads to REST. They are steps of maturity to the end state. Think child, teenager, adult.

Level 1 and 2 simply aren't REST. Only an API that is at the last level, Level 3, can be considered REST. The other levels are the logical progressions one needs to take to get to the edge of the precipice before leaping to Level 3.

REST Is About Trust

Crossing the chasm from Level 2 to Level 3 requires trust. The API consumer must leave application control entirely in the hands of the API. A Level 3 REST API presents all that is needed, when it is needed, and the client merely discovers these affordances as it navigates the links. All the client has to know is where the starting point is and how to manipulate the links.

A Hypermedia API is a browsable API. No longer does the client developer have to construct a URI to do everything. No longer must the client developer know how to change the state of the system with an incantation (method call), or know how to read data from another part of the system. All data, all state transitions, all are provided by the Representations of State, Transferred to the client.

I mean this. It really is a leap. It is unfamiliar, and frankly scary, because we have always been able to create a service-like API to do our job, and a URI template to invoke it. REST does not provide malleable services, but rather linked resources with only four verbs to manipulate them.
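To illustrate what "merely discovering affordances" might look like, here is a hypothetical client that knows only the entry point and navigates by link relation names; the representation format, relation names, and URIs are invented, not from any particular API:

```python
# A toy in-memory "server": each resource representation carries its links.
# In a real API these would be HTTP responses in a hypermedia media type.
RESOURCES = {
    "/": {"links": {"cart": "/carts/default"}},
    "/carts/default": {"total": "19.99",
                       "links": {"lineitems": "/carts/default/lineitems"}},
    "/carts/default/lineitems": {"count": 2, "links": {}},
}

def get(uri):
    """Stand-in for an HTTP GET returning a parsed representation."""
    return RESOURCES[uri]

def follow(start_uri, *rels):
    """Navigate from the entry point by link relation names only.

    The client never constructs a URI; it discovers each next step
    in the representation it just received.
    """
    representation = get(start_uri)
    for rel in rels:
        representation = get(representation["links"][rel])
    return representation

# The client hard-codes only "/" and the relation names it understands.
cart = follow("/", "cart")
```

If the server reshuffles its URIs tomorrow, this client keeps working, because it never knew the URIs in the first place.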

All That has Been, Will Be Again

Fortunately for us, this is not the first time we have had a relationship-based, 4-verb architectural pattern to leap onto. Remember our old friend the database? It has a couple of interesting aspects:

  • Resources (tables) that are linked together (join tables)
  • Four verbs (SELECT, INSERT, UPDATE, DELETE) to manipulate them

Conceptually a database and a proper REST API are highly sympathetic. So sympathetic, that the key skill in relational database design applies directly to REST resource design. In my next blog post I will share what this key skill is and how it can be used to create an API that lives up to the Hypermedia Hype.

Sunday, March 30, 2014

REST Beat SOAP, But Not The Internet


Back in the 1990s, when networks and the internet were providing a way for computers to talk to each other, Service-Oriented Architecture was this new, amazing way to bind software systems together using Objects. This was a really big deal, because developers had these cool OO languages, but had to resort to all kinds of crap (EDI, anyone?) to communicate with other machines. The major computer companies had very expensive ways to solve this problem. They created a new class of technology called the Object Request Broker, or ORB. Microsoft had DCOM; Sun had Java RMI; Sun (I know), IBM, DEC, and 700 other companies had CORBA.

These companies knew that if they could get their distributed OO tech into the architectures of the new SOA systems being created, they could lock in their technology for decades to come. Naturally, DCOM and RMI couldn't talk to each other, and they were not about to cross-license. A war developed, and it promised to be long and nasty.

The Peacemaker

Most customers realized what was coming, and they yearned for freedom from lockin. Their desire for SOA gave them a vision of the Service interface backed by any vendor's software. Fortunately, some employees of these same companies wanted the same kind of freedom. Big companies are really a bunch of little companies that have to work together, and one of the little companies inside Microsoft, the WebData team, was working with Dave Winer on SOAP. The Simple Object Access Protocol was an insurgent technology meant to make SOA a reality.

The pressure of the growing internet forced the parties towards interoperability. Everyone realized pretty quickly that SOAP was the answer to their SOA needs, and as a result the competing technologies died a fairly quick death. SOAP was the Versailles Treaty of the ORB Wars.

SOAP Became "COAP"

Another reason SOAP was so successful was that it was much easier to understand than the other ORB protocols. The problem with SOAP however was that it became really complex as everyone started pouring their Object-Oriented concepts into it. The complexity rose, the performance dropped, and the price of distributed SOA architecture climbed. SOAP over the internet stagnated.

Many people found that the Object part of SOAP was actually a big problem because it didn't really model what the Internet was all about. People wanted to share Resources, not Objects. Roy Fielding crystallized this in his famous dissertation and termed it "REST." The good news is, many internet developers already had practice with representations and state transfers. They avoided SOAP and used CGI-BIN instead.

CGI-BIN, the original REST

Before JSON, if you were a "hacker" you used the cgi-bin/ dir of your web server to write apps that could consume HTTP forms for state transfers. HTML would show a form, the user would enter data, hit submit, and presto, your Perl script on the server would update the database. Simple, easy to understand, and easy to consume.
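The whole loop fit in one small script. Here is a minimal sketch of such a form handler, written in Python rather than Perl for brevity, with the field names and "database" invented for illustration:

```python
from urllib.parse import parse_qs

# A stand-in "database": in a real cgi-bin script this would be a file or DBM.
DATABASE = {}

def handle_form(query_string: str) -> str:
    """Handle one form submission: a state transfer from browser to server.

    The submitted fields fully describe the requested change; the script
    applies it and returns a complete HTML representation of the result.
    """
    fields = {k: v[0] for k, v in parse_qs(query_string).items()}
    DATABASE[fields["name"]] = fields["email"]
    return ("Content-Type: text/html\r\n\r\n"
            f"<html><body>Thanks, {fields['name']}!</body></html>")
```

Every request carried everything the server needed, and every response was a complete representation: REST in miniature, years before anyone named it.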

Big vendors hated cgi-bin and actively fought hard against it. Many, many articles and presentations warned against using cgi-bin (It's nonstandard and must be confusing; It's interprocess, not scalable; It's a separate process, running rampant! Hackers use cgi-bin as their back door!! cgi-bin is Open Source, and who knows what could be hidden in there!!!) CIOs and vendors accepted this line of reasoning and cgi-bin died by strangulation.

REST Beat SOAP, Yay!

The need for some kind of REST only grew, however, and lots of alternatives emerged (thanks to XMLHttpRequest's responseText—another Microsoft Gift to the World), driving the RESTful API age. Even SOAP vendors added a "REST" button to make your old SOAP services RESTful. "REST" became synonymous with anything Not SOAP, and adoption took off. 

Clearly, SOAP is history and REST is the way to go for interconnected systems. The problem is, REST isn't really anything. You can't define something in the negative. Leonard Richardson has thoughtfully considered this problem and has developed the Richardson Maturity Model to help us understand the state of our world, but all this does for me is place us somewhere in the Middle Ages. Rome has fallen, but the Renaissance is still in progress.

The Internet Is Hand-Cranked 

When you look at the growth and size of the internet, you quickly realize that the internet must be completely powered by human effort. People are sitting at their desks, or on the bus, or in traffic jams, pushing and typing and swiping away at it. The internet itself has maybe 20,000 APIs in play; perhaps more, but really, nothing like the number of domains in existence.

Go read Robert Zakon's report for proof that APIs are a niche on the internet compared to all the human-powered interactions. What we call REST isn't making much of a dent in how we use the Internet at all.

The Internet of Things

The coming tidal wave of the Internet of Things is starting to hit. Given where REST is today, we are in for a messy ride because what these Things need is not what we have on offer.

Hello HAL, Welcome to Cortex!

Back in 2011, Elastic Path decided our future was in Headless Commerce, and to succeed we needed to offer the best API experience possible....