The Good, the Bad, and the Ugly of REST APIs

By George Reese
June 4, 2011 | Comments: 49

Adrian Cole of jclouds and I have written a lot of code against a variety of SOAP and REST cloud computing APIs. We've seen a lot of the good, the bad, and the ugly in API design and we tend to complain to each other about the things we dislike. This article sums up my thinking on the subject (not necessarily Adrian's, though Adrian reviewed the document and gave me additional ideas).

The Good

Supporting both JSON and XML

I know you love {JSON,XML} and you think everyone should be using {JSON,XML} and that the people who use {XML,JSON} are simply stupid. But you're not building an API for yourself, you are building it for everyone, including those who think {XML,JSON} rocks and {JSON,XML} sucks. I don't want to get into the technical merits of either language or even the possibility that there might be distinct use cases for JSON vs. XML. The bottom line is that there are people out there who want to consume APIs in both languages, and it's just not hard or complex to support both.
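Supporting both can be as simple as branching on the Accept header at the serialization layer. A minimal sketch in Python (the resource shape and field names are hypothetical, and a real service would use a schema-aware XML serializer and honor quality values in Accept):

```python
import json
from xml.sax.saxutils import escape

def render_server(server, accept_header):
    """Serialize the same resource as XML or JSON based on the Accept header."""
    if "application/xml" in accept_header:
        # Naive XML rendering for illustration only.
        fields = "".join(
            "<%s>%s</%s>" % (k, escape(str(v)), k) for k, v in server.items()
        )
        return "application/xml", "<server>%s</server>" % fields
    # Default to JSON for everything else, including */*.
    return "application/json", json.dumps(server)
```

The point is that the representation logic lives in one place; your data model doesn't change, only its rendering.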

REST is good, SOAP is bad

Did I just get finished telling you not to make technology decisions for your end-users? Yeah, but this is different. Really. SOAP is absurdly complex to implement and support when your target consumers use many different programming languages. Furthermore, with SOAP, you are (absent some tricks) forcing XML on people.

Meaningful error messages help a lot

When building an API, it's too easy to think about the right way for things to work and fail to think about all the mistakes people can make when learning to use your API. I make a lot of them when faced with a new API. Please tell me the error of my ways! For example, if "Authentication Error" can mean 10 different things, provide some message that gives me a clue as to which of those 10 different things I got wrong.
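Concretely, the response body can carry a specific code and message alongside the generic family. A sketch in Python; the error codes and wording below are invented for illustration:

```python
import json

# Hypothetical error catalog: one generic family ("Authentication Error")
# broken down into specific, actionable causes.
AUTH_ERRORS = {
    "AUTH001": "Unknown access key ID",
    "AUTH002": "Request signature does not match",
    "AUTH003": "Timestamp outside the allowed clock-skew window",
    "AUTH004": "Nonce has already been used",
}

def auth_error_body(code):
    """Build a response body that tells the caller *which* rule they broke."""
    return json.dumps({
        "error": "Authentication Error",
        "code": code,
        "message": AUTH_ERRORS[code],
    })
```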

Providing solid API documentation reduces my need for your help

Solid API documentation enables an individual both to understand the simple examples that help them get started and to leverage the API in complex ways to accomplish advanced tasks. In short, an API document should minimally define:

  • Supported operations

  • Examples that include the request, all headers, and any response

  • Required and optional attributes in JSON/XML data

  • Default values

  • Error codes

Map your API model to the way your data is consumed, not your data/object model

The data in your API calls should not look like highly normalized representations of database tables. They should represent a model of the data in a way that makes sense to API consumers. When you map APIs to your data/object model, you often end up with a chatty API (see "The Bad").

The Bad

OAuth authentication doesn't map well to system-to-system interaction

OAuth is a terrible API authentication scheme unless the sole target consumer of your API is a browser. Even then, you're not talking about an API; you are talking about structured content. Don't use OAuth to authenticate your API. It's designed to represent a specific user in a specific transactional context and is not terribly useful for representing external systems. It's also overly complex for API authentication needs. And while you're at it, don't use HTTP authentication either. Use signed queries that authenticate each API call individually.

Throttling is a terrible thing to do

Throttling is something you do when you think your API consumers are using too many resources. There are legitimate reasons for throttling, in particular minimizing the impact of a DDoS attack or a bug in consumer code. If you are going to use throttling as a technique to protect against these problems, you need to implement some very intelligent throttling that can a) recognize legitimate traffic like testing and regular polling and b) minimize the negative impact of false positives. Avoid limits based on "wild ass guesses" and consult with customers who might be impacted. Develop different throttling profiles and increase limits as the amount of resources being referenced by a consumer grows. Finally, if you do need to throttle your users, notify them that throttling has been triggered (also warn them if you previously never throttled but are implementing a new throttling system).
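A common building block for this kind of throttling is a per-consumer token bucket, where the rate and burst capacity become the "throttling profile" you can raise as a consumer grows. A minimal sketch, assuming arbitrary numbers; a real system also needs to recognize known-legitimate traffic and notify consumers when the limit trips:

```python
import time

class TokenBucket:
    """Naive token bucket: `rate` requests/second with burst `capacity`."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens earned since the last request, up to the burst cap.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with a Retry-After header
```

A denied request should come back as a proper status code with a JSON/XML body saying throttling was triggered, not as an opaque error.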

And while we're at it, chatty APIs suck

Chatty APIs often inspire people to implement throttling. A chatty API is any API that requires me to make more than a single call to perform a single, common function. The details of what constitutes chatty, of course, depend on what people might reasonably want to do with your API. For example, I often write code against Infrastructure as a Service (IaaS) clouds in which a virtual machine may be assigned an IP address. Consider determining what IP address a server is attached to. When I list servers, I will want to know their IP addresses; when I list IPs, I likely want to know if they are assigned to a server or load balancer. An IP address entry in the response to list IPs should therefore contain the ID of the server or load balancer (if any) to which it is attached so I don't have to follow this simple query up with a call to list virtual machines and list load balancers!
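In code terms, the fix is to denormalize at the API boundary. A sketch with hypothetical field names:

```python
def list_ips(ips, assignments):
    """Embed the owning resource's reference in each IP entry so the consumer
    doesn't need follow-up calls to list servers and load balancers."""
    return [
        {"address": address, "attached_to": assignments.get(address)}
        for address in ips
    ]
```

One call now answers the common question, at the cost of a join the server can do far more cheaply than N extra API round trips.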

The Ugly

Returning HTML in your response body

I have mostly encountered this when getting 500 errors from an API call, particularly when something forces an API proxy to generate a response because it's not getting anything meaningful from the API server. But I've seen it in lesser scenarios as well. No acceptable reason exists for responding with HTML (or some other unexpected content type) in a response body. Never. Ever. Make sure your proxy server knows to generate valid JSON/XML even when it can't talk to the API server. Apache proxies are capable of responding with JSON/XML. If an API consumer EVER sees HTML, you are doing something very, very wrong.
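At the application tier, a catch-all wrapper guarantees the consumer sees structured errors even when a handler blows up. A sketch assuming a hypothetical (status, content type, body) handler convention:

```python
import json

def safe_json_response(handler):
    """Wrap a handler so the consumer always gets JSON, never an HTML error page."""
    def wrapped(request):
        try:
            return 200, "application/json", handler(request)
        except Exception as exc:  # the backend blew up; still answer in JSON
            body = json.dumps({"error": "InternalError", "message": str(exc)})
            return 500, "application/json", body
    return wrapped
```

The same rule applies one layer up: configure the proxy's error documents to emit JSON/XML too, for the case where this code never even runs.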

Failing to realize that a 4xx error means I messed up and a 5xx means you messed up

Stop mixing these up! 400-range error codes mean that something is wrong with the incoming request. No matter how bad my request is, you should never respond with something in the 500 range. If I send random binary content in a POST operation instead of JSON (or XML!), that's a 415 error. If I receive a 500 error from your server, I'm going to fill out a trouble ticket with you because I have every right to believe something is wrong with your server. And while you may think this is a distinction without a difference, I assure you as a developer against many APIs, the error code impacts the way I handle debugging and troubleshooting. If I get a 400 error, I generally exhaust every option available to me before seeking help. If I get a 500 error, I rightfully assume something is broken on your side, look for help, and don't waste my time until I hear from you.
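One way to keep this straight is to centralize the mapping from failure causes to status families, so "the caller's fault" can never leak out as a 5xx. The cause names below are invented for illustration:

```python
def status_for(problem):
    """Map internal failure causes to the correct HTTP status family:
    the caller's mistake -> 4xx, the server's mistake -> 5xx."""
    return {
        "malformed_json": 400,
        "bad_credentials": 401,
        "unsupported_media_type": 415,  # e.g. binary POSTed where JSON expected
        "backend_down": 502,
        "unhandled_exception": 500,
    }[problem]
```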

Side-effects to 500 errors are evil

Assuming you are a good citizen and you are returning proper error codes for different kinds of errors, you should also make sure that calls that generate 500 errors rollback any changes that might have occurred in the process of execution. In other words, the API consumer should be able to retry the same action once the scenario on your end that caused the error is cleared up.
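The usual implementation pattern is to stage all changes in a transaction and roll back on any failure, so a retry starts from a clean slate. A self-contained sketch using a toy in-memory transaction (a real service would lean on its database's transactions):

```python
class Transaction:
    """Minimal in-memory transaction: changes apply only on commit."""
    def __init__(self, store):
        self.store = store
        self.pending = []

    def insert(self, table, row):
        self.pending.append((table, row))
        return row

    def commit(self):
        for table, row in self.pending:
            self.store.setdefault(table, []).append(row)

    def rollback(self):
        self.pending.clear()

def create_server(store, spec, fail=False):
    """Roll back partial work on error so the same call can be retried safely."""
    tx = Transaction(store)
    try:
        tx.insert("servers", spec)
        if fail:
            raise RuntimeError("IP allocation failed")
        tx.insert("ips", {"server": spec["name"]})
        tx.commit()
        return 201
    except RuntimeError:
        tx.rollback()  # nothing half-created survives the 500
        return 500
```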

In the spirit of "The Princess Bride", and tongue firmly in cheek, I submit this for your consideration:

Dr. Fielding (a la Inigo Montoya) might also have responded with "You keep using that word [REST]. I do not think that word means what you think it means"

Good stuff! Here are two more:

If you're returning JSON wrapped in a callback--which should really be known as "JavaScript," not "JSON-P"--please be sure to send any errors back the same way.

Also, please remember to send everything with status code 200, or your customer's browser will never see it.

Building web APIs for internal use, I mostly agree.
However, for authentication, use HTTP auth! Either digest auth for cleartext requests (which is the standard way of implementing the hashed request auth you say you want), or better in my opinion, use https for all traffic and basic-auth.
There is *no* reason to re-implement that particular wheel!

There are two problems with HTTP authentication:

#1 It's a medium security level authentication option and not appropriate for high security contexts

#2 It's a session-based system and wildly inappropriate for non-session contexts, which includes most system-to-system contexts.


There are definitely some good points in your original post, however I would like to respond to a specific comment you made regarding authentication:

"#1 It's a medium security level authentication option and not appropriate for high security contexts"

The comment is still valid, creating your own security mechanisms is inappropriate in most cases. Rather than signing each request using some sort of homegrown method, use TLS with both client and server certificates. For the vast majority of publicly consumed web services, HTTP basic authentication is just fine when the consumer uses the API directly and OAuth is as good as we'll get right now when the consumer is using another web application that is consuming the API on their behalf.

As an aside, the number of "high-security" environments is likely significantly smaller than the number of low security environments, thus making it the edge case not the norm. Build for the norm unless you are servicing the edge cases specifically.

"#2 It's a session-based system and wildly inappropriate for non-session contexts, which includes most system-to-system contexts."

As far as I can tell, while the RFCs do mention authentication sessions they're referring to client-side cached state. The Authorization header still needs to be sent on each request, either preemptively for performance reasons or as a response to the WWW-Authenticate challenge header, thus I don't think it's wildly inappropriate and in fact the wide usage of it in many APIs is sufficient proof that it is often quite appropriate.

Throwing in my experience consuming REST APIs in many languages:

1.) Either Basic or Digest may not be available in a language's standard library or, if it is available, it's stupidly broken in some way. If I recall, I specifically had problems with cURL in PHP and had to roll my own Authentication header.

2.) Even rolling your own, Basic/Digest Auth is very simple, and "secure enough" over SSL/TLS.

I like the client SSL certificate approach, though, and wish more developers would use that for API authentication. Having identity de-coupled from the code is really nice.

To anyone who wants to roll their own auth scheme, a word of advice: don't. There are plenty of standardized crypto constructions like HMAC which do this for you, usually available as a library in your favorite language. Long story short: any standardized algorithm is going to be better than what you come up with, no matter how smart you are; it's already been in the wild and ripped apart by cryptanalysts across the world.

A couple of companies ago I worked out a security mechanism that used SSL/TLS with client-side certs. You are right that it made things simpler to separate the security gorp from the API logic. The downside was that, operationally, it was expensive to manage. We eventually had to roll out a custom "provisioning app" that drove the process of generating the keypairs/certs, registering them in LDAP, downloading them to the client, putting them in the correct cert store, etc.

The title of the article should have been "The Good, the Bad, and the Ugly of badly designed REST APIs"

Regarding "Providing solid API documentation reduces my need for your help"

If you document an API, your API immediately ceases to have anything to do with REST. The contract in RESTful systems is the media types, *not* the API documentation.

I suggest you move that section to "The Bad"!



That might be one of the stupidest comments I've ever had made to one of my blog entries.

And I've seen a lot of stupid comments.

That might be a bit harsh, especially when you are trying to educate.

And, you have to admit, Roy's thesis does imply (or even say?) something like this.

Any time someone suggests that good documentation is a bad idea, they deserve getting hit over the head with a big stick.

At any rate, there are two things I think about Roy's thesis.

#1 I don't think it implies that the entity format, authentication format, and headers should not be documented.

#2 I think he defines REST so narrowly that it's useless.

He didn't say documentation was bad, he said that an API that depends on documentation is not RESTful, and he is absolutely correct. REST is about allowing services to evolve independently of the clients that consume them and they can't do that if they are coupled to each other by a static specification. A RESTful platform can be documented, but services built on that platform should not need to be.

Regardless of what you think of it, Roy's definition of REST is canonical. If you find it useless for APIs, that's probably because the architecture it describes, that of the Web, was not invented for APIs but for human interaction. If you've come up with something different, you should give it a new name.

He said to move the publishing of good documentation to "The Bad". That's just inane.

And if your argument is that Roy's definition of REST doesn't apply to APIs, then he has nothing to say about the definition of a REST API. If his definition of REST applies to human interaction, then isn't it possible that there's something out there called a REST API that takes core principles of REST and applies them to system-to-system interaction?

Pretty cool concept, eh?

If your goal is to make a REST API then the need for documentation is indeed "bad". One of the core principles of REST is that clients and servers share no a priori knowledge beyond a uniform interface and generic media types. That constraint rules out the vast bulk of what are usually called APIs. The best attempt at RESTful machine-to-machine interaction that I know of is RDF.

Sadly, the term REST has been widely abused to refer to what is largely its antithesis: application-specific network protocols. The real REST is an analysis of how the web works and why it is successful. Anybody who is helping to build the web should understand REST accurately, and the abuse works against that.

Any time someone suggests that good documentation is a bad idea, they deserve getting hit over the head with a big stick.

Absolutely. Document the hell out of the media types. But "API Documentation"? What would that be? GET, PUT, and friends? They're pretty well documented already. The URLs? That's just silly; the client should be treating those as uninterpreted blobs.

Actually, Jan Algermissen knows exactly what he's talking about -- he is one of the most respected & thoughtful REST proponents out there. REST is about what happens over the wire -- documentation has absolutely nothing whatsoever to do with REST. The reason the Web took off like it did was that systems did NOT need documentation in order to interoperate. While I think you make valid points in your piece, I'd recommend you change the title. Much of this is simply not about REST. I'm not advocating some sort of purist stance -- it's just a matter of using some terms correctly. Nobody is saying REST == always good, not REST == always bad, as if it is some sort of qualitative measure. REST is simply a set of architectural principles aimed at making robust, flexible, and evolvable systems. There are other valid ways to approach system architecture. On the Web, REST is simply a good way to go.

For one, we don't really know that this is Jan Algermissen. It's just some guy spouting off in blog comments. In either case his "authority" shouldn't be a part of the equation.

For two, I don't see how the documentation George Reese is suggesting conflicts with REST principles. It's flat out impossible to know how to operate on non-trivial entities without some explanation of how they should be structured. Obviously seeing the structure can help, but it's positively ridiculous to expect that the structure and names will be enough.

I agree there's a thin line to walk here. You don't want to document anything RESTful techniques can express for you. That you can misuse it doesn't mean you should abolish all forms of documentation.

Saying that systems didn't need documentation to interoperate is wrong. That is why we have things like HTTP 1.0/1.1, HTML 4.01, XHTML, CSS, etc. This is all documentation that describes the terms and fields of the representations commonly used in RESTful services. If you use JSON, then you most definitely need to provide some information on what exactly a hyperlink looks like in your system. You should provide some insight into which methods are appropriate for different URLs.

I understand that it is better to do things like use hypertext instead of URL patterns for discoverability, but that doesn't mean that when you publish an API, a client application can derive what the service does by trial and error. That is not to say it wouldn't be extremely cool to simply have a single URL for a service and let some intelligent REST client derive what it does. It just isn't practical :)

That's well and good, but again not REST. I recommend that everyone here go read: .

I'd also note: there is a *world* of difference between documentation for a specific service and a widely implemented standard.

I'd like to point out that I didn't say not to use hypertext, but to be clear, you need to define what that hypertext will look like depending on the media type. Where practicality comes into play is when the client needs to have a generic type (application/json), yet there are actions defined by hypertext.

Agreed. It's almost as bad as someone claiming that "HTTP Authentication" is "session based."

Well, I definitely won't be coming back to your site or reading your work anymore.

RESTLESS APIs are the way to go. Bytechurn is good.

Reading your comment made me think that you might want to elaborate particularly around documentation through media types.

I say this because I think your wording could leave it as nothing more than flamebait, meaning your important message is being lost.

The main thing all API creators should also do is make sure they regularly test their API: set up some unit tests in a cron job, etc. The number of times a company offers an API for the sake of it, which is limited at best, and then doesn't test it annoys me. Also, in the documentation, help novice programmers by providing code examples that use the API in different languages, e.g. PHP, ASP.NET, Ruby, Python.

What does "{JSON,XML}" mean? I have not seen it used before and it's hard to search for.

Nothing technical. In other words, substitute either JSON or XML into the text as appropriate, depending on whether you are an XML or JSON zealot. It was my way of avoiding picking one in my example.

It isn't syntax, it's fill-in-the-blanks! XML and JSON are just different formats for sending structured data as text, and people with backgrounds in different technologies often have more experience using one or the other.

For example, XML zealots would read:

"I know you love XML and you think everyone should be using XML and that the people who use JSON are simply stupid. But you're not building an API for yourself, you are building it for everyone, including those who think JSON rocks and XML sucks."

It is an excellent point, and we'd all do well to think a little more about our audience - API consumers are users too. Thanks for all the great insights, George!

I appreciate the insight, however, it's basically a modern vocabulary to describe something the mainframe guys did 50 years ago.

Could you please explain briefly what you mean when you suggest using "signed queries that authenticate each API call individually"? Not quite sure what you mean by this, and I'd love to know, because I'm never quite sure what to use for API authentication.

I think something like what Google uses works:

I am interested in more detail on this too, especially in an environment where other machines need to talk to my API (with no typical "user" using the machine).

Basically, you should never pass private credentials across the wire for API validation. Instead, you should sign your query using a shared secret and a common hashing algorithm so the endpoint can deconstruct the query, generate what should be the signature, and compare the two values.

Most cloud computing APIs use some variation on this concept. You have a shared password on both sides and a contract (API, not legal) dictating which components of your query will be signed. Ideally, these components differ for different queries and cannot be constructed in a way that generates a collision. For example, the contract might specify:

BASE64(HMAC-SHA256(secret, lower_case(params) + ':' + timestamp + ':' + nonce))

This means I generate a signature that takes the lower case value of my request parameters with the timestamp and nonce appended onto the query. I sign that with my password.

On your end, you reconstruct what the signature *should* be based on the password you have on file and compare it to the signature I put in the header. If they match and the nonce is not repeated, you allow the request.
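A minimal sketch of this scheme in Python. It uses HMAC-SHA256 so the shared secret itself never crosses the wire, and it sorts parameters by name rather than lower-casing values; the parameter names and canonical form here are illustrative assumptions, not any particular provider's contract:

```python
import base64
import hashlib
import hmac
import time
import uuid

def sign_request(params, secret, timestamp=None, nonce=None):
    """Sign a canonicalized request with a shared secret (HMAC-SHA256)."""
    timestamp = timestamp or str(int(time.time()))
    nonce = nonce or uuid.uuid4().hex
    # Fixed parameter ordering gives both sides the same canonical string.
    canonical = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
    message = ":".join([canonical, timestamp, nonce]).encode("utf-8")
    digest = hmac.new(secret.encode("utf-8"), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii"), timestamp, nonce

def verify_request(params, secret, signature, timestamp, nonce, seen_nonces):
    """Recompute the signature server-side and compare in constant time."""
    if nonce in seen_nonces:
        return False  # replayed request
    expected, _, _ = sign_request(params, secret, timestamp, nonce)
    if not hmac.compare_digest(expected, signature):
        return False
    seen_nonces.add(nonce)
    return True
```

`hmac.compare_digest` avoids timing side channels, and tracking nonces server-side blocks replays; in production the timestamp should also be checked against a clock-skew window.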

The problem with lower-casing HTTP params is the risk of collision. So you generally want to do something about how you manage that piece.

I'll chip in here and say that there's a large number of possible attacks when you start signing only selective portions of your request. Be very, very, very careful with this approach.

Using strong crypto - SSL for a somewhat tongue-in-cheek example - can remove most of these edge cases and dramatically simplify your security model.

I agree with most of the points, but as far as supporting XML goes: yes, it won't hurt, but I don't see how ONLY supporting JSON makes an API bad.

All, and I mean all, programming languages now have JSON parser libraries. I can't imagine a situation where someone who really needs your API would not be able to use it just because it's JSON and not XML.

I mean, sure there are lots of elderly programmers out there who want nothing to do with JSON but c'mon, accept it already.

It is a bit of a pain in Java because of typing issues. JAXB works pretty well for XML as long as a schema is provided. Yes, Gson and Jackson improve this, but they are basically reimplementations of JAXB.

-- there are lots of elderly programmers out there

Ah come on. xml is the realm of kiddie koders blissfully ignorant of the fact that they're imitating IMS in plaintext.

I wanted to chat about the "For example, if "Authentication Error" can mean 10 different things, provide some message that gives me a clue as to which of those 10 different things I got wrong." comment.

I completely agree when dealing with non auth context requests that error messages should be meaningful and help the caller know what they should have done. I do, however, think that anything involving auth should be structured in a way that no additional information is given. Because of this it makes sense to return the same generic error for all auth type requests.

Would you mind elaborating on what types of auth responses you would be looking for to return meaningful answers?

My main point wasn't so much authentication errors, though authentication errors were on my mind because people often overload them with other meanings.

Having said that...

In a typical interactive context, you are dealing with weak authentication tokens (in other words, a user name/email and password) with only a couple of failure scenarios: No such user, wrong password, account locked out/suspended.

Best practices suggest that you provide no direct indication to a user of why the authentication failed, because the information is very valuable to a potential attacker and of limited value to a valid user (they generally understand the possible reasons for failure).

That's not the case with non-interactive contexts.

First, in my experience, people publishing APIs throw 401 and 403 errors for all kinds of reasons. I've received a 403 error, for example, from several cloud providers because they were throttling my API usage during unit testing!

So, that's not really a good reason for what I was saying above. The right answer is to use proper error codes (in this case, a 503 with a custom JSON/XML error message).

However, the actual permutations on lack of authorization in a non-interactive scenario are more complex: no such account, bad signature, invalid timestamp, invalid nonce, etc. Furthermore, when using strong authentication credentials, knowing there's no such account xp76Ng32dlf999b3xnzQ0qzNs is of limited use in a brute-force scenario.

One of the hardest things about dealing with new APIs is getting down the authentication routine. It helps A LOT to know in particular if you have ancillary details wrong like the signature, nonce, timestamp, headers, etc.

I thought the comments about "meaningful error messages" on authentication errors were amusing. It made me think of a "meaningful" error message like: "You have supplied the wrong password for user 'fred'. The correct password is 'dxchu74KL'."

Is your criticism of OAuth inclusive of the 2.0 spec and existing 2 legged implementations?

A few more:

Don't be strict on content type. If you only accept one content type, then there is no benefit in rejecting requests that don't have it set correctly.

Don't overload HTTP error codes. If I see 404, I think the URL is wrong, not that the ID I specified doesn't exist. Return a custom JSON/XML message with a custom error code.

I see so many APIs that return HTML, annoys me every time. Even worse when people think the best idea on a failure is to redirect to the site home page.

Interesting list, thank you. My response/comments in a blog post:

Excellent commentary, thanks!

From your experience, how should error codes be structured? A combination of two numbers, like HTTP 422 plus N integer codes in the response? Or an English text response?

Trying to understand the reactions to the "Good" part...

Good stuff; I only think the "no OAuth" point might be harsh. Two-legged OAuth is essentially what the author advocates with the statement "Use signed queries that authenticate each API call individually," with the advantage that you aren't rolling yet another custom authentication token system, and your clients can easily leverage a universe of libraries to call you.

I don't think the author truly understands OAuth. I agree wholeheartedly with your point; 2-legged is exactly "signed queries for each request".
