HTTP Safety Doesn't Happen by Accident

George Brocklehurst

Before the days of CSS, JavaScript, AJAX, and Web 2.0, there were two main ways for a Web user to tell their browser to make a request to a Web server:

  1. Clicking on a link to a different page, and
  2. clicking on a button to submit a form.

This distinction was helpful to users in the brave new world of the Web. Clicking on a link meant that you wanted to see a document: you were requesting information from the server, but that was all. You didn’t want anything to change. Submitting a form was different: it meant that you wanted the server to do something special; something tailored to you, and based on the information you’d provided. Submitting a form could run a program, or send an email, or even leave a permanent mark on the Web.

Under the surface, these two actions were making different kinds of requests. Clicking on a link was making a request using the GET method, which gets information from a server, while submitting a form was making a request using the POST method[1], which sends—or posts—information to a server.
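To make the difference concrete, here's a minimal sketch using Ruby's standard Net::HTTP library; the URL and form fields are made up for illustration:

    require "net/http"
    require "uri"

    uri = URI("https://example.com/articles")

    # GET: ask the server for a representation of the resource.
    # Nothing on the server is expected to change.
    get_response = Net::HTTP.get_response(uri)

    # POST: send information to the server, which may create or
    # change something as a result.
    post_response = Net::HTTP.post_form(uri, "title" => "Hello", "body" => "A new article")

    puts get_response.code
    puts post_response.code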

The distinction between these types of requests is important. The HTTP specification tells us that GET is safe, whereas POST is unsafe. Here’s how RFC 7231, the latest version of the HTTP/1.1 specification, defines safety:

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource. Likewise, reasonable use of a safe method is not expected to cause any harm, loss of property, or unusual burden on the origin server.

This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.

The spec goes on to explain that this distinction is important because it defines the kind of requests that automated programs—anything from the Google search bot, to a browser extension, to a little Ruby script you might write on the command line—can reasonably make without inadvertently leaving a trail of devastation in their wake.
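That little Ruby script might be nothing more than a naive link checker like the sketch below (the URLs are made up): it fires off GET requests and relies entirely on the promise that safe methods won't delete records or charge credit cards along the way.

    require "net/http"
    require "uri"

    # A naive link checker: it only makes GET requests, trusting that
    # safe methods won't change anything on the server.
    urls = ["https://example.com/", "https://example.com/about"]

    urls.each do |url|
      response = Net::HTTP.get_response(URI(url))
      puts "#{response.code} #{url}"
    end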

It also encourages people building user agents—typically browsers, but really any software that makes Web requests on behalf of a user—to think about the user interface associated with safe and unsafe requests:

A user agent SHOULD distinguish between safe and unsafe methods when presenting potential actions to a user, such that the user can be made aware of an unsafe action before it is requested.

On today’s Web, things aren’t so clear-cut as they were in the good old days of the link and the button. CSS can make buttons look like links, and links look like buttons. JavaScript can make buttons behave like links, and links behave like buttons, and make safe and unsafe requests without anyone clicking on anything at all.

But HTTP is still there, underpinning everything we do[2]. The distinction between safe and unsafe methods, between GET and POST, is still important. As we build the Web, we should always remember the platform we’re building on and how it works.

On the server side, we should ensure that safe requests to our applications really are safe. Our Rails show and index actions shouldn’t write to our databases, make unsafe requests to third-party APIs, or make other changes in the world that our users neither wanted nor expected.
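As a sketch, with a hypothetical ArticlesController, a safe Rails action reads and renders, and nothing more:

    class ArticlesController < ApplicationController
      # GET /articles: safe, because it only reads from the database.
      def index
        @articles = Article.all
      end

      # GET /articles/:id: also safe. Resist the urge to sneak writes in
      # here (view counters, audit rows, state-changing calls to other
      # APIs); the user asked to read something, nothing more.
      def show
        @article = Article.find(params[:id])
      end
    end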

On the client side, our complex and JavaScript-heavy applications often have more in common with user agents than with documents, and so the things the HTTP spec requires of user agents fall to us as well. We should ensure that our users understand what’s safe and what isn’t. Viewed in this light, that old Rails idiom of link_to @blog_post, method: :delete doesn’t seem so helpful after all, and button_to would make things clearer. Suddenly, being more conscientious about popping up a confirmation dialog before our JavaScript triggers an unsafe request seems much more important.
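For example, in an ERB view (assuming a hypothetical @blog_post and the classic rails-ujs method: option), the difference might look like this:

    <%# The old idiom: renders something that looks like a link, but
        clicking it triggers an unsafe DELETE request. %>
    <%= link_to "Delete post", @blog_post, method: :delete %>

    <%# Clearer: a button signals an action, and a confirmation dialog
        gives the user a chance to back out before the unsafe request
        is made. %>
    <%= button_to "Delete post", @blog_post,
          method: :delete,
          data: { confirm: "Really delete this post?" } %>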

The HTTP specification is full of insights that shed light on how we can build better Web sites. If you only read one RFC this summer, make sure it’s RFC 7231[3]. And if you forget everything else in this post, just remember this one thing: GET requests shouldn’t have side effects.


[1] Forms can also make GET requests (e.g. search forms should use GET rather than POST), but a link can never make a POST request without JavaScript getting involved.

[2] Maybe not quite everything these days—WebSockets, for example, speak a different protocol—but definitely still the vast majority of things.

[3] If you read more than one, you might enjoy one of the sequels, like RFC 7232; or the prequel, RFC 7230; or maybe even the art-house original version of HTTP/1.1 from 1999, RFC 2616, before it was re-made in six parts with a big Hollywood budget.