A few weeks ago, we quietly shipped a new feature here on the Giant Robots blog: comments. If you hover over a paragraph or code block, a small icon should appear to the right, allowing you to comment on that section of any article.
With the release of this feature, we can now say something we’ve hoped to say for some time: we shipped Haskell to production! In this post, I’ll outline what we shipped, how it’s working out for us, and provide some solutions to the various hurdles we encountered.
Why Haskell? The answer depends on who you ask. A number of us really like Haskell as a language and look for any excuse to use it, whether for its safety, its quality of abstraction, the joy of development, or any number of other positives we feel the language brings. Others of us have only recently been exposed to Haskell and would love an actively developed project we could pair on from time to time, getting more exposure to a language so unlike what we’re used to.
Ultimately, we want to know if Haskell is something we can build and scale for client projects. If a client comes along where Haskell may be a good fit, we need to be confident that, beyond writing the code, we can do everything else that’s needed to deploy it to production.
During the development of this service, much of what’s said about the benefits of type safety for rapidly producing correct code proved true. The bulk of the API was written in about a day, and subsequent iterations and refactorings went smoothly using a combination of Type Driven Development (TyDD) and acceptance tests. For programmers used to interpreted languages, the long compiles were frustrating, and we did have some small battles with Cabal Hell. That said, the introduction of sandboxes and freezing is a definite improvement over my own previous experiences with dependency management in Haskell.
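For reference, the sandbox-and-freeze workflow that smoothed things over looks roughly like this (a sketch; exact flags vary by Cabal version):

```shell
# Create an isolated, per-project package database
cabal sandbox init

# Install dependencies into the sandbox rather than globally
cabal install --only-dependencies

# Pin the exact working versions into cabal.config,
# so future builds resolve the same dependency set
cabal freeze
```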
Writing an API-only service meant working with a lot of JSON. Doing this via the aeson library was concise and provided safe (de)serialization with very little validation logic required on our part. Many of the validations we would typically write by hand in a Rails API service are handled for us by virtue of the type system.
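As an illustration (not our actual types), a minimal aeson round trip for a hypothetical Comment record might look like this; malformed JSON simply decodes to Nothing, with no hand-written validation:

```haskell
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON, ToJSON, decode, encode)
import GHC.Generics (Generic)

-- Hypothetical comment record; (de)serialization is derived
-- from the Generic instance, so a payload with a missing or
-- wrongly-typed field yields Nothing rather than a bad value.
data Comment = Comment
  { articleId :: Int
  , body      :: String
  } deriving (Show, Eq, Generic)

instance FromJSON Comment
instance ToJSON Comment

main :: IO ()
main = do
  let c = Comment 1 "Nice post!"
  -- encode then decode round-trips the value
  print (decode (encode c) :: Maybe Comment)
  -- a payload with the wrong type for articleId fails to parse
  print (decode "{\"articleId\": true}" :: Maybe Comment)
```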
Libraries exist for most of the things we needed, like markdown, gravatar, and Heroku support. One notable exception was authentication via OAuth 2.0, which we needed because we wanted to use our own Upcase as the provider. While Yesod has great support for authentication in general, and there exists a plugin for OAuth 1.0, the only thing we could find for OAuth 2.0 was an out-of-date gist. Luckily, it wasn’t much trouble to move that gist into a proper package, ensure it worked, and publish it ourselves. Even though Yesod didn’t ship with this feature out of the box, the modular way in which authentication logic is handled allowed us to add it as a separate, isolated package.
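For context, Yesod composes authentication through a list of plugins in the YesodAuth class, which is what let us drop our OAuth 2.0 package in alongside everything else. A rough sketch of the shape (the plugin function and credentials below are placeholders, not the package’s actual exports):

```haskell
-- Sketch only: oauth2Upcase, clientId, and clientSecret are
-- illustrative names standing in for whatever the published
-- plugin actually exports.
instance YesodAuth App where
  authPlugins _ = [oauth2Upcase clientId clientSecret]
```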
Part of this experiment was to develop in Haskell using as much of our normal process as possible. That meant deploying to Heroku. Because a clean compilation of a Haskell application (especially one with libraries like Yesod or Pandoc) can take some time, the 15-minute build limit became an issue.
Before you mention it: yes, this pain point could’ve been avoided with a binary deployment strategy. We could have compiled locally in a VM (to match Heroku’s architecture), then copied the resulting binary to the Heroku instance. But that’s not our normal process. Developers should be able to git push heroku master and have it Just Work.
And in theory, it could just work. Builds are largely cached so it’s only the first one that’s likely to go beyond 15 minutes. To mitigate this, the most popular Haskell buildpack supports a service called Anvil for running that first build in an environment with no time limit. After many attempts and vague error messages, we had to give up on these Anvil-based deployments. We were on our own.
In the end, we were never able to come in under 15 minutes, even after upgrading to a PX dyno. Our Heroku representative was able to increase our app’s time limit to 30 minutes, and so far we’ve been able to make that. I wouldn’t consider this typical, though: I suspect our dependency on pandoc causes compilation to take longer than it would for most Yesod applications. I recommend trying the standard buildpack and hoping to come in under 15 minutes before attempting to subvert it.
Once successfully on staging, we noticed another issue: users were getting logged out at random. It turns out the default session backend in Yesod stores the key for its cookie-based sessions in a file. This has a number of downsides in a Heroku deployment. First, the file system is ephemeral, so any time a dyno restarts, all sessions are invalidated. Second, we had two dynos running, which meant that if you logged in on one dyno but a subsequent request got routed to the other, you’d be logged out. To support this scenario, we defined an alternative backend that reads the key from an environment variable, which we could set to the same value in each instance. This improvement was eventually merged upstream and has been available since Yesod 1.4.5.
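Conceptually, wiring in an environment-backed session backend looks something like the following sketch. It assumes yesod-core 1.4.5 or later; the function name and signature are from memory and worth double-checking against the Yesod docs:

```haskell
-- Sketch, assuming yesod-core >= 1.4.5.
instance Yesod App where
  makeSessionBackend _ =
    -- 120-minute session timeout; the key material comes from the
    -- SESSION_KEY environment variable, which we set to the same
    -- value on every dyno so sessions survive restarts and routing.
    Just <$> envClientSessionBackend 120 "SESSION_KEY"
```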
We definitely consider this experiment a success. We solved a number of deployment problems which should make our next Haskell project (which is already in the works) go that much more smoothly. All in all, we found the language well-suited to solving the kinds of problems we solve, thanks in no small part to Yesod and the great ecosystem of available libraries.