Let’s say your company’s product is a mobile app that gets its data from an internal JSON API. The API, built using Rails, is a few years old. Response objects are large, request latency is high, and your data indicates mobile users aren’t converting because of it.
Review high-level requirements
It can be tempting to immediately dig into your code and look for N+1 queries to refactor. But if you have the time and bandwidth, try to view this as a great opportunity to take a step back and rethink the high-level requirements for your JSON API. Starting with a conversation about the desired functionality of each endpoint will help keep your team’s efforts focused on delivering no more than is required by the client, as efficiently as possible.
Grab your team for a whiteboarding session and review your assumptions about the behavior of each API endpoint:
- How is this endpoint currently being used by the client?
- What information does the client require for display to the user?
- What needs to be done on the server side before sending a response to the client?
- How frequently does the response content change?
- Why does the response content change?
Look for performance improvement areas
With the big picture in mind, review your Rails code to identify opportunities for improving performance. In addition to those N+1 queries, keep an eye out for the patterns below.
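As a quick refresher on the N+1 fix itself, here is a minimal sketch using hypothetical Post and Comment models:

# N+1: one query for the posts, then one additional query per post
# to count its comments.
posts = Post.limit(20)
posts.each { |post| post.comments.size }

# Eager loading: two queries total, no matter how many posts there are.
posts = Post.includes(:comments).limit(20)
posts.each { |post| post.comments.size }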
The response object has properties the client doesn’t use
If you’re using #as_json to serialize your ActiveRecord models, it’s possible your application is returning more than the client needs. To address this, consider using ActiveModel Serializers instead of #as_json.
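As a rough sketch of what that might look like (the User model, its attributes, and the devices association here are hypothetical), a serializer whitelists exactly what the client needs:

# app/serializers/user_serializer.rb
#
# With active_model_serializers, only the attributes and associations
# declared here end up in the JSON response; everything else on the
# model stays server-side.
class UserSerializer < ActiveModel::Serializer
  attributes :id, :name, :avatar_url

  has_many :devices
end

# app/controllers/users_controller.rb
class UsersController < ApiController
  def show
    user = User.find(params[:id])

    # render json: picks up UserSerializer automatically.
    render json: user
  end
end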
The delivery of the response has unnecessary dependencies
Let’s say your API has an endpoint the client uses for reporting analytics events. Your controller might look something like this:
class AnalyticsEventsController < ApiController
  def create
    job = AnalyticsEventJob.new(params[:analytics_event])

    if job.enqueue
      head 201
    else
      head 422
    end
  end
end
Something to consider here is whether the client really needs to know if enqueueing the job is successful. If not, a simple improvement which preserves the existing interface might look something like this:
class AnalyticsEventsController < ApiController
  before_filter :ensure_valid_params, only: [:create]

  def create
    job = AnalyticsEventJob.new(analytics_event_params)
    job.enqueue

    head 201
  end

  private

  def ensure_valid_params
    unless analytics_event_params.valid?
      head 422
    end
  end

  def analytics_event_params
    # Memoize in an instance variable; a bare `analytics_event_params ||=`
    # here would only assign a new local variable and rebuild the object
    # on every call.
    @analytics_event_params ||= AnalyticsParametersObject.new(
      params[:analytics_event]
    )
  end
end
With these changes, the server will respond with a 422 only when the request parameters are invalid.
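The AnalyticsParametersObject referenced above isn’t defined in this post; one way it could be built (purely a hypothetical sketch, with made-up attribute names) is as a plain Ruby object with ActiveModel validations, so #valid? can check the request parameters without touching the database:

# app/models/analytics_parameters_object.rb
#
# Hypothetical parameter object: wraps the raw analytics_event params
# and validates them with ActiveModel so the controller can reject
# malformed requests before enqueueing the job.
class AnalyticsParametersObject
  include ActiveModel::Model

  attr_accessor :event_name, :occurred_at

  validates :event_name, presence: true
  validates :occurred_at, presence: true
end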
Static responses aren’t being cached effectively
It’s possible your Rails application is handling more requests than necessary. Data which is requested frequently by the client but changes infrequently – the current user, for example – presents an opportunity for HTTP caching. Think about using a CDN like Fastly to provide a caching layer.
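What that could look like in a controller is sketched below; the endpoint and the five-minute max-age are assumptions, not recommendations. Public Cache-Control headers let a CDN like Fastly serve repeat requests without hitting your Rails app at all, and conditional GETs keep the requests it does handle cheap:

class UsersController < ApiController
  def show
    user = User.find(params[:id])

    # Sends Cache-Control: public, max-age=300, so a CDN can serve this
    # response for five minutes without touching the Rails app.
    expires_in 5.minutes, public: true

    # Conditional GET: responds with 304 Not Modified when the client's
    # (or CDN's) cached copy is still fresh, skipping the render below.
    if stale?(etag: user, last_modified: user.updated_at)
      render json: user
    end
  end
end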
What’s next
The next step after implementing these optimizations is to measure the performance gains. You can use tools like JMeter or services like BlazeMeter and Blitz.io to perform load tests in your staging environment.
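If you just want a rough before-and-after number without setting up one of those tools, a throwaway Ruby script can generate some concurrent load. This is not a JMeter replacement, and the staging URL and request counts below are placeholders:

# quick_load_check.rb -- crude concurrent smoke test, not a real load test.
require "net/http"
require "benchmark"

uri = URI("https://staging.example.com/api/users/1") # placeholder URL
thread_count = 5
requests_per_thread = 20

elapsed = Benchmark.realtime do
  workers = thread_count.times.map do
    Thread.new do
      requests_per_thread.times { Net::HTTP.get_response(uri) }
    end
  end
  workers.each(&:join)
end

total = thread_count * requests_per_thread
puts "#{total} requests in #{elapsed.round(2)}s (#{(total / elapsed).round(1)} req/s)"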
It’s good to keep in mind that through the process of evaluating and improving your Rails application, your team may discover your API is out of date with the needs of the client. You may also see opportunities to move processes currently handled by your Rails application (e.g. persisting and reporting on analytics events) into separate services.
If an API redesign is in order and the idea of non-RESTful routing doesn’t make you too uncomfortable, you can explore the possibility of adding an orchestration layer to your API.