Async programming can make your apps faster. I’ll share how you can use async in Ruby on Rails to speed things up. The examples are in Ruby, but the principles apply to any language.
I’ll group the examples into two basic principles. Here’s the first one:
Don’t do now what you can do later
Delay doing stuff as much as possible. Being lazy is not necessarily a bad thing. In practice, that means a few things:
Pay attention when you use a method that ends with _now. These are strong candidates for things that can be done async. A common example is sending emails.
Imagine a Rails controller that sends an email after a user registers:
class RegistrationsController < ApplicationController
  def create
    @registration = Registration.new(params)

    if @registration.save
      RegistrationMailer
        .welcome_email(@registration)
        .deliver_now
      redirect_to @registration
    else
      # ...
    end
  end
end
The request doesn’t need to wait for the email to be sent before it can complete. Using deliver_later here can make the request faster. The same applies to any other kind of job! If you’re saving statistics, processing data, or doing anything else that doesn’t need to happen right now, perform_later it.
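For the controller above, the change is a one-word swap. Here’s a sketch; the SaveStatisticsJob name is hypothetical, purely to illustrate perform_later:

# In the controller above, swap deliver_now for deliver_later:
RegistrationMailer
  .welcome_email(@registration)
  .deliver_later

# Any Active Job can be enqueued the same way (SaveStatisticsJob is a made-up example):
SaveStatisticsJob.perform_later(@registration.id)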
You can also delete Active Storage files async with purge_later:
class User < ApplicationRecord
  has_one_attached :avatar
end

User.first.avatar.purge_later # enqueues a job to delete the file
And, since Rails 6.1, you can delete dependent associations async with dependent: :destroy_async:
class Team < ApplicationRecord
  has_many :players, dependent: :destroy_async
end

class Player < ApplicationRecord
  belongs_to :team
end

Team.destroy_by(name: "Flamengo")
# Enqueued ActiveRecord::DestroyAssociationAsyncJob
You can also configure the maximum number of records that a single background job will destroy for dependent: :destroy_async associations, as shown below.
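A minimal sketch, assuming a Rails version that exposes the destroy_association_async_batch_size setting (check the configuring guide for your version):

# config/application.rb
# Assumption: this setting is available in your Rails version.
# Caps how many records a single DestroyAssociationAsyncJob will destroy.
config.active_record.destroy_association_async_batch_size = 1000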
Cool, so that’s the first principle. But here comes the second one:
Don’t stand still
Being lazy is nice, but you shouldn’t sit around doing nothing while you wait! Consider this example:
require "benchmark"
require "net/http"

puts(
  Benchmark.realtime do
    5.times do
      Net::HTTP.get(URI.parse("https://httpbin.org/delay/2"))
    end
  end
)
Because the requests are synchronous, the total time will be around 10 seconds (plus some network overhead). What’s bad is that the code doesn’t do a lot: it’s mostly waiting on those requests to complete, one after the other, with each request starting only after the previous one finishes.
We could be proactive and start making more requests while we wait for the previous ones to complete. Here’s what that would look like using the async gem:
require "async"
require "benchmark"
require "net/http"

puts(
  Benchmark.realtime do
    Sync do
      5.times.map do
        Async do
          Net::HTTP.get(URI.parse("https://httpbin.org/delay/2"))
        end
      end.map(&:wait)
    end
  end
)
Not a lot of changes, but it now takes only about 2 seconds to finish! We fire each request as soon as possible, which means we end up waiting for all of them in parallel: the five 2-second waits overlap instead of stacking up.
This used HTTP requests as an example, but try to apply this principle to any other kind of I/O-bound operation. File operations, system calls, and database queries are good candidates for this kind of optimization.
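For instance, here’s a rough sketch of the same idea using plain Ruby threads instead of the async gem; threads work for any blocking I/O, such as file reads or system calls (the URL is the same demo endpoint as above):

require "net/http"

# Start all five requests at once, each in its own thread...
threads = 5.times.map do
  Thread.new { Net::HTTP.get(URI.parse("https://httpbin.org/delay/2")) }
end

# ...then collect the responses; the total time is roughly one request, not five.
responses = threads.map(&:value)

Speaking of database queries…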
Async database queries
Since Rails 7, you can use ActiveRecord::Relation#load_async to run a database query in a background thread. This is useful when you want to load a relation in the background but don’t need the result immediately.
So, say we have a controller that does several queries to render a page:
class ReportsController < ApplicationController
  def create
    @new_authors = Author.recent
    @new_books = Book.recent
    @new_reviews = BookReview.recent
  end
end
If each query took 1 second to run, the total time here would be 3 seconds. But if we use load_async to run them in parallel:
class ReportsController < ApplicationController
  def create
    @new_authors = Author.recent.load_async
    @new_books = Book.recent.load_async
    @new_reviews = BookReview.recent.load_async
  end
end
Then the total time would be around 1 second! Again, we don’t need to wait for one query to complete to start the next one. The Rails logs will show that:
ASYNC Author Load (1010.2ms) (db time 1011.4ms)
ASYNC Book Load (2.2ms) (db time 1013.8ms)
ASYNC BookReview Load (0.2ms) (db time 1014.7ms)
The first number shows how long the foreground thread had to wait for each result, while the db time shows how long each query actually took on the database.
As with any promise of performance improvements, there are trade-offs here. When using load_async, we’re using more resources (database connections and background threads) in a single request. This can be a problem if you use it in a part of the app that’s under heavy load, because one or a few users might exhaust the connections and other users will have to wait (and possibly time out). So, be careful with load_async!
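If you do use it, you may be able to bound that resource usage application-wide. A minimal sketch, assuming your Rails version exposes the async query executor settings (check the configuring guide for your version):

# config/application.rb
# Assumption: these settings are available in your Rails version.
# Run async queries on one shared thread pool...
config.active_record.async_query_executor = :global_thread_pool
# ...and cap how many queries that pool runs at once.
config.active_record.global_executor_concurrency = 4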
A good use case for this, though, is when you have an HTTP request and a database query that can be done in parallel:
class BooksController < ApplicationController
  def show
    @new_books = Book.recent.load_async
    @external_books = HTTP.get("https://external.com/books")
  end
end
Async views
I don’t think Rails renders partials in parallel, but you can use Turbo Frames to load parts of the page in parallel. Give a frame a URL and it will load its content from that route. Lazy-loaded frames are particularly useful for parts of the page that are not critical to the user experience or are heavyweight.
Add a turbo frame to your view:
<turbo-frame
  id="best_sellers"
  src="/books/best_sellers"
  loading="lazy"
></turbo-frame>
Write a controller action that renders the frame content:
class BooksController < ApplicationController
  def best_sellers
    @books = Book.best_sellers
  end
end
And a view that renders the content:
<turbo-frame id="best_sellers">
  <h1>Best Sellers</h1>
  <%= render @books %>
</turbo-frame>
And that’s it! If you have several frames on a page, they will load in parallel. Of course, this means more requests to the server, so keep that in mind. Also, don’t overdo it: it’s frustrating for the user to watch the page load and then see content pop in piece by piece every time (looking at you, SPAs).
If you have something costly to render or that not every user needs, you can push it outside of the initial viewport and load it lazily with a turbo frame.
Async assets
Extending this concept further, we can do the same with assets. For instance, you can set the async attribute on your script tags to load them in parallel:
<script async src="async-script.js"></script>
Splitting critical and non-critical CSS can also help. For fonts, font-display: swap renders text with a fallback font while the custom font is loading, so text stays visible.
For images, you can lazy-load them with the loading="lazy" attribute. This works much like lazy-loaded Turbo Frames: the image is only fetched when it’s about to enter the viewport. Rails even has a config option to lazy-load images by default.
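A minimal sketch of that option, assuming the standard Action View setting (available in recent Rails versions):

# config/application.rb
# Makes image_tag emit loading="lazy" on every image by default.
config.action_view.image_loading = "lazy"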
All of this will help your page display content sooner, instead of leaving the user staring at a blank screen for a long time.
Adding indexes concurrently
Finally, I’d like to mention some async tools that you can use in development or in more “behind the scenes” scenarios. The first one applies if you’re using PostgreSQL.
Adding an index to a Postgres table blocks writes to that table while the index is built. On a big table, this can lead to downtime in production. Luckily, we can use the algorithm: :concurrently option to create the index without blocking writes:
class AddIndexToUserRoles < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_index :users, :role, algorithm: :concurrently
  end
end
The caveat here is that, quoting the Postgres docs, “this method requires more total work than a standard index build and takes significantly longer to complete. However, since it allows normal operations to continue while the index is built, this method is useful for adding new indexes in a production environment. Of course, the extra CPU and I/O load imposed by the index creation might slow other operations.”
Running tests in parallel
Rails 6 introduced parallel testing. All you need to do is specify how many workers you want:
class ActiveSupport::TestCase
  parallelize(workers: 2)

  # or let Rails figure it out from the number of CPUs
  parallelize(workers: :number_of_processors)
end
The math is simple (because I’m simplifying things): the more workers you have, the faster your tests will run.
Workers | Test Suite Time
--------|----------------
1       | 40 min
2       | 20 min
4       | 10 min
Unfortunately, RSpec doesn’t support Rails’ parallel testing out of the box. Several gems implement this behavior for RSpec, though; some examples are parallel_tests and flatware.
Make sure you have the basics right before going async!
Async can make your app faster, but it can also make the code more complex in the process! You might feel like you have less control over how things execute, and errors can become harder to debug.
That is to say: do your homework before going async, and don’t use these techniques as band-aids for real performance problems. By homework, I mean basic things like adding indexes to database columns, fixing N+1 queries, using low-level caching and view caching where they make sense, and generally following good Ruby and Rails practices.
Use these principles with wisdom. As with any simplification, they can be wrong in some cases. These are not rules! There are plenty of cases where being “proactive” is better than being “lazy” (precomputing values, for example). But I hope this serves as a starting point for thinking about async and Rails a bit more.