Recommending blog posts with artificial intelligence

In 2018 we created a home-grown service to link related articles on this blog. That service used machine learning techniques that were rudimentary by today’s standards, and the quality of the links wasn’t always great. Since that initial implementation six years ago, we’ve published nearly 1,000 additional articles, and artificial intelligence techniques have advanced significantly.

Using these newer techniques, we’ve now reimplemented and relaunched this feature. We removed all of the prior article links, and everything you see now at the bottom of our articles comes from the new system. We’re pleased with both the quality of the results and the speed and cost of implementation.

AI implementation details

The new system works by using embeddings. When a new article is published, our Rails app requests an embedding from OpenAI. The embedding for an article is saved in Postgres using the pgvector extension, which is straightforward to install locally and is supported on AWS, so we don’t need to introduce a dedicated vector database. Our thoughtbot.com Rails app then uses the neighbor gem to find similar articles.
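With the neighbor gem, the wiring is mostly a migration plus a model declaration. Here’s a minimal sketch, not our actual code: the class names are hypothetical, and the 1536-dimension vector assumes one of OpenAI’s common embedding sizes.

```ruby
# Hypothetical migration: enable pgvector and add an embedding column.
class AddEmbeddingToArticles < ActiveRecord::Migration[7.1]
  def change
    enable_extension "vector"
    add_column :articles, :embedding, :vector, limit: 1536
  end
end

# Hypothetical model: the neighbor gem adds nearest-neighbor queries.
class Article < ApplicationRecord
  has_neighbors :embedding

  def similar_articles(count = 3)
    # Excludes the receiver and orders by cosine distance.
    nearest_neighbors(:embedding, distance: "cosine").first(count)
  end
end
```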

This approach is nice because there’s no ongoing cost of reprocessing old articles or querying the OpenAI service every time we want to determine related articles: finding them is a local SQL query.
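Under the hood, that query uses pgvector’s distance operators. A sketch of the kind of SQL it runs, assuming an `articles` table with an `embedding` column, where `<=>` is pgvector’s cosine distance operator and `$1` is the source article’s embedding:

```sql
-- Find the 3 articles whose embeddings are closest to $1
SELECT id, title
FROM articles
ORDER BY embedding <=> $1
LIMIT 3;
```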

We also have a separate ActiveRecord relationship for manually linking articles or overriding the generated links. When displaying related articles, any manual links are shown first, and then up to three additional links are retrieved using embeddings.
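The merge logic can be sketched in plain Ruby (the method name and arguments are hypothetical; assume each source yields article identifiers in ranked order):

```ruby
# Combine manually curated links with embedding-based neighbors.
# Manual links come first; embedding results fill in, up to 3 extra,
# skipping anything that was already linked manually.
def related_articles(manual_links, embedding_neighbors, extra_limit: 3)
  manual_links + (embedding_neighbors - manual_links).first(extra_limit)
end

manual    = ["rails-tips"]
neighbors = ["rails-tips", "pgvector-intro", "ai-spikes", "tdd-basics"]

related_articles(manual, neighbors)
# => ["rails-tips", "pgvector-intro", "ai-spikes", "tdd-basics"]
```

Subtracting the manual links before taking the top neighbors keeps an article from appearing twice when a curator happens to pick the same link the embeddings would have.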

A small group of us spiked out an implementation of this feature together during a recent investment time learning session, in about an hour. The spike was important because it quickly showed us what our implementation approach would be, let us estimate the effort of the full implementation, and validated the quality of the links before we invested further. Spiking first is especially valuable for AI projects, where the quality of the results is hard to predict up front.

Once we had validated that the approach was viable, I went back and did a full test-driven implementation, which took about two hours. Once the feature was deployed to staging for testing, we needed to generate embeddings for all articles. Generating them for our approximately 2,500 articles cost 41 cents in OpenAI fees and took about 30 minutes; the same was true for production. We could have migrated the embeddings from staging to production, or parallelized the embedding generation, to save time and money. I decided that wasn’t worth it at this small cost and scale, but it might be for a larger project.
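A backfill like that can be a simple sequential loop. A hedged sketch, assuming the ruby-openai client and an `Article` model with an `embedding` column; the task name, model name, and `body` attribute are assumptions, not our actual code:

```ruby
# Hypothetical rake task to backfill embeddings for existing articles.
namespace :articles do
  task backfill_embeddings: :environment do
    client = OpenAI::Client.new

    Article.where(embedding: nil).find_each do |article|
      response = client.embeddings(
        parameters: {model: "text-embedding-3-small", input: article.body}
      )
      article.update!(embedding: response.dig("data", 0, "embedding"))
    end
  end
end
```

Running articles one at a time like this is what makes the job take tens of minutes; batching inputs or running workers in parallel would be the first optimization on a larger corpus.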

Upcoming AI implementation livestream

I will be reimplementing a version of this feature in an upcoming livestream on December 5th, as part of our AI in Focus series. I hope you’ll join me: we’ll implement the feature together, and I’ll take questions as I code, whether about this feature or any other artificial intelligence development topics.