AI in Focus: Refactoring Rails with AI tools

In this episode of our AI in Focus series, Chad is joined by fellow thoughtbotter Sarah Lima as they explore using AI coding assistants to identify and apply principles from the classic thoughtbot book Rails AntiPatterns. They put two LLMs, accessed through GitHub’s Copilot tooling, to work refactoring a real production Rails application. Read on for the highlights and watch the full replay on YouTube.

Rails AntiPatterns

Rails AntiPatterns was published in 2010, when Rails 3.0 was the newest version. As we approach the book’s 15th anniversary, we’ve been thinking about revisiting these concepts and exploring what’s changed in Rails development best practices. We wondered: would different LLMs know about the book’s principles and be able to apply them to real code?

The session focused on thoughtbot’s internal Hub app, which manages schedules and integrates with various SaaS products. We loaded a large Opportunity class (over 500 lines) that represents CRM records and potential projects into Copilot’s context window. Using GitHub Copilot’s “ask” mode, we started by asking: “Are you familiar with the concepts from the book Rails AntiPatterns? If so, what are some suggestions of the principles in this book that we could apply to this class?”

Like Cursor, which we’ve been using for other streams in this series, GitHub Copilot now gives you the option to select a language model, so we compared Claude 3.5 Sonnet and ChatGPT’s GPT-4o. Both knew about the AntiPatterns book and responded with several suggestions:

  • Fat Model AntiPattern: The class was over 500 lines with many responsibilities
  • Law of Demeter Violations: Many delegate calls indicating tight coupling
  • Nested Model Pattern: Complex associations and nested relationships
  • Using Enums": Replace hardcoded constants with Enums.

Some suggestions, like the “Nested Model Pattern”, weren’t actually from the book, and others leveraged features from newer versions of Rails, like enums, which are good candidates for inclusion in a potential updated edition of Rails AntiPatterns.
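To illustrate the enum suggestion, here’s a minimal sketch of what that refactoring might look like. The column name, the specific status values, and their integer mapping are assumptions for illustration, not the Hub app’s actual schema:

```ruby
class Opportunity < ApplicationRecord
  # Instead of hardcoded string constants and hand-written predicate
  # methods, a Rails enum (available since Rails 4.1) maps the values
  # and generates helpers for us:
  enum :status, { active: 0, pending: 1, spam: 2, won: 3, lost: 4 }

  # This generates Opportunity.active (a scope), opportunity.active?,
  # opportunity.won!, and similar methods for each status.
end
```

This replaces a cluster of constants and `status == 'active'` checks with a single declaration, which is likely why both models flagged it.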

The challenge of extracting modules vs. composition

The AI suggested extracting status-related methods into a module, which led to an interesting discussion about when to use modules versus composition, which is covered in thoughtbot’s more recent book Ruby Science.

Sarah noted that she typically reaches for modules when multiple classes will use the shared behavior, rather than just for code organization.

Copilot provided examples of both approaches:

Module extraction:

module OpportunityStatus
  extend ActiveSupport::Concern

  included do
    validates :status, inclusion: { in: OpportunityStatus.statuses }
  end

  def self.statuses
    %w[active pending spam won lost]
  end

  def active?
    status == 'active'
  end
end

Composition approach:

class OpportunityStatus
  def initialize(opportunity)
    @opportunity = opportunity
  end

  def active?
    @opportunity.status == 'active'
  end
end
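To show how the composed object might be wired into the model, here’s a minimal, self-contained sketch. Opportunity here is a plain-Ruby stand-in with a `status` reader; in the real app it’s a large ActiveRecord model:

```ruby
class OpportunityStatus
  def initialize(opportunity)
    @opportunity = opportunity
  end

  def active?
    @opportunity.status == 'active'
  end
end

class Opportunity
  attr_reader :status

  def initialize(status)
    @status = status
  end

  # The model delegates status queries to the composed object,
  # keeping status logic out of the already-large model class.
  def active?
    OpportunityStatus.new(self).active?
  end
end

Opportunity.new('active').active? # => true
Opportunity.new('lost').active?  # => false
```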

We noted that composition often provides better testability and clearer dependencies, and this pattern has become more popular in the Rails community since the book’s publication. We decided to switch Copilot from “ask” mode to “agent” mode so the LLMs could implement this refactoring.
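One concrete testability win: because OpportunityStatus only depends on an object that responds to `status`, it can be unit-tested with a lightweight stand-in rather than a persisted record. The class is repeated here so the sketch stands alone:

```ruby
class OpportunityStatus
  def initialize(opportunity)
    @opportunity = opportunity
  end

  def active?
    @opportunity.status == 'active'
  end
end

# No database and no Rails: any object with a #status reader will do.
FakeOpportunity = Struct.new(:status)

OpportunityStatus.new(FakeOpportunity.new('active')).active? # => true
OpportunityStatus.new(FakeOpportunity.new('spam')).active?   # => false
```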

The reality of AI-assisted refactoring in 2025

Switching to agent mode surfaced several practical issues:

  1. Performance issues: The LLMs took much longer than expected to run tests and make changes
  2. Context loss: Switching between modes caused the LLMs to lose track of the current state
  3. Over-eager changes: The LLMs started making changes across the entire codebase when only specific files were needed
  4. Test failures: The refactoring broke existing tests, requiring manual intervention

We observed that the process took more time than doing the refactoring manually would have. This highlights an important consideration: AI tools can be helpful for identifying problems and suggesting solutions, but at the moment the actual implementation may still require human oversight and intervention.

Key takeaways for AI-assisted development

  1. Ask mode is more reliable than agent mode at the moment: For complex refactoring tasks, using AI for analysis and suggestions rather than direct implementation may be more effective
  2. Start small: Large refactoring tasks can overwhelm AI tools and lead to unexpected changes
  3. Human oversight is still essential: AI can identify problems and suggest solutions, but implementation requires careful review
  4. Context matters: AI tools need clear, specific instructions and may struggle with complex, multi-step processes

We love to refactor!

This was a fun session, but we agreed it would have been interesting to compare the LLMs’ refactoring performance with that of another thoughtbotter working alongside them, as effective refactoring still seems to require an in-depth understanding of anti-patterns and best practices. Whether you’re working with legacy Rails applications or building something new, get in touch with thoughtbot to explore how we can leverage our years of experience to help you refactor and improve your Rails applications.

About thoughtbot

We've been helping engineering teams deliver exceptional products for over 20 years. Our designers, developers, and product managers work closely with teams to solve your toughest software challenges through collaborative design and development. Learn more about us.