It’s a new day and you’re ready to review a PR from your teammate. You open GitHub and are confronted with a three-paragraph LLM-generated description that somehow says everything and nothing. You sigh. This review will take twice as long as it should.
The current AI-assisted development landscape feels a bit like a rollercoaster ride. Yes, we can generate code magically on a whim, but at what cost? Somewhere between the productivity promise and the reality of using these tools, we’re creating new kinds of technical debt. As we’ve started to integrate LLMs into our workflows at thoughtbot, we’ve identified three anti-patterns that consistently frustrate teams.
Magic Bandaid
It all started when you pasted in a screenshot of the initial error along with the thoughtfully crafted prompt “fix this bug”. Now you’re on your fifth round with ChatGPT, each solution more convoluted than the last. You understand pieces of the code, maybe most of it, but there are gaps you know you should close - but ugh, that takes time, and you need to SHIP. The code works, but it feels wrong.
This is what happens when we treat LLMs like oracles instead of tools.
The solution: Who needs humans? Well, it turns out they still come in handy. Sometimes the best path forward is a five-minute conversation with a coworker, or writing one sentence describing the problem you’re trying to solve. If you can’t describe it, you’re not ready to use AI.
Review Time Sink
Beyond walls of text in PR descriptions, there’s the code itself. Code is a language, and like any language, it carries subtle signals. AI-generated code has a distinct smell. The human reviewing your PR should not need to deal with these signals - they should not be the first set of eyes on your work. We’re all responsible for the code we commit, no matter the source. If you can’t explain how the code works, why should your reviewer put in the time? “Because ChatGPT chose to do it that way” isn’t going to lead to a great discussion - it will lead to a weaker dev team.
The solution: Don’t skip the planning step: start with your own ideas, then compare them with what an LLM recommends. These are great tools for planning and exploring the solution space. Hold a discussion with your team on how you want to use LLMs together. Before tagging anyone on a PR, review it yourself. PR descriptions need to be proofread and concise - consider the human on the other end (that may be you next time around).
Context Fragmentation
Every ill-conceived AI-generated artifact erodes your team’s shared understanding of the product. Those auto-generated tests that seem fine? The commit messages ChatGPT wrote? They might be technically accurate, but do they capture the human reasoning behind the why?
This fragmentation compounds. You ship code fast, tests pass, documentation exists. So why are things becoming harder to maintain? AI is taking the lead. The tooling is starting to control the narrative - don’t let it. Six months later you may not be able to explain your own architecture decisions, because you never really made them - you just clicked “accept edits”.
The solution: Stay in the driver’s seat. Add human context along the way - in the code, the commits, the comments, the PR descriptions. Include notes on what you tried that didn’t work, and why you chose one approach over another.
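For instance, a commit message that carries this kind of context might look something like the following (a hypothetical sketch - the tools and trade-offs named here are made up for illustration):

```text
Use a database-backed job queue instead of Redis

We first tried a Redis-backed queue, but our hosting plan doesn't
include a managed Redis instance, and running our own added
operational overhead we didn't want. A queue backed by Postgres
keeps everything in a datastore we already operate.

Trade-off: throughput is lower, but our job volume is small.
```

The diff already shows what changed; the message is where the why survives for the next reader.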
These anti-patterns are worth keeping in mind as you integrate AI tools into your development workflow. Your teammates and future you will appreciate the humanity.