LLMs can read text: Overlooked AI-assisted coding techniques

Fritz Meissner

Most advice about AI-assisted coding focuses on the narrow use of code generation within a single source file. However, as “machines that are good at text”, LLMs can do a lot more.

Here are some other “text in software development” uses for LLMs:

  • SQL query planner output (for example asking: which node contributes the most?)
  • API responses (how is response A different from response B?)
  • XML or YML configuration (where is the invalid line?)
  • schema files (why does my JSON object not pass validation according to this schema?)
  • log files (see below)

I’ll go through a useful example in more detail later, but first let’s consider more generally why you’d want to analyse text with an LLM.

LLMs: middle-ground between humans and traditional software

A human can adapt between many forms of input and output and take on many different tasks, but is bad at repetitive tasks that require constant alertness. Think of searching for the one-line difference between two long HTML snippets: the eyes glaze over after just a few lines, and faster still if the markup isn’t formatted neatly.

Traditional software is very picky about its input, and only delivers output in the narrow range of formats its creator considered. On the other hand, it is very precise and very reliable. It doesn’t matter whether you’re looking for something on line 25 or line 105: if you can work within its exacting constraints, there won’t be a mistake.

Current LLMs are a mix of these. They can be very adaptable with input and output, and will attempt a wide variety of tasks. They won’t tire like a human. But they aren’t (yet) up to gigabytes of text like traditional software, they aren’t (yet) quite as adaptable as we are, and they’re (currently) prone to laughable errors. Those are some big caveats, but we can work with them.

A general approach

If I want to pass text to an LLM and get something useful back, I’ll use these steps:

  1. Prepare my input data
  2. Give a declarative query
  3. Eliminate mistakes with follow-up input
  4. Iterate on the format of the answer

Next I’ll look at a specific case, analysing SQL queries in a Rails log file.

Counting and grouping SQL queries in a local log file

Recently I worked on a Rails ActiveRecord performance problem. I knew that hundreds of very similar SQL queries were being generated for a single API request. I wanted to reduce the number of queries generated by identifying patterns and counting how many queries matched each. If pattern A occurs 5 times and pattern B occurs 45 times, I’d reduce the number of queries fastest by investigating pattern B first.

Human brains are not good at grouping and counting. I could write a script or use a spreadsheet, but I knew that I’d get to my answer fastest with an LLM.
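Had I written that script, it might have looked something like this minimal Ruby sketch. The label-extracting regexp, and the idea of grouping by the “Role Load”-style prefix, are my assumptions about the default Rails log format, not anything Rails guarantees:

```ruby
# Group ActiveRecord query log lines by their leading "name" label,
# e.g. '  Role Load (0.7ms)  SELECT ...' counts under "Role Load".
# The regexp assumes the default Rails log format; adjust for yours.
def count_query_groups(lines)
  counts = Hash.new(0)
  lines.each do |line|
    next unless line.include?("SELECT")
    # Everything before the "(0.7ms)" timing is the group label;
    # unlabelled SELECTs fall into a generic bucket.
    label = line[/\A\s*(.+?)\s+\(\d+\.\d+ms\)/, 1] || "(unlabelled SELECT)"
    counts[label] += 1
  end
  counts
end

# Usage against a log file:
# puts count_query_groups(File.readlines("log/development.log"))
#        .sort_by { |_, n| -n }
#        .map { |label, n| format("%5d  %s", n, label) }
```

Even this small sketch shows the catch: every rule about what makes two queries “the same” has to be decided and encoded up front.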

Prepare my input data (narrowing and filtering out sensitive data)

I have local log files of the Rails request I’m analysing. Those are just plain text - good - but they include a lot of information that is unrelated to my specific interest. I wanted to narrow it down for two reasons:

  1. LLMs can only handle a limited amount of context, and cloud providers charge based on input size
  2. (more importantly for my case) LLMs are prone to being confused by unrelated text

I first found the text between the start and end of the relevant Rails request. Then I used grep to find only lines related to queries:

> grep SELECT log/development.log
ActiveRecord::SchemaMigration Load (2.3ms)  SELECT "schema_migrations"."version" FROM ...
SELECT "flipper_features"."key" AS feature_key, "flipper_gates"."key" ...
Role Load (0.7ms)  SELECT "roles".* FROM "roles" ...
  Permission Exists? (0.7ms)  SELECT 1 AS one FROM ...
  ... lots more lines of queries

I could have done this step with an LLM too (and would have if grep wasn’t so easy). If I did that I’d employ a similar (prepare input, declarative query, eliminate mistakes, iterate on format) loop in a separate session to get this text. Then I’d start a new session with my LLM for the next step.

While preparing input data I also filtered out sensitive data that I didn’t want going into a cloud-based LLM where I’m not in full control.
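That filtering step can itself be a tiny script. Here is a sketch of the idea in Ruby; the patterns are purely illustrative, and certainly not a complete safeguard for your data:

```ruby
# Redact values that look sensitive before the text leaves my machine.
# These patterns are illustrative examples only; what counts as
# sensitive depends entirely on your own data.
REDACTIONS = {
  /[\w.+-]+@[\w-]+\.[\w.]+/ => "[EMAIL]",   # email addresses
  /'[^']*'/                 => "'[VALUE]'", # quoted SQL string literals
  /\b\d{6,}\b/              => "[NUMBER]"   # long numeric IDs
}.freeze

def redact(line)
  REDACTIONS.reduce(line) { |text, (pattern, mask)| text.gsub(pattern, mask) }
end
```

Each rule is applied in order, so a quoted email address is masked twice and still ends up as a harmless placeholder.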

Give a declarative query

Here’s the query I wrote:

> The following queries were generated by a single Rails request. Please group similar queries and provide me with a count of each group:

> ... pasted queries here ...

It might be superstition, but I find I get better results when I give more context (these are queries, related to a single Rails request) and ask for a result without specifying how. I’m not telling it anything about how lines with differences in parameters should still count as the same query, let alone details of SQL syntax.

The response is nicely formatted:

| # | Table             | Query Count |
|---|-------------------|-------------|
| 1 | schema_migrations | 1           |
| 2 | flipper_features  | 1           |
| 3 | users             | 1           |
| 4 | roles             | 1           |

Unfortunately it is absurdly incorrect, but that doesn’t mean I haven’t made progress.

Eliminate mistakes with follow-up input

I knew there were dozens of very similar-looking queries, not a grand total of four. This is important: I know what a completely unrealistic answer looks like, and how to correct or ignore it.

Being able to prove that one answer is wrong is still much easier than finding the right answer myself. Instead of giving up, I offered a correction:

> there are 154 select queries in that list, but your counts do not add up to that total

This time, the response came back correct:

| #  | query_type         | count |
|----|--------------------|-------|
| 13 | MyTable Exists     | 49    |
| 14 | MyTableItem Load   | 24    |
| 15 | MyTableStatus Load | 2     |
Iterate on the format of the answer

The output above (a GitHub-flavoured markdown table) was something I asked an LLM to generate specifically for this blog post, weeks after I worked on the problem.

Earlier I received spreadsheet-type output with cells and columns that was native to this LLM’s chat UI. At one stage I also received text output like:

1.  User-related queries: These queries deal with fetching or managing data related to users.
    • User Load
    • Count: 2
2.  Role and Permission-related queries: These queries are concerned with roles and permissions.
    • Role Load
    • Permission Exists?
    • Count: 4

Consider what you’ll do next with the data and ask for what you need from the LLM.

Why an LLM was the best tool for the job

There are tools that could provide me with the same insight, without the non-deterministic LLM responses. Each has drawbacks of its own.

Database- or Rails-specific monitoring tools require setup investment that pays off over the lifetime of an application; it isn’t worth introducing them for one unique problem. There are also well-regarded tools in this class that could not have helped me with this specific case.

Alternatively I could use text-processing utilities to work directly with the text from my log files. These are easily installed, but require different expertise. Consider this command:

grep SELECT log/development.log | sed 's/\[[^]]*\]//g' | sort | uniq -c

This looks like a plausible approach to my problem (if you know sed, sort and uniq), but it miscounts: stripping the square-bracketed parts still leaves lines that differ in timings like (0.7ms) and in literal values, so identical query patterns land in separate groups. Debugging that might be easy for an expert GNU user, but I know I’m better off sticking to natural language.
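For the record, a deterministic version would first have to normalise away the per-line noise before counting. A rough Ruby sketch of what that means, where the normalisation rules are my guesses about this log format:

```ruby
# Normalise a query log line so that lines differing only in noise
# (timings, numeric literals, quoted strings) compare equal, then tally.
# The rules below are guesses at the default Rails log format.
def normalise(line)
  line.strip
      .gsub(/\(\d+\.\d+ms\)/, "(Nms)") # per-line timings
      .gsub(/\b\d+\b/, "N")            # numeric literals / bind values
      .gsub(/'[^']*'/, "'S'")          # string literals
end

def count_patterns(lines)
  lines.grep(/SELECT/).map { |l| normalise(l) }.tally
end
```

The point is not that this is hard to write, but that every normalisation rule is a judgement call I would otherwise have delegated to the LLM.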

For all the alternative tools I considered, the setup or expertise required and the inflexibility of their output fell short of what I could get from an LLM. Besides, LLM expertise carries over to this and many other problems, including those I listed at the top of this post; the alternatives require learning one tool (or command) per problem.

Conclusion

As a human, I’m not that good at dealing with more than a page of text, and I need a lot of help from nice formatting. Most of my software is excellent at dealing with lots of data (e.g. gigabytes of log files), but very prescriptive in how it can help. LLMs provide a nice middle ground and offer many uses for software development.