Using a design sprint to find focus for an AI solution

Alex Chen and Madeline Jimenez

As the AI/ML industry continues to grow, thoughtbot has turned its gaze inward to evaluate our own processes. In particular, we recently came to scrutinize the product design sprint and how it may evolve with the new tools available to us. For two weeks, a cross-functional team of thoughtbotters ran an experimental sprint for incorporating AI/ML into a thoughtbot workflow: our internal scheduling tool, Hub. Our goal was two-fold: 1) to learn more about the opportunities and use cases for AI/ML, and 2) to review whether the design sprint process still holds up within the context of an AI/ML product request.

Traditional Problem Solving

In order to understand the nuances of a product design sprint for an AI solution, let’s take a look at how a design sprint works for traditional solutions. We assemble a diverse team of stakeholders, including designers, developers, and key decision-makers, who enter a structured process, typically a five-day framework. Through various group and solo activities, the team generates a large volume of information about the issue at hand, coalesces that information into a focused problem area, ideates on product solutions for that area, and then converges on a combined product solution to be tested through prototyping. Frequent and consistent communication and collaboration within the team ensure alignment and allow for informed decisions throughout the sprint.

Problem Solving with AI

In a product design sprint where artificial intelligence (AI) and machine learning (ML) are potential implementation options, the process may involve additional considerations. Initially, the team should assess whether AI/ML is a suitable solution for the identified problem or whether a more traditional approach is better. The sprint may include activities such as data exploration and analysis to understand the feasibility of leveraging AI/ML, defining the data requirements, and determining the potential impact on user experience. Prototyping and testing phases may also involve validating the AI/ML components, ensuring they align with user needs, and refining the models based on user feedback. Integrating AI/ML into the product design sprint requires a specialized skill set and may involve a more iterative and experimental approach to optimize and fine-tune the intelligent components.

Learnings from a thoughtbot AI/ML Design Sprint

A diagram of the steps in a standard design sprint compared to the steps we took in the AI/ML design sprint.

We began our sprint as usual and had overall success following the design sprint guidelines and exercises, with the exception of rearranging the usual order of operations a few times. thoughtbot CTO Joe Ferris acted as our main expert, as both a user and an AI/ML developer. With the valuable insights we gained from our conversations with Joe, we were able to create How Might We’s and identify Scheduling/Staffing in Hub as the central area we would focus on.

Having found the focus of our efforts, we split up and began looking at examples of AI products that inspired us. However, once we reached the point where we would normally diverge in the sprint, we found that we were not comfortable ideating on UX/UI because we weren’t yet sure what user inputs the AI would require, or exactly what it would return. It seemed like a better use of our time to do some “AI sketching”, so we started on a technical spike.

We decided to see if we could get a model like ChatGPT to return useful results within the context we were looking for. ChatGPT also proved very useful for generating dummy data that we could feed back into our own prompts. Once we had some useful prompts and results from ChatGPT, we focused on finding other resources that could be useful for our central idea of scheduling. In particular, Relevance AI is Zapier-like software that provides a low-code way to create automated AI tools. It can reference uploaded data, ranging from CSVs to full Figma files, and create custom prompts to ask the LLM.
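To make the templating idea concrete, here is a minimal Python sketch of how a Relevance-style prompt template might be filled with uploaded context before being sent to an LLM. The template syntax mirrors the example below, but the substitution function and variable values are our own illustrative assumptions, not Relevance AI’s actual API:

```python
import re

def fill_template(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders with values, Relevance-style.

    Unknown placeholders raise an error so missing context is caught
    before the prompt ever reaches the model.
    """
    def replace(match):
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"Missing template variable: {key}")
        return str(variables[key])
    return re.sub(r"\{\{([\w.-]+)\}\}", replace, template)

# Hypothetical scheduling prompt modeled on the example below.
template = (
    'CONTEXT: """{{knowledge}}"""\n'
    "Based on {{user-input-project-description}}, assign an appropriate "
    "number of team members from {{knowledge.upcoming-availability-dummy}} "
    "to this project."
)

prompt = fill_template(template, {
    "knowledge": "Rails project, 4 weeks, starting 11/20/23",
    "user-input-project-description": "a Rails rescue engagement",
    "knowledge.upcoming-availability-dummy": "Noor, Aiden, Trystan",
})
```

The fully substituted `prompt` string is what actually gets sent to the LLM; keeping substitution separate from the model call makes the prompts easy to inspect and reuse across projects.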

Example Relevance AI Prompt

CONTEXT: """{{knowledge}}"""
Based on {{user-input-project-description}}, assign an appropriate number of team members from {{knowledge.upcoming-availability-dummy}} to this project.

Example Result

To assign an appropriate number of team members from the given list to the project, we need to consider the following criteria:

  • The team members should have skills in Ruby/Rails development, as it is required for the project.
  • The team members should be available starting from 11/20/23 for a duration of 4 weeks.

Based on these criteria, we can select the following team members:

  • Noor Landry: Available starting from 12/4, Ruby/Rails skills, and not available to travel. Although Noor is not available from the required starting date, we can consider adding them to the team if the project timeline allows for a later start date.
  • Aiden Bloggs: Available starting from 11/27, Ruby/Rails and Docker skills, and not available to travel. Aiden meets the required starting date and has the necessary skills, so they can be assigned to the project.
  • Trystan Craig: Available starting from 12/4, Ruby/Rails, React Web, Docker, React Native, Hotwire, and GraphQL skills, and not available to travel. Trystan meets the required starting date and has a wide range of relevant skills, making them a suitable addition to the team.

Therefore, we can assign Aiden Bloggs and Trystan Craig from the given list to the project as Rails developers for the required duration starting from 11/20/23.

Challenges & Risks

  • Problem statement process — we started with a solution (AI) and were looking for a problem to solve using it.
  • We hit many paywalls during the research/diverge phase
    • Notion only allows 20 queries to its LLM before hitting a paywall
    • ChatGPT requires a premium plan in order to upload or reference an external document
  • AI doesn’t work the same way that software does. It’s not conditional in the same way. It was a challenge to rewire the way we thought about the information that users would need to provide.
  • There’s a bit of a danger with how quickly a solution looks “good” but is in actuality unhelpful.
  • AI might be faster to market with a new solution than writing custom software
    • AI might be justified strictly on speed (even if we could solve the problem with code)
  • The ongoing cost of querying AI with custom prompts for every project, and of maintaining those prompts, is still hard to estimate

In conclusion

AI and LLMs are rapidly gaining traction and interest. Many clients may come to us with an abstract desire to incorporate AI into their products without concrete direction for how that inclusion will benefit their end users. The challenges and risks that we uncovered in this particular design sprint mirror the kinds of issues that we encounter in those situations.

We chose to deviate from the traditional design sprint in this instance because of the complexity of the field and the nearly unlimited potential for these new technologies. The intermediate technical spike on feasibility may become standard in AI sprints in order to gain a collective understanding of the latest capabilities of LLM tools. However, we believe that the spirit of design sprints remains the same.

In ambiguous situations where solutions are creeping ahead of the research and understanding, it is our duty as consultants and strategists to take a step back, assess our assumptions, and continue to run product design sprints methodically. AI/ML should be seen as one potential tool to serve users, not necessarily the right one.