Our interview process at thoughtbot has had many iterations over the years, but the underlying themes have remained consistent:
- Avoid puzzles or challenges; we’re not trying to trick people.
- Avoid large “homework” assignments; we work at a sustainable pace and want to reflect that in our interview process.
- Avoid approaches that deviate from how we operate day-to-day with our clients and team; we want to replicate what it’s like to work together as much as possible.
We’re always looking for ways to improve our processes. When something feels “off”, each teammate is encouraged to identify what they’re feeling and create an issue in GitHub to open up discussion to the entire company.
Earlier this year, we set out to address some of the pains we were having when performing technical interviews.
I’ll touch on how we were doing things before, outline the pain points, and discuss our attempts to improve the process.
The previous version of our technical interview contained four portions during the hour-long conversation:
- Warm-up question
- Standard Rails questions
- General technical questions
- Code sample discussion
Our warm-up question was intended to “break the ice” and give the interviewer and the candidate a chance to become comfortable talking to each other. The question we used, however, was somewhat ambiguous, and the breadth of possible responses was so large that it was hard for the interviewee to know whether they had answered it well.
This portion was about five minutes.
The standard Rails questions were fairly linear and layered in complexity over time. They allowed candidates to shine, especially those familiar with Ruby and Rails, and made it straightforward for teammates running interviews to come away with an understanding of each candidate’s knowledge.
This portion was about 20 minutes.
The general technical questions, by contrast, were unguided: interviewers deviated from any outline and asked questions of their choosing. This portion was also about 20 minutes.
With the general technical questions addressed, the interviewer and candidate moved on to discussing a code sample the candidate provided. Sometimes, candidates had little to share due to NDAs or only working on internal projects. Other candidates who were involved in open-source or had side-projects were able to share code.
When candidates shared code, we would ask questions around the reasoning behind a particular decision as a conversation-starter and discuss ideas for refactoring or different approaches.
This portion was about 15 minutes.
There were a number of less-than-ideal areas of the technical interview, and while we were able to bring candidates through the process, there were opportunities for improvement. We started by identifying what felt “off”:
- The single warm-up question was ambiguous and hard for the candidate to feel good about from the get-go. One goal throughout the process is to avoid trick questions or things that aren’t realistic in our day-to-day interactions.
- Deviating from an outline and allowing interviewers to ask arbitrary questions led to variance in responses. It also theoretically opened up the opportunity for bias - either for or against the candidate - by asking questions we knew candidates were more comfortable with in order to elevate the recommendation.
- Asking candidates for code samples, and the resulting conversations, put our candidates in situations where they sometimes felt they had to defend decisions, which was not our intent. It also put the onus on candidates to provide material when they might not have code to be able to do so. Code submitted varied wildly and prevented us from evaluating candidates in a manner that could be compared.
- At a higher level, the entirety of the technical interview was focused on implementation details; however, software development and consulting are a blend of technical know-how and communication skills, as we work day-to-day alongside our clients. We weren’t capturing any information on soft skills or communication.
Given these pain points, there were a handful of changes we made to improve the experience.
Instead of a single warm-up question that received vastly different levels of responses, we introduced two more questions very specific to Rails, but also hopefully easily answered by someone who’s worked with Rails for a handful of months. We also homed in on the third question to clarify a couple of aspects in an attempt to reduce variance in responses.
With this, it’s easier for candidates to feel very early on like things are going well and get them more comfortable.
This portion still takes about five minutes.
Instead of splitting the time between a guided interview and an unguided interview, we shifted so that the majority of the time spent with the candidate - which now totals about 90 minutes - is driven by an outlined set of questions. This greatly reduces variance, ensures the interviewer touches on all the areas we’re looking to assess, and vastly levels the playing field across candidates.
Additionally, we interleave questions about approach, process, and communication with more technical implementation questions. This allows us to better gauge how candidates approach communicating with clients, which was overlooked in the previous interview.
Doing so limits the possibility of bias being introduced in the technical interview process and allows us to assess the abilities of each interviewee evenly.
This portion takes about 75 minutes.
Instead of asking candidates for code samples, we ask for their GitHub username, which we use to generate a per-candidate repository containing a single pull request submitted by the interviewer. Each candidate reviews the exact same set of changes.
With the pull request in place, we provide context to the interviewer about the history and discussion behind the change, the domain models and their meaning, and context around the hypothetical teammate submitting the pull request. With that knowledge, we ask the candidate to provide both tactical and high-level feedback about the pull request, taking into account the experience level of the person submitting the code for review.
This shifts the focus from us discussing their code - and requiring they have code to discuss in the first place - to them discussing code we’ve submitted. This helps to reduce any amount of bias (either for those who can provide significant samples, or against those who cannot) and allows each candidate to focus on what’s important to them in the code review. Another benefit is that, because each candidate reviews the same set of changes, we can be clear about our expectations and what we’re looking for during the review.
This portion takes about 20 minutes.
With this iteration in place, we’re now rolling the new format out across all of our Rails interviews. Next, we’ll introduce a rubric for capturing feedback from the interview so we can assess candidates of varying skill levels consistently.
We’re also looking to introduce blind applications to improve the non-technical interview.
While I wouldn’t consider our technical interview “fixed”, I feel confident it’s in a much-improved state relative to our previous iteration.
Finally, if this sounds like an interview you’d like to participate in, apply to work at thoughtbot!