Reducing the Impact of Unconscious Bias in Our Hiring Process

Chad Pytel and Anna Miragliuolo

It’s all too easy for hiring to be impacted by unconscious bias. Studies have shown that factors such as names can greatly influence whether some people even get an interview. Studies have also shown that bias can impact later stages of the interview process, often when subjective factors that aren’t actually important to success in the role are unconsciously, or even intentionally, evaluated.

At thoughtbot we believe that diverse teams build better products. We believe that we’re most fulfilled when we have an inclusive environment where we can thrive professionally and personally, and we can bring our whole selves to work.

While pulling together documentation of our interviewing process, we realized that our initial screening of candidates could be impacted by unconscious or implicit bias. It was also clear that later stages could be affected when candidates were evaluated on subjective criteria that weren’t relevant to success in the role.

Over the last year, we’ve worked to reduce the impact of bias in our screening and interviewing. We’ve recently updated our Handbook hiring section to reflect these changes, and I also wanted to share them here.

If you would like some background on bias and its effects on hiring, as well as the scientific studies that have been done around it, here is a decent summary article. There have also been many good books written on this topic. If you’re interested in exploring further, I can recommend Blindspot: Hidden Biases of Good People.

After evaluating our entire hiring process through this lens, we’ve rolled out and iterated on the following improvements.

Anonymous initial screening

Previously, people who applied went into a queue for initial review, which looked at all of their information. We were already not showing a photo, but reviewers still saw the candidate’s name, educational background, full employment history, social profiles (like GitHub), cover letter and resume, and written responses to our application questions.

We receive over 3000 applications per year, and nearly every one of them was being quickly reviewed in this way by someone on the team.

This review was clearly susceptible to bias, so we implemented anonymous initial screening of every application. This process removes names, school information, names of prior employers, gender-identifying pronouns, and other identifying information from each application before it is reviewed.

Now, when someone reviews an application anonymously, the decision is based primarily on the candidate’s redacted answers to our application questions and their prior experience.
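
For illustration, here’s a minimal sketch of what that kind of redaction step can look like. The field names and patterns are hypothetical, not our actual implementation:

```python
import re

# Hypothetical redaction step: strip identifying details from an
# application before it reaches a reviewer.

REDACTED = "[REDACTED]"

# Gendered pronouns are replaced so writing style, not gender,
# is what the reviewer sees.
PRONOUN_PATTERN = re.compile(r"\b(he|him|his|she|her|hers)\b", re.IGNORECASE)

def redact_application(application: dict, known_terms: list[str]) -> dict:
    """Return a copy of the application with identifying info removed.

    known_terms holds strings we can redact mechanically: the
    candidate's name, school names, and prior employer names.
    """
    redacted = dict(application)

    # Drop fields that are identifying by definition.
    for field in ("name", "email", "social_profiles", "photo_url"):
        redacted.pop(field, None)

    # Scrub free-text answers of names, schools, employers, and pronouns.
    for field in ("cover_letter", "resume_text", "answers"):
        text = redacted.get(field, "")
        for term in filter(None, known_terms):
            text = re.sub(re.escape(term), REDACTED, text, flags=re.IGNORECASE)
        text = PRONOUN_PATTERN.sub(REDACTED, text)
        redacted[field] = text

    return redacted
```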

Hiding prior evaluations from subsequent interviewers

Before these improvements, the interviewer at each stage of the process would leave comments about how the interview went, and before, during, and after an interview they could see the comments left by earlier interviewers.

This allowed a comment, particularly one not actually relevant to success in the role, to influence the next interview.

To protect against this, we’ve changed it so that feedback left by reviewers at each stage of the process is not shared with people later in the process. Interviewers in later stages can be confident that the candidate made it to their stage because they met our objective requirements for the prior stages.
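
The underlying rule is simple: an interviewer sees only feedback from their own stage. A hypothetical sketch of that visibility filter:

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    stage: str          # e.g. "initial_screen", "technical_interview"
    interviewer: str
    notes: str

def visible_feedback(all_feedback: list[Feedback], current_stage: str) -> list[Feedback]:
    """Only feedback from the interviewer's own stage is visible;
    evaluations from earlier stages stay hidden."""
    return [f for f in all_feedback if f.stage == current_stage]
```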

This change gives each candidate a fairer shot at each stage, and it helps interviewers evaluate candidates more objectively because they aren’t unduly influenced by what someone may have said previously.

Clarified the criteria of each interview stage

In our prior process, we had written guides for each interview that often said what a successful interview would look like, but interviewers left free-form evaluations based on those interviews.

Free-form evaluations make it easier for an interviewer to put more weight on something they aren’t actually interviewing for at their stage or, even worse, to evaluate something that isn’t important for success in the role at all.

One effective strategy for reducing this effect is to have clear rubrics for what we are looking for in candidates and to ask for more objective ratings on each of those criteria. To build them, we re-evaluated the expectations and qualities that make someone successful in the role, in some cases overhauling the interview format to make sure the interview actually assessed those things, and extracted the criteria for the new rubrics.

The scale we use for most criteria is one that is natural to thoughtbot, because we use it across several internal processes in addition to hiring, like staffing and our levels for designers and developers:

  • Apprentice: Someone who would not be billable and would need to be on a client project with a Mentor.
  • Practician: Someone who would be billable but not a mentor to apprentices on client projects.
  • Mentor: Someone who is billable and can mentor apprentices on client projects.

These rubrics and ratings help interviewers more objectively evaluate candidates on the criteria we are looking for at their current interview stage.
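
As a sketch, a stage’s rubric rating might be represented like this (the criteria shown are hypothetical examples, not our actual rubric):

```python
from enum import Enum

class Rating(Enum):
    # The same scale we use for staffing and for designer/developer levels.
    APPRENTICE = 1   # not billable; needs a Mentor on client projects
    PRACTICIAN = 2   # billable, but not yet mentoring apprentices
    MENTOR = 3       # billable and able to mentor apprentices

# Hypothetical rubric for one interview stage: each criterion gets
# its own rating rather than one free-form impression.
evaluation = {
    "communication": Rating.PRACTICIAN,
    "code_review_skills": Rating.MENTOR,
    "consulting_skills": Rating.APPRENTICE,
}
```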

Compensation Transparency and Equity Review

Compensation should be based on the job you will be doing, not on your ability to negotiate. To ensure this, in the first interview we share the expected salary range for a position. This increases transparency, ensures more equitable pay, and saves everyone time if expectations are off.

Finally, for the less than one percent of candidates who successfully make it all the way through the interview process and are going to receive an offer from us, we want to make sure that the compensation they receive is fair and not impacted by bias. Before sending an offer, all salaries are reviewed for equity and approved by People Operations.

Impact and Iteration

We use Workable as our applicant tracking system, but it doesn’t have the features we need to do all of the above directly, and we were unable to find any off-the-shelf software that would. Fortunately, Workable has a robust API, so we’ve built our own tool that sits on top of it and is now the primary interface we use for anonymous screening and evaluations.
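
As a rough sketch of the kind of wrapper involved, assuming Workable’s v3 candidates endpoint and the Python requests library (the subdomain, token handling, and field stripping here are illustrative, not our actual tool):

```python
import requests

# Illustrative wrapper over Workable's API, not our actual tool.
# Assumes Workable's v3 endpoints and Bearer-token authentication.

WORKABLE_BASE = "https://example-subdomain.workable.com/spi/v3"

def fetch_candidate_for_anonymous_review(candidate_id: str, token: str) -> dict:
    """Fetch a candidate from Workable and strip identifying fields
    before the application is shown to a reviewer."""
    response = requests.get(
        f"{WORKABLE_BASE}/candidates/{candidate_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    candidate = response.json().get("candidate", {})

    # Strip identifying fields before display (simplified; a real
    # redaction pass would also scrub free-text answers).
    for field in ("name", "email", "phone", "social_profiles"):
        candidate.pop(field, None)
    return candidate
```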

I’m sure that biases, which we all have, can still impact our hiring process.

We have an optional equal employment opportunity survey at the end of our application process to collect anonymous demographic information about candidates. However, participation in the optional survey is low, making it difficult to actually measure the impact of these changes.

I’m proud of the progress we’ve made so far in reducing the impact of bias, and we continue to look for areas of further improvement in both measurement and impact.