One of the most important problems in technology is hiring qualified engineers, and yet our industry is terrible at it.
Years from now, we’ll look back at the 2015 developer interview as an anachronism, akin to hiring an orchestra cellist with a personality test and a quiz about music theory rather than a blind audition.
Successful interviewing demands a basket of skills that doesn’t correlate with job performance. The world is full of people who can speak expertly about programming, but can’t effectively code, while the majority of people who can code can’t do it well in an interview. Our hiring process is systematically mispricing candidates—and employers could profit from correcting this problem.
Job interviews are hostile experiences.
For many, I wonder if job interviews might be among the most hostile experiences in all of life. In no other normal experience will a group of people be impaneled to assess—adversarially!—one’s worthiness with respect to their life’s work. Don’t forget: even in the best circumstances, interviews must say “no” more often than “yes”.
I remember interviewing one of our best hires. We’d already done things to mitigate unreliable interviews. He knew that an in-person interview meant we liked him and that there was a good chance we’d make an offer. Walking into the conference room to meet him, I saw him physically shaking. You didn’t need to be a psychologist to detect the nervous energy; it radiated visibly, like in an R. Crumb cartoon.
Engineering teams are not infantry squads; members aren’t selected for their ability to perform under unnatural stress. But that’s what most interview processes demand, and often—as is the case with every interview that assesses “confidence”—explicitly so.
Confidence bias selects for candidates who are good at interviewing. There are developers who have the social skills to actively listen to someone else’s technical points, to guide a discussion with questions of their own, or to spot opportunities to redirect a tough question back to familiar territory. Those people build impressive resumes. They effortlessly pass “culture fit” tests. But a lot of them can’t code.
Confidence bias also excludes candidates who don’t interview well. For every genuinely competent and effective developer who can ace a tough interview, many other genuinely competent developers can’t. Our interviews select for the ability to control a conversation. They miss the ability to fix broken network software.
Let’s contain the damage. There are things you can do to make your hiring process better. I’ve deployed the tactics below, seen them work, and believe they should be industry standard. They aren’t yet. Adopt them and profit.
Warm up your candidates
The first experience candidates have with hiring processes is often an adversarial tech-out phone screen.
This creates a pointless handicap. Candidates must start running a gauntlet without knowing what to expect, and not knowing what to expect makes them nervous, so they underperform.
My firm instead had the “first-call” system. Every serious applicant got 30-45 minutes of director-level time on the phone before any screening began. We’d chat about the candidate’s background, then hold an open-ended Q&A about the role. Finally, and most importantly, we explained our hiring process in detail.
My firm’s work (software security) was difficult and specialized. We assumed that resumes couldn’t predict a candidate’s performance. So on the first-call, we’d gingerly ask the candidate some technical questions to find out how acquainted they were with our field. Many weren’t, at all.
Those candidates got a study guide, free books, and an open invitation to proceed with the process whenever they were ready. The $80 in books we sent candidates had one of the best ROIs of any investment we made anywhere in the business. Some of our best hires couldn’t have happened without us bringing the candidate up to speed, first.
Build work sample tests
Instead of asking candidates questions about the kind of work they’d do, have them actually do the work. But be careful! Unlike a trial period, work sample tests have all three of these characteristics:
- They mirror the actual work a candidate will be called on to perform in their job.
- They’re standardized, so that every candidate faces the same test.
- They generate data and a grade, not a simple pass/fail result.
Some development teams try to accomplish this in a trial period where candidates are paid to fix random bugs in the issue-tracker. This doesn’t work well. The goal is to collect data and use it for apples-to-apples comparisons. Every candidate must work on the same problems.
Here’s one work sample test we used: we built a rudimentary electronic trading system with a complicated, custom web interface. Then we had candidates find flaws in it.
Candidates needed to program to finish this challenge. They needed insight about how the system worked. They had to be comfortable diving into a piece of technology they’d never seen before. They needed to put all those attributes together and use them to kill a trading system.
Our effort here amounted to a few hundred lines of code, written in a few hours. But it better predicted our hiring success than any interview we’ve ever done.
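The post doesn’t describe the actual flaws, but to make the idea concrete, here’s a hypothetical sketch of the kind of deliberately planted bug such a test might contain; the account data, function, and exploit are all invented for illustration:

```python
# Hypothetical sketch of a planted flaw in a work-sample trading system.
# A candidate who reads the interface carefully can find and exploit it.

ACCOUNTS = {"alice": 1000.0}   # account balances
BOOK = {"ACME": 50.0}          # current market price per share

def place_order(user, symbol, qty, price):
    """Fill a buy order.

    Planted flaw: the fill trusts the client-supplied price instead of
    the book price, so sending price=0 buys shares for free.
    """
    cost = qty * price  # BUG: should be qty * BOOK[symbol]
    if ACCOUNTS[user] < cost:
        raise ValueError("insufficient funds")
    ACCOUNTS[user] -= cost
    return {"filled": qty, "symbol": symbol, "cost": cost}

# The exploit a strong candidate discovers: 100 shares, zero cost.
order = place_order("alice", "ACME", qty=100, price=0.0)
```

Finding a bug like this demands exactly the attributes the test is after: reading an unfamiliar system, forming a model of how it should behave, and probing where it doesn’t.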
For the last several years, work samples have been my firm’s most important hiring decision factor. We have relied on work samples almost completely, and in doing so, we have multiplied the size of our team and retained every single person we hired.
Here’s what we learned from it:
- Don’t “fast-path” an elite candidate. Making an elite candidate take a work sample test forces you to make good tests, but more importantly, it collects extremely important data: “here’s what our test says about the candidate we’re sure we want to hire.”
- Collect objective facts. Things like “unit test coverage,” “algorithmic complexity,” and “corner cases handled,” are facts. You won’t necessarily use all the facts you collect, but err on the side of digesting as much data as you can.
- Have a scoring rubric ready before you decide on a candidate. If you’re scoring candidates differently, you’re missing the point. This feels confining and unnatural at first. So, err on the generous side with your scoring, and down-select with interviews. Iterate until your interviews almost feel pointless. We got there!
- Kill tests that don’t predict. You need multiple tests. Keep an eye on which ones you rely on consistently. In one case, we had candidates write a particular tool. That seemed like a great test, because we’d get a blob of code to read. But it turned out that we almost never learned anything from it. Candidates seemed to like writing the code, but we were never in a hurry to read it. We stopped doing that test.
- Prep candidates. Avoid “gotchas.” We let candidates do their work on their own time, from their own home, whenever they wanted to, and we provided tech support.
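To make “collect objective facts” and “have a scoring rubric ready” concrete, here’s a minimal hypothetical sketch; the fact names, weights, and numbers are all invented, not the firm’s actual rubric:

```python
# Hypothetical sketch: every candidate's work sample is reduced to the
# same objective facts and graded by the same fixed rubric, decided
# before anyone argues about hire/no-hire.

RUBRIC = {
    # fact name -> grading function over the candidate's fact sheet
    "flaws_found":       lambda f: min(f["flaws_found"], 5) * 4,
    "test_coverage_pct": lambda f: f["test_coverage_pct"] // 10,
    "corner_cases":      lambda f: f["corner_cases"] * 2,
}

def score(facts):
    """Apply the identical rubric to every candidate's fact sheet."""
    return {name: grade(facts) for name, grade in RUBRIC.items()}

candidate = {"flaws_found": 3, "test_coverage_pct": 80, "corner_cases": 4}
result = score(candidate)
# result -> {'flaws_found': 12, 'test_coverage_pct': 8, 'corner_cases': 8}
```

Because the rubric is data, it can be versioned, audited after the fact, and rechecked against the tests you end up trusting or killing.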
Standardize and discount interviews
Want to make a team of software professionals hate you? Require them to interview from a script.
When you think about it, making a hiring decision is one of the most empowering things a developer gets to do. It’s the closest they get to controlling the company’s destiny. No surprise then that they get attached to their favorite questions, and to the discretion they’re given by the process.
They’ll get over it.
You need every candidate to get the same interview. You need to collect data. You can’t make that happen if your team improvises when they interview candidates.
My firm designed three face-to-face interviews. Each took the form of an exercise, and each produced a list of facts. We made room at the end of each interview for free-form Q&A, but for most of it, the interviewer recited a script, answered questions as the candidate drew things on a whiteboard, and wrote down results.
Interviewers hated it! But we kept at it, and eventually found that we were becoming more comfortable with why we were making hire/no-hire decisions, once we had facts to look at. We could look at the results of an interview and compare them to the same data generated by successful team members.
More often than not, we found that the untrustworthy source was the interviewer.
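Comparing interview results to the same data generated by successful team members can be as simple as measuring distance from a team baseline. A hypothetical sketch, with invented fact names and numbers:

```python
# Hypothetical sketch: the same interview facts, recorded for current
# team members, give a baseline to compare candidates against instead
# of gut feel. All names and values here are illustrative.
import statistics

# Facts previously recorded when team members ran the same exercise.
TEAM_BASELINE = {
    "subproblems_solved": [4, 5, 3, 5],
    "hints_needed": [2, 1, 3, 2],
}

def compare_to_team(candidate_facts):
    """Return the candidate's distance from the team mean on each fact."""
    return {
        fact: round(candidate_facts[fact] - statistics.mean(values), 2)
        for fact, values in TEAM_BASELINE.items()
    }

report = compare_to_team({"subproblems_solved": 4, "hints_needed": 1})
```

A report like this shifts the argument from “I liked her” to “she solved about as many subproblems as our current team, with fewer hints.”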
You should also consider eliminating phone screening. We didn’t, but we did the next best thing: I simply began disregarding all but the most notably bad phone screen results. Candidates could get dropped from our process by being a jerk on a phone screen, but there was little else you could do to fail.
That’s because I asked myself, “When would I be comfortable denying a candidate the opportunity to further demonstrate their ability based on the outcome of a phone call?”
The answer was “Virtually never”.
We all seem to understand the fact that interviews suck, but not its implications. Tech is chock-a-block with advice from employers to candidates on how best to navigate scary interviews. That’s all well and good, but it’s the hiring teams that pay the steepest price for poor, biased, and unreliable selection. It can’t be the candidate’s job to handle irrational processes. Unless you’re playing to lose.
So ask yourself some questions about your hiring process.
- Is it consistent? Does every candidate get the same interview?
- Does it correct for hostility and pressure? Does it factor in “confidence” and “passion”? If so, are you sure you want to do that?
- Does it generate data beyond hire/no-hire? How are you collecting and comparing that data? Are you learning both from your successes and failures?
- Does the process make candidates demonstrate the work you do? Imagine your candidate is a “natural” who has never done the work, but has a preternatural aptitude for it. Could you hire that person, or would you miss them because they have the wrong kind of GitHub profile?
- Are candidates fully prepped? Does the onus for showing the candidate in their best light fall on you, or the candidate?
We all compete in the hottest market for talent we’ve ever seen. The best teams will come up with good answers for these questions, and use those answers to beat the market.