“If you’re so smart, why ain’t you rich?”
— an ancient retort
What’s a better way to hire? I don’t know. What I see are design bugs standing in the way of finding out. Actually getting good at something requires practice, measurement, and a chance to learn from failure. Practice alone doesn’t work if there’s no way to tell when you’re doing it wrong. Measurement without learning from negative feedback is worse than useless because it only reconfirms whatever you’re doing. In the current regime the interview process is, by definition, always right. It gets stuck on a local maximum, with little ability to improve.
Most of my career has been focused on performance work, making systems more efficient and accurate. People tend to think that means writing fast code, but it’s mostly just careful measurement and actual empiricism, i.e. writing down your assumptions and predictions beforehand, making decisions in the open, and being ready to admit when you are wrong.
People are not computers, but hiring is (or should be) a kind of filter that constantly learns from both positive and negative feedback.
For instance, you know how many people passed the interview bar but later had to be fired. That’s your “false positive” rate. Startups obsess over that numeric boogeyman. But what about people you passed on who ended up doing well? Why not track the false negatives too? Surely you can learn something from them.
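To make the filter framing concrete, here is a minimal sketch (the field names are hypothetical, and “succeeded” is whatever outcome signal you can actually observe) of treating hiring as a binary classifier and measuring both error rates instead of just one:

```python
def error_rates(candidates):
    """Each candidate is a dict with 'hired' (passed your filter?) and
    'succeeded' (worked out, whether at your company or elsewhere?)."""
    false_pos = sum(1 for c in candidates if c["hired"] and not c["succeeded"])
    false_neg = sum(1 for c in candidates if not c["hired"] and c["succeeded"])
    hired = sum(1 for c in candidates if c["hired"])
    passed_on = len(candidates) - hired
    return {
        # Bad hires as a fraction of everyone you hired.
        "false_positive_rate": false_pos / hired if hired else 0.0,
        # People who did well elsewhere as a fraction of everyone you rejected.
        "false_negative_rate": false_neg / passed_on if passed_on else 0.0,
    }
```

The point of the second number is that it’s invisible by default: you have to go out and collect it.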
Anyone who’s actually founded a startup is probably rolling on the floor. No one has time for that! There’s funding to land, new candidates to source, offices to rent, equipment to buy, media to socialize, investors to tickle, conferences to attend, code to write, vendors to evaluate, food to serve, blogs to post, customers to chase, and 100 other things they’ve got no idea how to even plan for.
You don’t start a company to be nice. Every startup is like a raft going over a waterfall. You had better make sure the crew gets along and can make decisions quickly. If not, they’re dead. They are under a level of stress that those who haven’t done it may not fully appreciate.
Venture capitalists are the ones handing out the rafts. All these crazy bastards are standing in line asking them for one. The raft-keeper has a few minutes to decide whether they are the right type of crazy, resembling the ones who made it out the other side. He gives what advice he can: be extremely conservative in the choice of raft-mates. They are already taking so much risk. Team failure is a leading cause of death.
False positives are more obviously damaging to a small company due to the time and effort they can soak up, and they are straightforward to measure. Sure, it doesn’t scale. The early days are all about doing things that don’t scale. If we’re going to change The Culture we have to recognize that it’s not obviously broken to founders under the gun. It seems to work. Some day, sure, maybe the company needs to gear-shift to a more grown-up process.
The startups that make it past the waterfall are usually the ones who took investor advice to extremes. After the waterfall is a big long river, and then an ocean. They set sail with a badly flawed worldview.
There’s nothing quite as permanent as a temporary fix. Some day never comes. The people who join later see only the process as it exists, and doubts are countered with a frantic work pace and the conviction that this time, it’s different. It’s the investor’s advice reflected in a funhouse mirror. Because the initial assumptions were never reviewed, fear of the False Positive Monster looms over them long after one bad hire could possibly be an existential threat. There’s a built-in aversion to both risk and being wrong. The more sensitive ones are aware there’s a problem somewhere but feel powerless against the inertia created by survivor bias.
“We did not work hard enough to understand our people as an ecosystem, a cycle that could be virtuous or broken… We didn’t take developing people seriously because we weren’t developed; we were an ‘entrepreneurial culture’…
I am glad some people I hired were very happy and successful. I feel extremely guilty that it was pure luck.”
— former startup hiring manager
False positives are more obviously damaging, but not necessarily worse than false negatives. The danger of institutionalizing bias is that it launches your personnel department with great force in the wrong direction. You optimize what you measure. If you’re not learning, that part of the company is essentially dead, no matter how much lip service you pay to the importance of people.
When pressed, many investors and founders would agree with that line of reasoning. But they still plead special circumstances for the early stages. One VC wrote to me about the lecture he gives to companies after they ship their first product:
We were right to focus on a homogeneous team early on. Ideally people who had worked together before. The risks in the early stages of a company are getting the product out and having it meet the market. That required tons of creativity and lots of leaps of faith. We wanted people who trusted each other and had each other’s backs. But the product is out. Now it’s less about the team and more about the company and the product and the customers and the market. We have a company that will operate globally and we’ll have people on duty, somewhere, 24 hours a day and 365 days of the year. In the beginning we relied on monomaniacal focus. Now we need to turn to adaptability.
This is an explicit, unapologetic preference for homogeneity, a kind of anti-diversity. VCs assert that this is required in the early life of a company. Their core argument is the logic of survival: without extreme anti-diversity there would be no successful company to criticize. Survival justifies everything: the sexism, the ageism, the callous HR practices, the disregard for the law, and the discrimination against anyone who doesn’t look like the founders—who just happen to look like the VCs themselves. All of that and more is acceptable collateral damage.
This is a coherent argument. You don’t have to like it, but there’s no obvious logical flaw. So now it’s time to take a deep breath, let it out, and seriously ask the question that intellectual honesty requires us to ask: could it be true?
What’s good for the goose…
I know of no study on the diversity of early startups that can answer this question. Representative data would be pretty hard to gather. And it’s not as though you can do repeated experiments. Every startup is a freak accident.
However, there is a group of people who run repeated startup trials and for whom we do have solid diversity data, painstakingly gathered by Harvard Business School: venture capitalists. The evidence suggests that anti-diversity in investors is a gigantic disadvantage:
“The probability of success decreased by 17% if two co-investors had previously worked at the same company—even if they hadn’t worked there at the same time. In cases where investors had attended the same undergraduate school, the success rate dropped by 19%. And, overall, investors who were members of the same ethnic minority were 20% less successful than investors with different ethnic backgrounds.
…the lack of success among similar investors seemed to lie in the decisions that followed the investment.
In addition to granting cash, venture capitalists are heavily involved in hiring or firing the CEO of the portfolio company, choosing a board of directors, devising an overall strategy, identifying potential partners, and so on. Indeed, the researchers found that the negative affinity effect was strongest in early-stage deals, which generally require more input from investors than do later-stage deals.”
Yes, you read it right. Homogeneous cliques of investors provide worse returns (and worse advice) than diverse groups, especially in the early stages. As in the exact opposite of the advice they give. As in double-digit worse by the only metric that matters to them. Confronted with this evidence, any VC with a clue should be wondering whether their entire investment thesis is founded on horseshit. At the very least they had better start looking for a different excuse for the behavior they promote.
Presuming to solve a fundamental industry problem with a blog post sounds foolish, and it probably is. But the history of spam filtering provides an interesting example to follow.
There was a period of time when spam genuinely threatened to make the global email system useless. It was a disease against which there was no effective defense. The hours wasted cleaning it up were matched by the number of harebrained proposals to eliminate it, from email taxes to completely new protocols to bounties placed on spammers’ heads. The problem looked impossible. It was just the way things were. The discussions became so repetitive that a sarcastic form letter was created to save time. It listed dozens of phrases the responder could choose from to explain why the author’s ideas were tired and inane:
Congratulations! Your post advocates a (a) technical (b) legislative (c) market-based (d) vigilante approach to fighting spam. Your idea will not work. Here is why it won’t work…
Nothing as entrenched as the funding and startup network will change overnight. The problem looks impossible. But we can do better, and we should try. We already do better than the norm in areas like gay rights. I believe that we can make our little patch of the world a smarter, humbler, more inclusive, and more productive place.
To the surprise of nearly everyone, the method that finally cracked the problem of spam actually was first proposed in a blog post. Bayesian filtering worked because it had no dependencies on the larger system and it improved over time with two complementary feedback loops, one positive and one negative. It was something an individual could implement for their own benefit. Later elaborations followed: email providers sharing spam models across accounts, post-hoc filtering, conflation, and so on. Starting from a single nugget, the idea of layers of self-adjusting feedback loops grew until we created a spam-filtering infrastructure which is pretty good at eliminating both false positives and negatives.
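The mechanics of those two feedback loops are worth seeing. Here is a toy sketch, not the original algorithm, just the idea in miniature: every message the user confirms as spam or as good mail updates the model, so the filter improves from both kinds of mistakes.

```python
from collections import Counter
import math

class BayesFilter:
    """Toy word-level Bayesian spam filter with two feedback loops."""

    def __init__(self):
        self.spam = Counter()   # word counts from messages marked spam
        self.ham = Counter()    # word counts from messages marked good
        self.nspam = self.nham = 0

    def train(self, words, is_spam):
        # Both loops matter: a missed spam and a wrongly flagged good
        # message each push the model in the correcting direction.
        (self.spam if is_spam else self.ham).update(words)
        if is_spam:
            self.nspam += 1
        else:
            self.nham += 1

    def spam_score(self, words):
        # Log-odds with add-one smoothing; positive means "likely spam".
        score = math.log((self.nspam + 1) / (self.nham + 1))
        spam_total = sum(self.spam.values())
        ham_total = sum(self.ham.values())
        for w in set(words):
            p_spam = (self.spam[w] + 1) / (spam_total + 2)
            p_ham = (self.ham[w] + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score
```

Notice that nothing here depends on the rest of the email system, which is exactly why an individual could deploy it unilaterally.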
So what can you do as an individual or company to improve the people filter? Remove sources of error and forgetfulness and create more feedback loops.
Avoid hindsight bias by registering your predictions ahead of time. While you read a résumé, jot down what qualities you think the candidate has. Write down, right then, whether you expect to hire them, and what would cause you to change your mind either way. This is a great way to prep for the things you want to test in the interview, and it forces you to be explicit about the qualities you care about.
Speaking of which, read the fucking résumé. Like, for more than two minutes. I can’t believe I have to point that out, but here we are.
During the interview, operate from the principle of hospitality. These are competent professionals taking time out of their day to do free make-work for your entertainment. Conducting an interview is like being a talk-show host. Your job is to keep them relaxed and on subject, and allow them to show their smarts. Acting like an irritable dungeon master is not just childish, it’s a waste of everybody’s time. Kobayashi Maru only works reliably in science fiction.
Practice listening to other modes of thinking. There are few things more frustrating than watching a visual thinker and a verbal thinker talk right past each other and come to the mutual conclusion that the other person is an idiot.
Practice listening to other accents. So many smart people face a barrier communicating complex ideas outside their native language. I used to have a bad case of Gringo Ear (and still do, when I’m tired). I initially wrote off a former colleague because he got his points across in a long, halting way. Another colleague disagreed. “English is his third-best language, and you aren’t making enough effort hearing him. He and I communicate just fine. In French.” Before you get all indignant, try it. It’s the most efficient way to increase the number of smart people you can talk to in life, and what the hell is wrong with that? Foreign movies are a good way to train up.
If that doesn’t work, try communicating in writing. I was surprised (and embarrassed) to discover that that same person writes beautiful, coherent prose.
Single-blind as much as possible. Ask your friendly neighborhood recruiter to remove the names and other identifying info from résumés before you read them. I’ve seen interviewers thrown off their stride because they assumed the candidate’s gender and were wrong. It’s hard to overstate how badly people are biased by names. And of course, don’t ask how a candidate did on other interviews before writing down your own opinion.
More generally, confirmation bias and stereotype threat are real. If you expect women to be less technical or intelligent, that’s what you’ll see. One of my most embarrassing moments was at a conference with a couple dozen colleagues from Facebook. I had been chatting with one of them for a few minutes about recruiting when I asked her what part of the recruiting department she worked in. “Oh, no,” she said. “I’m an engineer. I work on [important software that I used daily]. I’ve been at Facebook about six years.” 98% of people have strong, unconscious biases. The other 2% are lying.
Study false negatives, the people you turn down. Not just the outliers, like the guy who was rejected by Facebook, then started his own company which was acquired by Facebook for billions of dollars. Also track the gal who was rejected for “culture fit” and went on to a great career. You have the technology to keep tabs on candidates; they are the same tools you use to find them in the first place. A company that doesn’t actively learn from the ones who get away has no right to whine about how hard it is to find good people.
Another good place to add feedback loops is in the conduct of the interviewers. Doing it well is a hard skill. Pair an experienced interviewer with a trainee and have them trade off watching and learning.
Enforce minimum standards of feedback. The feedback should be written immediately afterward and before looking at any other feedback. Every interviewer report should have a précis of the questions asked and answered, a verdict, and specific things that would cause them to change their verdict. For example: “I don’t think Fulana de Tal is experienced enough, but if she does well on the other coding interviews I’ll vote yes.” If these minimum standards aren’t met, then the feedback should be excluded from the final vote and the interviewer told to step up their game. Remember: practice, measurement, and a chance to learn from failure.
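You can even enforce that standard mechanically. A sketch (the field names are hypothetical, shaped to match the standard above) of a check that gates feedback out of the final vote:

```python
# The three things every interviewer report must contain, per the
# minimum standard: what was asked, a verdict, and what would change it.
REQUIRED_FIELDS = ("questions_asked", "verdict", "would_change_mind")

def meets_standard(report):
    """True only if every required field is present and non-empty."""
    return all(str(report.get(field, "")).strip() for field in REQUIRED_FIELDS)

def countable_votes(reports):
    """Exclude incomplete feedback from the final vote."""
    return [r for r in reports if meets_standard(r)]
```

A rejected report isn’t punishment; it’s the negative feedback loop applied to the interviewers themselves.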
There’s a lot more to learn about creating an industry we can be proud of. First we have to learn how to learn.