I recently had coffee with a young professional who wanted advice about how to get a job in policy. Every morning, for months, he had been monitoring Capitol Hill’s version of the classified ad board for job opportunities, the Congressional placement office’s jobs bulletins. He had submitted dozens of applications, applying to jobs where he thought his skills closely matched or surpassed the requisite qualifications, only to receive no reply. “At times, I felt so frustrated that I wanted to give up the search altogether,” he told me.
Over the past few years, I have been studying the broader impact of algorithms on society, first as an academic in computer science, and later in my work as an advisor to the Obama White House. In considering my mentee’s situation, I could not help but think that there should have been a way to use modern technology to connect him with the opportunities he wanted.
Enter artificial intelligence.
Outside of Capitol Hill job boards, AI has already made the job search process more efficient in several key ways. For many years, LinkedIn has collected a vast store of some of the world’s most informative data, both on individual candidates and the jobs to which they might want to apply. It has used this rich data to proactively recommend job postings to job seekers. Similarly, Google recently announced an ambitious new product called Google for Jobs that will, in its initial form, scour the internet to gather data related to job postings and apply a machine learning system to present them to job hunters.
With a vast population of people looking for their next opportunity and a bevy of employers eager to hire them, I expect that this trend in recruiting algorithms will continue, and that artificial intelligence will become an increasingly important factor in how workers find jobs and employers find candidates. For instance, one could imagine that Google might, over time, craft its service into a robust engine that uses AI to recommend jobs posted to sites like Monster or CareerBuilder to the most fitting and qualified candidates. Reciprocally, employers could use the service to receive a fire hose of resumes from potentially deserving candidates. Myriad startups are also working to solve still more hiring problems with AI, whether by displacing headhunters or by communicating automatically with candidates.
The injection of AI into the recruiting industry is exciting for job seekers and firms alike. But it is precisely at this time, when many new players are exploring the tremendous opportunities at hand, that engineers and policymakers must be doubly careful to ensure that ethical standards are upheld in the development of AI-powered hiring technologies.
The starkest and most concerning issue is algorithmic discrimination, which can unwittingly be propagated through AI, particularly if its designers are not careful in how they select input data and how they craft the underlying algorithms.
We already know that humans can make biased decisions in hiring contexts. In one widely cited NBER experiment, recruiters reviewed resumes that featured both “white-sounding” and “black-sounding” names. Even though both groups were similarly credentialed on paper, the recruiters more often selected the candidates with white-sounding names. Now imagine that such a recruiting policy is encoded in a decision-making algorithm. That scenario is entirely feasible, particularly if the policy produces profitable results for the hiring client despite its implicit bias.
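To make the mechanism concrete, here is a minimal, hypothetical sketch of how a system trained on past human screening decisions can absorb the bias in those decisions. The data, the name-derived proxy feature, and the “policy” are all invented for illustration; no real vendor's system is shown.

```python
# Hypothetical sketch: a model "trained" on biased historical callbacks
# reproduces the bias. All data and feature names below are invented.

# Historical screening outcomes produced by biased human recruiters:
# identically qualified candidates, but callbacks skew by a name-derived proxy.
history = [
    # (years_experience, name_proxy, got_callback)
    (5, "white_sounding", 1),
    (5, "white_sounding", 1),
    (5, "white_sounding", 0),
    (5, "black_sounding", 1),
    (5, "black_sounding", 0),
    (5, "black_sounding", 0),
]

# "Training" here simply memorizes the historical callback rate per group,
# which is exactly what a richer model would also latch onto whenever the
# proxy feature appears predictive of past outcomes.
rates = {}
for _, proxy, callback in history:
    rates.setdefault(proxy, []).append(callback)
learned_policy = {proxy: sum(v) / len(v) for proxy, v in rates.items()}

print(learned_policy)
# Roughly {'white_sounding': 0.67, 'black_sounding': 0.33}: the learned policy
# now "prefers" one group despite identical qualifications on paper.
```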
The recruiting industry is replete with arbitrary measures of competence and qualifications, each of which can perpetuate bias in its own right. Enterprise, the car rental company, for example, uses tools provided by the software firm iCIMS to check whether candidates meet minimum requirements, including a bachelor’s degree and some form of leadership experience. When software enforces such bright-line conditions, it can shut out deserving people who do not satisfy them but would otherwise perform the work well, potentially perpetuating bias.
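A short sketch of what such a bright-line filter might look like in practice. The rule, field names, and candidates are hypothetical, assumed only for illustration; this is not iCIMS’s actual logic.

```python
# Hypothetical bright-line screening rule: degree plus leadership experience.
# Any candidate missing either condition never reaches a human reviewer.

def meets_minimum_requirements(candidate: dict) -> bool:
    """Hard filter with no weighing of compensating strengths."""
    return candidate["has_bachelors_degree"] and candidate["has_leadership_experience"]

candidates = [
    {"name": "A", "has_bachelors_degree": True, "has_leadership_experience": True,
     "years_relevant_work": 1},
    # Screened out despite a decade of directly relevant work, because the
    # rule never trades experience against the missing credential.
    {"name": "B", "has_bachelors_degree": False, "has_leadership_experience": True,
     "years_relevant_work": 10},
]

shortlist = [c["name"] for c in candidates if meets_minimum_requirements(c)]
print(shortlist)  # ['A'] -- candidate B is dropped before anyone sees the resume
```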
HireVue, a company that develops recruiting tools used by such multinationals as Unilever, uses video interviewing to screen candidates for its clients, applying a combination of facial analysis and AI that measures behaviors such as word choice, gestures, and voice inflections. If the engineers developing these tools make errors or oversights in gathering input data or building the machine learning systems, unintentional bias can result directly. They could, for instance, introduce selection bias into the training data, or unnecessarily narrow the options presented to certain people, leaving them short of the economic opportunities afforded to others.
“There could be information that candidates do not want to share with prospective employers that is nevertheless collected and analyzed by commercial recruiting technology to enable sophisticated candidate screening,” says Pulin Sanghvi, who until recently was the head of career services at Princeton University. “Candidates might feel disempowered, particularly if they find themselves continually being screened out, despite their qualifications.”
Algorithmic bias could also show up in subtler ways. “Consider a history major interested in starting a career in marketing, who enters her qualifications into some matching technology,” says Jeanine Dames, head of the Yale University Office of Career Strategy and an associate dean in Yale College. “What should the student indicate are her preferences so she receives the kinds of recommendations she wants? What if the algorithm tells her she should pursue a doctorate or work at a museum, instead of directing her toward the marketing sector? Reciprocally, what if CS majors prefer a non-technical role as their first job, but the algorithm is trained to match them with opportunities that proactively use their software development skills?” Dames goes on to note the great harms that could come from this. “What happens if these students—who often are at a formative stage of their career—start thinking the algorithms know what they should do better than the student herself, or her mentors who personally know her and have provided years of valuable advice and guidance?”
It might be the case that corporate algorithms are indeed free of any sort of bias. The likelier scenario, though, is more complicated: it is remarkably hard for researchers to detect bias in commercial algorithms, because the private sector has little incentive to disclose how they are developed. There are many good reasons for companies to keep their algorithms private, but the result is that researchers often lack the investigatory tools to analyze whether those algorithms are fair.
During my time in the Obama White House, we partnered with leading civil rights groups and advocacy organizations to take several important steps toward addressing the latent harms of algorithmic discrimination. Among them were a seminal report on the impact of big data on consumer privacy, and a follow-up that looked more closely at algorithms’ capacity to discriminate. But government actions can only take society so far, particularly at a time when the development of AI and its implementation in recruiting is so nascent.
Encouraging developments have emerged, particularly from the technical community. Earlier this year, some of the world’s leading researchers of algorithmic bias convened in Halifax to discuss approaches such as algorithmic transparency and algorithmic accountability that can counter the possibility of discrimination. Government, industry, and the broader community must do more to encourage this kind of work, whether in exploratory stages or advanced implementations.
When AI and recruiting come together thoughtfully and ethically, they can encourage better candidate fits, promote fairer interview screening, and increase overall efficiency.
But we must also be mindful of the specter of harms like algorithmic discrimination and implicit bias in AI-enabled recruiting, and do our best to counter them. Nothing less is fair to the people whose livelihoods are at stake.
Dipayan Ghosh is a fellow at New America and research affiliate at Harvard University.