Humans are consistently and maddeningly given to bias. Decades of earnest work to increase diversity at every level of employment have resulted in only moderate improvements. With the best of intentions, many companies have turned to technologies such as artificial intelligence (AI) to increase both efficiency and fairness in important decisions like who makes parole, who gets a mortgage, and who is interviewed for jobs.

For recruiters, AI and machine learning have promised to increase efficiency, to erase the mistakes humans make, and to remove bias from hiring decisions. Theoretically, machines should do a better job. Theoretically, they are not prejudiced. Theoretically, computer programs should choose candidates with the most merit. But it turns out that this isn’t how machine learning works.

On October 10, 2018, Business Insider reported that Amazon’s internal AI recruiting tool systematically discriminated against women. Although the article notes that CEO Jeff Bezos’s company “values diversity,” its internal hiring tool quietly favored men. Investigation revealed that the tool failed because Amazon had trained it on resumes that skewed heavily toward men; the algorithm learned that pattern and ranked women’s resumes lower in candidate searches. Although Amazon patched the obvious problems, it ultimately lost faith in the tool and abandoned this AI strategy.

So, the recruiting sphere is abuzz with worry… again. In an article for HR Tech, long-time industry professional Sarah Brennan lists the many recruiting technologies that were met with caution and slow adoption: job boards, video interviews, and the use of social-site data. At its core, the recruiting industry moves slowly and cautiously. But should recruiters throw the AI baby out with the biased bathwater? And is that even possible? Machine-learning and deep-learning algorithms already underpin a great many business tools.

Black-Box Recruiting

For recruiters, machine-learning-based sourcing tools like Google Cloud Talent Solution function as what engineers call “black boxes”: proprietary systems that can be viewed in terms of their inputs and outputs, but without any knowledge of their internal workings. All of the Internet giants (Google, Amazon, Facebook) make black-box choices for us: our search results, what we buy, and who we see in our feeds. But they don’t explain what priorities they’ve chosen for us. “Trust us,” they seem to say.
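In code terms, the contract looks something like the sketch below (the function name and fields are hypothetical, not any vendor’s real API): the caller controls the inputs and sees the outputs, and nothing else.

```python
# A sketch of the black-box contract a recruiter works with: inputs and
# outputs are visible, but the ranking logic in between is not.
# The function name and fields here are hypothetical, not a real vendor API.
def black_box_search(query: str, filters: dict) -> list[dict]:
    """Return a ranked list of candidate profiles.

    Proprietary: callers never see how results are chosen or ordered.
    """
    raise NotImplementedError("ranking internals are hidden from users")
```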

With Google, a recruiter searches for “project manager” and millions of results come up. Recruiters who are savvy with Boolean search terms and symbols can narrow the search further, specifying qualifications such as industry, location, level of education, and years in each position. These refinements produce more specific results (a sketch of such a query follows the list below), but if 100,000 results still come back, a few questions should emerge for the recruiter:

  • Why are these ten, ranked resumes at the top of my search?
  • Is there bias in these results?
  • I will never get through all of them. What am I missing?
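For illustration, the narrowing step might use a hypothetical Boolean string like the one below; the operators (AND, OR, quotes, minus) are standard, while the exact field syntax varies by search tool.

```python
# A hypothetical Boolean search string of the kind a savvy recruiter builds;
# the operators are standard, but exact syntax varies by search tool.
boolean_query = (
    '("project manager" OR "program manager") '   # title variants
    'AND ("PMP" OR "PRINCE2") '                   # certifications
    'AND (fintech OR banking) '                   # industry
    'AND Chicago '                                # location
    '-intern -junior'                             # screen out junior roles
)
print(boolean_query)
```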

With more than two billion users each, Facebook and Google clearly do a decent job; our affinity for these impressive systems says as much. Their black-box results are plentiful and satisfying. But curious and cautious recruiters should still have nagging questions about how searches arrive at their results.

Risks of Black-Box Recruiting

Sadly, Amazon’s mistake isn’t the first time a black-box tool failed the public. In 2016, ProPublica reported that COMPAS, a computer program widely used in US courts to assess the risk of reoffending in parole and sentencing decisions, was biased against African-American defendants. Once again, the data it relied upon (arrest records, postcodes, social affiliations, income) already contained human prejudice.

It’s becoming common knowledge that although AI can speed things up and deal with huge quantities of data quickly, it can also concentrate prejudice. We shouldn’t be surprised. Type “AI bias” into Google and you’ll find 143,000 results! Amazon realized what AI programmers have been shouting for a while: data about people is not clean. Every word carries cultural significance and associations, and when that language is fed into machine-learning algorithms, the bias comes along with it. Train a system on the current paradigm and you encode the current paradigm’s prejudice. It should go without saying that while these publicized examples get outsized attention for their potential damage, there must be many more that go unseen.
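A toy sketch makes the mechanism concrete (the features are invented, and nothing here resembles Amazon’s actual system): train a simple model on historical decisions that penalized a gender proxy, and the model learns the same penalty.

```python
# Toy illustration: skewed historical decisions teach a model the same skew.
# Features are invented; this resembles no real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)          # genuinely job-relevant signal
proxy = rng.integers(0, 2, size=n)  # e.g. resume mentions "women's chess club"
# Past recruiters weighed skill but also penalized the proxy feature:
hired = (skill - 0.8 * proxy + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(model.coef_)  # large negative weight on the proxy: the prejudice
                    # in the training data is now baked into the model
```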

While AI-induced bias is certainly problematic in itself, it also creates very real legal risks for employers. The US Equal Employment Opportunity Commission (EEOC) mandates that candidates be treated equally regardless of protected characteristics, including race, age, gender, sexual orientation, and family status. In addition, all federal contractors must adhere to the regulations of the Office of Federal Contract Compliance Programs (OFCCP), which impose even more stringent requirements for documenting recruiting and hiring decisions. With black-box AI systems, it may not even be possible to explain recruiting search results to users; believe it or not, even the AI programmers may not be able to fully explain the program’s decisions. When the software’s priorities are not clear, employers cannot be sure that all regulations are being met.

Transparency Is Key

The alternative to black-box AI is “human-interpretable machine learning” (HIML). For recruiters, HIML is the best current antidote to the biases found in machine-learning candidate searches. The solution lies in building models that humans can look inside, so they can understand the reasoning behind candidate selection and ranking. Better still are systems that let users adjust the model’s priorities and supply their own. A well-defined partnership lets AI do what it does best, culling through large amounts of data, while recruiters make the important decisions about the search and can defend why their list of candidates gets consideration and interviews.
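As a concrete illustration, here is a minimal sketch of interpretable scoring (the feature names and weights are hypothetical, not any vendor’s model): the recruiter sets the weights, and every score decomposes into per-feature contributions that can be read, adjusted, and defended.

```python
# Minimal human-interpretable scoring: recruiter-set weights, and every score
# breaks down into per-feature contributions. Fields and weights are invented.
WEIGHTS = {"years_experience": 2.0, "has_pmp_cert": 5.0, "industry_match": 3.0}

def score(candidate: dict) -> tuple[float, dict]:
    """Return (total score, per-feature contributions) for one candidate."""
    contributions = {f: w * candidate.get(f, 0) for f, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

total, why = score({"years_experience": 4, "has_pmp_cert": 1, "industry_match": 1})
print(total, why)
# 16.0 {'years_experience': 8.0, 'has_pmp_cert': 5.0, 'industry_match': 3.0}
```

Because the weights are explicit, a recruiter can tune them per job and answer the compliance question “why was this candidate ranked here?” feature by feature.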

Uncommon is one example of an AI system in the recruiting industry that has chosen to embrace HIML, for many of the reasons mentioned above. Its program doesn’t use machine learning to select or rank candidates. Instead, it uses machine learning to analyze candidates’ qualifications from their resumes, much the way people do. For example, Uncommon’s machine-learning models can look at a resume and judge whether the candidate’s degree would be considered an “engineering degree” by a hiring manager. It sounds easy, but real degree titles vary widely. In this case, “Computer Science” and “Information Technology” probably would qualify, while “Information Science” and “Music Technology” probably would not. Making sense of these terms across thousands of jobs and millions of candidates is taxing, and nearly impossible to accomplish with simpler tools like keyword search, so AI is essential. But with Uncommon, it’s the recruiters who decide which qualifications matter for each job, and ultimately which candidates to select.
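To make the degree example concrete, here is a toy sketch of title classification. It illustrates the general technique only, not Uncommon’s actual model, and a real system would need far more training data than these eight labeled titles.

```python
# Toy degree-title classifier: an illustration of the technique, not
# Uncommon's model. Eight labeled titles stand in for real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = ["Computer Science", "Information Technology", "Software Engineering",
          "Electrical Engineering", "Information Science", "Music Technology",
          "Art History", "Philosophy"]
is_engineering = [1, 1, 1, 1, 0, 0, 0, 0]  # per a hypothetical hiring manager

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(titles, is_engineering)
print(clf.predict(["Computer Engineering"]))  # likely [1]
print(clf.predict(["Music History"]))         # likely [0]
```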

Each black box’s special sauce is rightly proprietary, but with concerns about bias growing, it is becoming essential for HR professionals to know what priorities their software is setting. There is a balance to strike between elegant simplicity that offers little control and cumbersome Boolean searches where recruiters set every priority. That balance is Uncommon’s sweet spot.

“We shall meet in the place where there is no darkness.”
― George Orwell, 1984

About the Author

Heather Hughes is an educator, researcher, and writer who closely follows the recruiting industry.
