As employers seek to make increasingly efficient and “better” hiring decisions, avoid biases, and increase workforce diversity, they are turning to, or considering, a growing range of technological tools. Essentially, these tools help employers efficiently identify qualified candidates, narrow the pool of job seekers, and predict who may be the “best” hire. As is often the case, however, technological advances may outpace our ability to keep up with them, and to predict and understand their legal implications. In particular, such tools may be facially neutral but have an adverse impact on particular groups, in violation of Title VII of the Civil Rights Act, Executive Order 11246, and other federal and state anti-discrimination laws.

In a recent article, Upturn – a non-profit organization that “promotes equity and justice in the design, governance, and use of digital technology” – evaluates the full range of “predictive hiring tools” with a view towards exploring “how predictive tools affect equity throughout the entire hiring process.”[1] The Upturn article ominously concludes that, “without active measures to mitigate them, bias will arise in predictive hiring tools by default.”

As with any type of “test” or selection tool, we agree employers should proactively understand how such tests and tools work, and what unintended consequences they may produce.

Upturn points out that vendors have developed predictive tools for every stage of the hiring process: from sourcing to screening to interviewing to selection, and beyond hiring to performance evaluations. To appreciate the associated risks, however, employers should first understand how biases might be inherent in these tools – even those that vendors tout as removing bias from the hiring process.

When we think of bias in the employment context, we naturally think of “interpersonal bias” – an explicit or implicit (cognitive) bias an individual may hold for or against characteristics of other individuals, including race, gender, color, and other legally protected, immutable characteristics. Importantly, however, even where predictive tools take steps to eliminate interpersonal bias, they may unknowingly create, perpetuate, or exacerbate existing institutional and/or systemic biases. As the Upturn authors suggest,

Without active measures to mitigate them, biases will arise in predictive hiring tools by default. But predictive tools could also be turned in the other direction, offering employers the opportunity to look inward and adjust their own past behavior and assumptions. This insight could also help inform data and design choices for digital hiring tools that ensure they promote diversity and equity goals, rather than detract from them. Armed with a deeper understanding of the forces that may have shaped prior hiring decisions, new technologies, coupled with affirmative techniques to break entrenched patterns, could make employers more effective allies in promoting equity at scale.

While the technology may be new, efforts to address adverse impact in employment selection decisions are not. Title VII deems any employment selection procedure that has an adverse impact on the hiring of any race, sex, or ethnic group to be illegally discriminatory, unless the procedure has been “validated” in accordance with the Uniform Guidelines on Employee Selection Procedures (UGESP), and no less-impactful alternative procedure exists. In general, a selection procedure or test is valid if it tests for job seeker attributes that are highly correlated with success in a particular job. The term “selection procedure” includes any procedure used to narrow the pool of job seekers for hire – from eligibility questions, to resume screens, to interviews. Thus, most predictive hiring tools are subject to the UGESP.
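For a concrete sense of how adverse impact is commonly screened for in practice, the UGESP describe a “four-fifths” (80%) rule of thumb: a selection rate for any group that is less than four-fifths of the rate for the group with the highest rate will generally be regarded as evidence of adverse impact. The sketch below illustrates that arithmetic in Python; the group names and applicant counts are purely hypothetical, and this is a first-pass screen, not a substitute for a full validation or statistical analysis.

```python
# Illustrative sketch of the UGESP "four-fifths" (80%) rule, a common
# first-pass screen for adverse impact in a selection procedure.
# All group names and counts below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is generally regarded
    as evidence of adverse impact for that group.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical applicant-flow data for a single selection step
rates = {
    "group_a": selection_rate(48, 80),  # 48 of 80 selected -> 0.60
    "group_b": selection_rate(12, 40),  # 12 of 40 selected -> 0.30
}

for group, ratio in impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```

Run on the hypothetical data above, group_b’s selection rate (0.30) is half of group_a’s (0.60), well below the 0.8 threshold, so this step would warrant closer scrutiny.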

However, the Upturn article points out that validation of predictive hiring tools may not be enough to identify tools that in fact fall prey to institutional and systemic biases. Employers and vendors may also need a more in-depth understanding and analysis of a tool, and how it works for any given employer.

While employees, plaintiff attorneys, and the EEOC may not yet be fully active in this arena, they are aware of the issues. The EEOC began publicly addressing the implications of this issue in 2016. Moreover, as discussed in an earlier blog post, the EEOC recently launched an Office of Enterprise Data and Analytics with the capability of providing analytical support for systemic discrimination investigations.

Of course, regardless of the legal implications, a seemingly effective hiring tool that is, without an employer’s knowledge, creating or perpetuating unintended biases in the hiring process may prove counterproductive.

What steps should employers take now to protect themselves from legal risk? The Upturn article provides a set of specific, technical “Guiding Questions” the authors found themselves “needing to answer…before we could even begin to think about the equity implications of a given tool.”

Of course, the first question to ask is whether your company is using or considering the use of any predictive tools. If so, our Data Analytics group may be able to provide assistance. The Data Analytics Group is a multi-disciplinary team of lawyers, statisticians, data scientists, and analysts with decades of experience managing the interplay of data analytics and the law. For more information, please contact your Jackson Lewis attorney or Eric Felsberg, the National Director of the JL Data Analytics Group.

[1] Miranda Bogen and Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias (Upturn, December 2018).