Beyond E-Discovery: The Ethical & Legal Use Of Machine Learning Technologies

In contemporary litigation, “machine learning” and “predictive analytics” are phrases that are typically used in the context of e-discovery. However, as these technologies grow and evolve, so too will their application and utility in employment decisions and legal proceedings. At Jackson Lewis, we are committed to remaining at the forefront of these technologies and their potential uses both inside and outside of employment litigation.

While the majority of law firms and lawyers may use predictive coding or machine learning features in some e-discovery platforms, our attorneys and skilled team of data scientists and coders are consistently expanding our already robust data and predictive analytics resources. We routinely use these technologies to help clients analyze potential damages, value potential liability or jury verdicts by analyzing similar claims, develop litigation and negotiation strategies, and evaluate probabilities of success throughout all stages of litigation, arbitration, and other administrative investigations and proceedings. We also use these and other technologies, such as artificial intelligence, to develop proprietary applications and algorithms.

Predictive analytics, machine learning, and AI raise a number of privacy and ethics concerns in society, but when utilized properly, they can prove to be an invaluable asset to our clients both inside and outside the context of litigation. Indeed, employers are rapidly deploying these technologies across the employment spectrum, from identifying potential job candidates and conducting initial applicant screenings to tracking working time and attendance, identifying potential promotion candidates, and restructuring workforces.

Employers should embrace, not fear, these technologies, especially given their trajectory toward becoming essential to business in the modern era. However, as with any tool or asset, employers should be cognizant of how AI, machine learning, and predictive analytics are designed, deployed, and monitored to avoid unintended biases and ensure compliance with applicable law. Taking a proactive and preventive approach to machine learning and predictive analytics is particularly important for developing defensible positions should the use of these technologies be subjected to scrutiny in litigation or otherwise.

Our attorneys and data scientists are experienced in helping clients navigate the legal and ethical waters of these technologies, regularly aiding in identifying and remedying potential compliance issues and unintended biases and, when necessary, defending clients' use of these technologies. We also utilize predictive analytics, in an ethical and legally compliant manner, in defending against a variety of employment claims.

Can Employers Use Artificial Intelligence And Data Analytics To Track Remote Workers?

Subject to federal, state, and local privacy and cybersecurity laws that may apply, employers generally may use artificial intelligence, data analytics, and other software and technologies to track remote workers.

The COVID-19 pandemic has led, if not forced, the vast majority of businesses to adopt remote work and virtual workplaces out of operational necessity and to promote the health and safety of their employees and clientele. With these shifts, employers have expeditiously looked to new and alternative means of tracking employee performance, monitoring productivity, and ensuring the security of their networks and computer systems. Artificial intelligence, machine learning, and data analytics are useful tools that employers can, and should, utilize to accomplish these and other employment-related objectives.

To illustrate, a standard timekeeping application can serve as a virtual substitute for a physical timeclock. However, other methods for tracking employee performance, activity, and working time can and should be considered, if for no other reason than to verify a remote employee’s time report. Employers that use a virtual private network (“VPN”) or remote/cloud computing can cross-reference a remote worker’s network connection and activity data to confirm the start and stop times reported in a separate timeclock application. Artificial intelligence and machine learning can be deployed to passively acquire login and other activity data that can help employers track employee work activity and identify patterns, abnormalities, and the like on an enterprise-wide basis.
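
As a simplified illustration, the sketch below cross-references hypothetical VPN session logs against a timeclock export and flags days where connected time and reported time diverge. The file names, column names, and one-hour threshold are assumptions for illustration, not references to any particular platform.

```python
# Cross-reference VPN session logs against timeclock entries to flag
# discrepancies. File names, column names, and the one-hour threshold
# are hypothetical.
import pandas as pd

vpn = pd.read_csv("vpn_sessions.csv", parse_dates=["login", "logout"])
clock = pd.read_csv("timeclock.csv", parse_dates=["clock_in", "clock_out"])

# Compute connected hours and reported hours per employee per day
vpn["date"] = vpn["login"].dt.date
vpn["vpn_hours"] = (vpn["logout"] - vpn["login"]).dt.total_seconds() / 3600
clock["date"] = clock["clock_in"].dt.date
clock["reported_hours"] = (
    clock["clock_out"] - clock["clock_in"]
).dt.total_seconds() / 3600

daily = (
    vpn.groupby(["employee_id", "date"])["vpn_hours"].sum().to_frame()
    .join(clock.groupby(["employee_id", "date"])["reported_hours"].sum(),
          how="outer")
    .fillna(0)
)

# Flag days where connected and reported time diverge by more than 1 hour
daily["flag"] = (daily["vpn_hours"] - daily["reported_hours"]).abs() > 1.0
print(daily[daily["flag"]])
```

Flagged days are a starting point for human review, not a conclusion; legitimate offline work or personal VPN use can explain a gap.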

Employers and their IT professionals should take a holistic approach when evaluating available technologies and data sources to identify the systems, data, and metrics that are most appropriate and effective for accomplishing their needs. By properly designing and monitoring these technologies, employers can minimize potential liability stemming from a myriad of issues associated with remote work and ensure compliance with company policies, including, for example, liability arising from irregular work schedules, overtime issues, and violations of work- and leave-related restrictions, such as prohibitions on non-exempt employees sending or reading work-related emails after clocking out or outside working hours.

Of course, businesses should also consult with legal counsel to ensure that their employee monitoring and tracking abilities are compliant with applicable law, and do not result in unintended biases that could potentially cause a disparate impact on protected classifications.

New Freedom of Information Act (FOIA) Software Launched By EEOC

On February 1, 2021, the Equal Employment Opportunity Commission (EEOC) launched a new software system to receive and process Freedom of Information Act (FOIA) requests and objections. FOIA gives individuals the right to request access to federal government records.

Individuals looking to make FOIA requests can now initiate them using the new software system. Individuals who submitted requests prior to February 1 will be able to check the status of those requests via the old Public Access Link until March 12, 2021, but will then need to access all requests using the new software system.

While the new software system will make it easier to submit FOIA requests and track the status of such requests, the EEOC will still accept FOIA requests and objections via mail, email, and fax.  Once requests are received via mail, email, or fax, the EEOC will enter the requests into the new software system for processing and monitoring.

The EEOC FOIA software system can be accessed here.

Biden Appoints Jenny Yang as Director of Office of Federal Contract Compliance Programs

On January 21, 2021, the Office of Federal Contract Compliance Programs (OFCCP) announced via its leadership team webpage that Jenny Yang, former EEOC Chair during the Obama administration, was selected as OFCCP Director.

During Jenny Yang’s time with the EEOC, she supported the Component 2 section of the EEO-1 report, which required contractors to submit pay and hours-worked data on their employees for 2017 and 2018. In addition to advocating for the government’s collection of pay data, Yang worked toward expanding protections under federal anti-discrimination law for the LGBTQ+ community and established a Select Task Force on the Study of Harassment in the Workplace.

Based on her work at the EEOC, we can expect Director Yang to prioritize pay equity and sexual orientation and gender identity rights. Yang takes over as OFCCP Director from Craig Leen, who implemented the four “pillars” of the OFCCP: Transparency, Certainty, Efficiency, and Recognition, allowing federal contractors to rely on them in interactions with the agency. It will be interesting to see whether Director Yang continues using the four “pillars” during her time at the OFCCP.

New Deputy Chief Data Officer at EEOC Office of Enterprise Data and Analytics

On January 19, 2021, the U.S. Equal Employment Opportunity Commission (EEOC) announced that Kimberly S.L. Essary has been appointed as Deputy Chief Data Officer within the agency’s Office of Enterprise Data and Analytics (OEDA).

Essary’s appointment expands OEDA’s existing executive leadership, further illustrating the EEOC and OEDA’s mission to build a 21st century data and analytics organization. In this senior executive role, Essary is expected to manage the full range of program activities within OEDA.

Essary first joined the EEOC as a career attorney-advisor in 2009, and her appointment is a testament to her years of experience serving in high-level roles at the EEOC and related government offices. She has served as OEDA’s Deputy Director and Senior Counsel since 2018 and played an integral part in the office’s creation and development, as well as the creation of the EEOC’s Data Governance Board. During her tenure, Essary also has served as a federal data fellow on the U.S. government’s Federal Data Strategy Team – a group of cross-disciplined data professionals tasked with developing the government-wide Federal Data Strategy.

Speaking on her appointment, Essary said, “I am honored to have the opportunity to serve in this new role. I look forward to building upon the agency’s work in recent years to modernize our data and analytics capabilities and to better serve the American people through data-driven decision-making.”

Agency Takes Step Toward Becoming ’21st Century Data and Analytics Organization’ with EEOC Explore

The launch of “EEOC Explore” is the U.S. Equal Employment Opportunity Commission’s (EEOC) latest step toward its vision to “build a 21st century data and analytics organization.”

In December 2018, the EEOC created the Office of Enterprise Data and Analytics (OEDA). Since its inception, OEDA has worked to modernize EEO data availability. At the December 2, 2020, launch of EEOC Explore, the agency demonstrated the features of its new tool, which furthers that vision.

A data query and mapping tool, EEOC Explore aggregates publicly available EEO-1 data (currently limited to EEO-1 data sets from 2017 and 2018) into a series of interactive dashboards. The OEDA is expected to further develop and expand the size, scope, and functionality of EEOC Explore as its work continues.

EEOC Explore visualizes employment characteristics and information without disclosing any confidential employer or employee information, protecting the EEO-1 data previously submitted by employers. The underlying EEO-1 data is presented to users through five primary dashboards:

(i) job categories by gender;

(ii) race/ethnicity by gender;

(iii) industry;

(iv) state trends by year; and

(v) comparison by state.

Once a user selects a dashboard, they can filter the applicable data by year, geography, sex, race/ethnicity, job category, NAICS code (2-digit), and NAICS code (3-digit) to view more granular data. Within each dashboard, users can view the data’s aggregated totals by category and use filters to break down the information by race, gender, and job category, or focus on one specific race, gender, and job category.

During the demonstration, the OEDA used the example of comparing employment in the finance industry between Silicon Valley and the Atlanta Metro area, starting with the Industry dashboard, then filtering by year (2018), the Silicon Valley and Atlanta Metro geographic areas, and the finance and insurance NAICS codes.
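
For readers who prefer working with underlying aggregates directly, a minimal sketch of the same comparison on a hypothetical, locally saved EEO-1 extract follows. EEOC Explore itself is an interactive dashboard; the file name and column names here are assumptions.

```python
# Filter a hypothetical EEO-1 aggregate extract the way the OEDA demo
# filtered EEOC Explore: year 2018, two metro areas, finance/insurance.
import pandas as pd

df = pd.read_csv("eeo1_aggregates.csv")  # hypothetical local extract

subset = df[
    (df["year"] == 2018)
    & (df["msa"].isin(["San Jose-Sunnyvale-Santa Clara, CA",
                       "Atlanta-Sandy Springs-Roswell, GA"]))
    & (df["naics2"] == 52)  # NAICS 52: Finance and Insurance
]

# Aggregate employment by metro area and EEO-1 job category
summary = (subset.groupby(["msa", "job_category"])["employee_count"]
                 .sum()
                 .unstack("msa"))
print(summary)
```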

EEOC Explore is a useful, ready-made tool, but employers understandably may require context and more in-depth analysis to meet the legal and operational needs of their organization.

The Jackson Lewis Data Analytics Group is available to answer any questions about OEDA and the use and application(s) of EEO-1 information, and help make sense of the data you already hold.

Don’t Forget to Evaluate Selections in Reductions in Force During the COVID-19 Pandemic

The COVID-19 pandemic has affected the workplace in ways we could not have imagined just a few months ago. Indeed, the economic impact of COVID-19-related shutdowns has prompted many employers to reevaluate how they conduct business and how many employees they need. Consequently, many employers have been compelled to consider reductions in force. The basic principles discussed below apply whether or not a pandemic is the impetus.

Commonly, employers approach the exercise with a plan in mind: Which business lines are affected? How much payroll do we need to shed? What are the criteria for selection? Which employees will be selected for reduction? Who will make the selections? And more …. In addition, employers should consider the impact of their selections on groups of individuals sharing a certain demographic characteristic.

Considering Title VII of the Civil Rights Act of 1964, for example, employers should ask: Is the rate at which females are selected for reduction greater than the rate at which males are selected? How are minority employees faring against non-minority employees? And, considering the Age Discrimination in Employment Act, how are older employees faring compared to younger employees? Additionally, are the differences in rates statistically significant? A disparate impact analysis can help answer these questions and possibly substantially mitigate risk for employers.
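
As a simplified illustration of the rate-comparison and significance questions above, the sketch below screens a hypothetical RIF pool. The headcounts are invented, a real analysis should be conducted under attorney-client privilege with actual workforce data, and practitioners differ on how rule-of-thumb ratios should be applied to adverse (rather than favorable) selections.

```python
# A minimal disparate impact screen for a hypothetical RIF pool.
from scipy.stats import fisher_exact

# Hypothetical headcounts: selected for reduction vs. retained, by group
groups = {
    "female": {"selected": 30, "retained": 170},  # 200 women in the pool
    "male":   {"selected": 25, "retained": 275},  # 300 men in the pool
}

# Selection rates by group
rates = {g: c["selected"] / (c["selected"] + c["retained"])
         for g, c in groups.items()}
print("Selection rates:", {g: f"{r:.1%}" for g, r in rates.items()})

# Ratio of the lower rate to the higher rate; ratios below 0.80 are often
# treated as a signal warranting further review (the "four-fifths" rule of
# thumb, applied here to selection rates)
impact_ratio = min(rates.values()) / max(rates.values())
print(f"Impact ratio: {impact_ratio:.2f}")

# Fisher's exact test: is the difference in rates statistically significant?
table = [[groups["female"]["selected"], groups["female"]["retained"]],
         [groups["male"]["selected"], groups["male"]["retained"]]]
_, p_value = fisher_exact(table)
print(f"Fisher's exact p-value: {p_value:.4f}")
```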

What if there is statistically significant disparate impact against a particular demographic? Confirm the criteria used to select the individuals for reduction are “consistent with business necessity” under Title VII. If a criterion does not meet the standard, revisit the selection(s).

Remember, intent need not come into the equation, as these analyses examine the impact facially neutral selection criteria may have on certain demographic groups. Of course, protected characteristics generally should not be included among the criteria used to select individuals for reduction. Be mindful of the importance and applicability of the attorney-client privilege when conducting these analyses and consider retaining a law firm to help.

FTC’s Tips on Using Artificial Intelligence and Algorithms

Artificial intelligence (AI) technology that uses algorithms to assist in decision-making offers tremendous opportunity to make predictions and evaluate “big data.” On April 8, 2020, the Federal Trade Commission (FTC) provided guidance on the responsible use of these tools in its Tips and Advice blog post, Using Artificial Intelligence and Algorithms.

This is not the first time the FTC has focused on data analytics. In 2016, it issued a “Big Data” Report. See here.

AI technology may appear objective and unbiased, but the FTC warns of the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities. For example, the FTC pointed out that a well-intentioned algorithm may be used for a positive decision, yet the outcome may unintentionally and disproportionately affect a particular minority group.

The FTC does not want consumers to be misled. It provided the following example: “If a company’s use of doppelgängers – whether a fake dating profile, phony follower, deepfakes, or an AI chatbot – misleads consumers, that company could face an FTC enforcement action.”

Businesses that obtain AI data from a third-party consumer reporting agency (CRA) and make decisions based on that data have particular obligations under the federal Fair Credit Reporting Act (FCRA) and analogous state laws. Under FCRA, a vendor that “assembles” consumer information to automate decision-making about eligibility for credit, employment, insurance, housing, or similar benefits and transactions may be a “consumer reporting agency.” An employer relying on automated decisions based on information from a third-party vendor is the user of that information. As the user, the business must provide consumers the “adverse action notice” required by FCRA if it takes an adverse action against the consumer. The content of the notice must be appropriate to the adverse action and may consist of a copy of the “consumer report” containing AI information, the federal summary of rights, and other information.

The vendor that is the CRA has an obligation to implement reasonable procedures to ensure the maximum possible accuracy of consumer reports and to provide consumers with access to their own information, along with the ability to correct any errors. The FTC also expects transparency, including the ability to explain AI-driven decision-making if the consumer asks.

Takeaways for Employers

  • Carefully review use of AI to ensure it does not result in discrimination. According to the FTC, for credit purposes, use of an input such as a zip code in an algorithm could result in a disparate impact on a particular protected group.
  • Accuracy and integrity of data is key.
  • Validation of AI models is important to minimizing risk. Post-validation monitoring and periodic re-validation are important as well.
  • Review whether federal and state FCRA laws apply.

Continue self-monitoring by asking:

  • How representative is your data set? (See the sketch following this list.)
  • Does your data model account for biases?
  • How accurate are your predictions based on big data?
  • Does your reliance on big data raise ethical or fairness concerns?
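
One minimal way to approach the first question is to compare a data set’s demographic composition against a benchmark population, as in the sketch below, which uses a chi-square goodness-of-fit test. All counts and benchmark shares are invented for illustration.

```python
# Compare a data set's group composition to a hypothetical benchmark
# (e.g., the relevant labor market) with a chi-square goodness-of-fit test.
from scipy.stats import chisquare

observed = [620, 230, 90, 60]                # records per demographic group
benchmark_shares = [0.55, 0.25, 0.12, 0.08]  # hypothetical benchmark shares

total = sum(observed)
expected = [share * total for share in benchmark_shares]

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
# A small p-value suggests the data set's composition differs meaningfully
# from the benchmark population, i.e., it may not be representative.
```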

The FTC’s message: use AI, but proceed with accountability and integrity. Jackson Lewis’ Data Analytics Group is available to discuss questions about validation and data integrity.

Analytics at Large: COVID-19 Data vs. HR Data — What’s the Difference?

By: Samantha Rhoads & Michael Cortes[1]

The COVID-19 pandemic has put a spotlight on statistical terms often unfamiliar to anyone other than professionals such as statisticians and data scientists. Terminology such as infection rates, “flattening the curve,” and related statistical information is now being used in slogans and hashtags. This post offers a brief explanation of data analytics and the related math for a better understanding of statistics and data.

Being inundated with information and government-provided mathematical models and projections makes it difficult for citizens to know which statistics to trust. Statistics helps make sense of data, but the accuracy of the insights is limited by the quality and accuracy of the data itself. From one data source, the calculated COVID-19 fatality rate may be 4.28%; from another, it may be 0.66%. The fundamental reasons for these different numbers are (see the sketch after the list for how such a gap can arise):

  1. The numbers are based on different datasets that were collected using different methods.
  2. The statistical models applied to the data are based on different assumptions.
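
As a simplified arithmetic illustration of the first reason, consider how the choice of denominator alone changes the result. All figures below are hypothetical and chosen only to mirror the rates mentioned above.

```python
# Two fatality rates from the same death count, differing only in the
# denominator. All figures are hypothetical.
deaths = 6600
confirmed_cases = 154_000         # infections detected through testing
estimated_infections = 1_000_000  # infections estimated by a model

# Case fatality rate: deaths among confirmed cases
cfr = deaths / confirmed_cases
# Infection fatality rate: deaths among estimated total infections
ifr = deaths / estimated_infections

print(f"Case fatality rate:      {cfr:.2%}")  # ~4.29%
print(f"Infection fatality rate: {ifr:.2%}")  # 0.66%
```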

How do you know what to rely upon? This is where the human element comes into the picture. When given data and statistics, be skeptical. Apply common sense, trust your own thinking, and seek out research from other sources to get a more complete picture. Understand the assumptions being made and how the data was collected.

People can process trends in the data to take precautions and understand how the world is being shaped. Looking directly at the numbers is vital to understanding COVID-19’s path, and applying good sense and judgment is critical.

While the data around COVID-19 is a critical issue now, the same data and statistical concepts apply to data sets in many other areas. For example, the same critical thinking should be applied to workplace trends and data as is being applied to COVID-19 trends. Work life is more uncertain at this time, which means defining patterns and understanding the nuances in the data is critical.

Companies’ employee data should be harnessed and explained. This means, for example, that if a company wants to understand the diversity of its workforce, it needs to be careful about how it computes those numbers. Pay special attention to the way the data is collected and what the variables mean; it is easy to be led astray by computing the wrong numbers or interpreting the results incorrectly, as the sketch below illustrates.
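
A minimal sketch, using an invented five-person roster, of how two reasonable definitions of a “diversity” figure produce different numbers depending on who is counted and how missing self-identifications are handled:

```python
# Two defensible "percent minority" calculations on the same roster.
# The roster is invented for illustration.
employees = [
    {"name": "A", "self_id": "minority",     "status": "full-time"},
    {"name": "B", "self_id": "non-minority", "status": "full-time"},
    {"name": "C", "self_id": None,           "status": "full-time"},  # declined to self-identify
    {"name": "D", "self_id": "minority",     "status": "contractor"},
    {"name": "E", "self_id": "non-minority", "status": "full-time"},
]

# Definition 1: minority share of all workers, with unknown self-IDs and
# contractors counted in the denominator
rate1 = (sum(e["self_id"] == "minority" for e in employees)
         / len(employees))

# Definition 2: minority share of full-time employees who self-identified
known_ft = [e for e in employees
            if e["status"] == "full-time" and e["self_id"] is not None]
rate2 = (sum(e["self_id"] == "minority" for e in known_ft)
         / len(known_ft))

print(f"Definition 1: {rate1:.0%}")  # 40%
print(f"Definition 2: {rate2:.0%}")  # 33%
```

Neither definition is wrong, but a company reporting one figure while its audience assumes the other will be led astray.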

While data can be used to make inferences, there are caveats. Numbers “don’t lie,” but they can be taken out of context, and the assumptions behind the calculations and the data usually are not fully understood. Keep the reliability of the data in mind and ensure the circumstances of a pattern are considered before presenting it as something meaningful.

 

[1] Samantha Rhoads and Michael Cortes are Data Scientists in the Firm’s Data Analytics Group.
