At the Intersection of Technology, Law, and Business
March 29, 2019 - Artificial Intelligence

Employment Practices and Future Technologies - Taking the Human Out of Human Resources

The machines are taking over.  It is not always apparent, but artificial intelligence (AI) and machine learning (ML) are finding footholds in numerous industries.  One area in which AI is rapidly on the rise is employment practices and employee management.  But before embracing the future, employers should be well informed about the risks inherent in ceding control to robots.

Black Box Hiring Practices

Companies are only as good as their employees.  With the current strong job market, employers of all sizes struggle to identify, recruit, and hire talent.  As companies grow, many have turned to novel technologies to assist, guide, and (in some instances) take over the talent recruitment and selection process.  According to a December 2016 Harvard Business Review article, in some cases AI screening software systems used by businesses eliminate more than 70 percent of job applicants without any human interaction.[1]

Companies tout the efficiencies offered by AI hiring algorithms.  They can save time and money and (in theory) help identify the candidates best suited for open positions.  But what happens when an apparently qualified job seeker is summarily rejected because a computer algorithm decided she was not a good fit?  The candidate may ask for more details: what, specifically, in her resume eliminated her from consideration?  Was it the school she attended?  Maybe the computer rejected her because it saw an unacceptable gap between jobs, based on the summer she spent in the Middle East for what appeared (to the computer program) to be a religious pilgrimage.  Perhaps the candidate’s professed interest in baking set off silent alarms in the AI framework because the system has seen too many data sets confirming that bakers are almost always poor people managers.  Or maybe the candidate submitted her application very late at night, and the AI has determined that candidates who email after business hours are more likely to be procrastinators and underperformers.

Of course, some of these connections appear absurd.  But whether a company is reviewing 10 resumes for a position or 1,000, challenging decisions, not always cleanly tied to concrete data points, must be made when screening or eliminating potential job applicants.  The benefit of AI and ML is that they (for better or worse) make connections using data points humans would never consider.  That benefit can have an unsavory side.  While it may save the company time and money in the hunt for top talent, unrefined AI may harbor hidden biases that produce illegal hiring decisions without company personnel even realizing it.  Because “algorithms mimic human decision making,” they can “learn from past successes, which may embed existing bias.”[2]  Take the example of a company dominated by white male leaders that has run successfully for years, consistently posting profits with above-average employee satisfaction.  The AI is likely to be programmed with the goal of maintaining that level of success, and the company may inadvertently feed it data points that bias it toward candidates whose profiles resemble those of white men.  Even if the company intends to eliminate bias from AI selections, the AI making the initial screening decisions is unlikely to worry about discriminatory outcomes.  Using AI and ML systems with such latent biases risks excluding applicants based on protected classes (like age, race, gender, or disability) and exposing the company to discrimination claims.

Although such concerns may encourage employers to limit the use of AI and ML, companies can still benefit from the technology if they implement appropriate data inputs, human oversight, and reasonable guardrails to minimize bias in these programs.  Companies should regularly review the data sets used by the AI system, with a particular focus on how the AI is incorporating, connecting, and using the data provided.  One example of an appropriate guardrail is a coded instruction limiting the weight the AI applies to factors likely to produce biased decisions (like gender, country of origin, and religious affiliation).[3]  Employers should also think carefully before simply blinding the AI to names or other data points that may reveal a candidate’s protected characteristics (e.g., ethnicity); withholding those factors does not necessarily make the system fairer.  Continual monitoring of AI and ML systems will also increase the likelihood that they remain free of unlawful biases.
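As one illustration of what such monitoring might look like in practice, the short Python sketch below compares selection rates across applicant groups and flags any group whose rate falls well below the highest group’s.  The record format, field names, and the 0.8 threshold (loosely borrowed from the EEOC’s informal “four-fifths” guideline) are assumptions for illustration, not features of any particular hiring system.

```python
from collections import defaultdict

def audit_selection_rates(applicants, group_field, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (a rough four-fifths-style check)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for record in applicants:
        group = record[group_field]  # e.g., "F" or "M"
        totals[group] += 1
        if record["selected"]:
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * top_rate}
    return rates, flagged

# Hypothetical output of an AI screening pass:
pool = [
    {"gender": "F", "selected": False},
    {"gender": "F", "selected": True},
    {"gender": "M", "selected": True},
    {"gender": "M", "selected": True},
]
rates, flagged = audit_selection_rates(pool, "gender")
print(rates)    # {'F': 0.5, 'M': 1.0}
print(flagged)  # {'F': 0.5} -- a signal worth human review
```

A flag from a check like this is not proof of discrimination; it is a prompt for company personnel to examine the data sets and connections the AI is relying on.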

Big Data

In a similar vein, most major companies are using large data sets, known as “Big Data,” to better manage and track inventory, automatically identify and respond to equipment failures and security breaches, create safer work environments, and improve employee efficiency.  But Big Data can also be replete with hidden liabilities for the unaware.  If a company is, for example, harvesting biometric information from employees, such as fingerprints, handprints, or optical scans, for security access to facilities or for time-tracking systems, it could face increased litigation risk.  Illinois state courts, for instance, have recently seen a significant increase in privacy class actions under the Illinois Biometric Information Privacy Act (BIPA), which requires employers to provide written notice and obtain consent from employees and customers before collecting or storing biometric data.  Although many states do not currently offer the same protections as Illinois, others, like Texas and Washington, have enacted their own laws governing the use of biometric information.

The data some employers capture for seemingly legitimate reasons could also run afoul of various privacy protections.  For example, a company may want to improve worker safety by embedding connected sensors in employee uniforms to detect toxins or chemicals in the workplace or to monitor employee vitals during physically demanding tasks.  If those same sensors inadvertently track employee movement patterns or reveal legally protected medical conditions, and the employer lacks a legitimate justification for collecting that data and has not given employees appropriate notice, the employer could face liability in some jurisdictions.  Likewise, data collected on employee movements in the workplace could inadvertently lead to unlawful employment decisions.  If an employee takes frequent breaks during the workday, for example, the company may discipline the employee, believing he is violating company policy or failing to meet performance standards, when in reality the employee is taking those breaks because of a protected medical condition that causes him to use the restroom frequently.
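One practical way to reduce that exposure is to strip sensor payloads down to the fields the employer has a documented reason to collect before anything is stored.  The sketch below is a minimal illustration of that idea; the payload shape and field names are hypothetical.

```python
# Fields with a documented business justification (here, workplace safety).
# All field names in this sketch are hypothetical.
APPROVED_FIELDS = {"toxin_ppm", "heart_rate", "timestamp"}

def minimize(payload: dict) -> dict:
    """Keep only approved fields, dropping data (like location traces)
    the employer has no documented reason to retain."""
    return {k: v for k, v in payload.items() if k in APPROVED_FIELDS}

raw = {
    "toxin_ppm": 3.2,
    "heart_rate": 96,
    "timestamp": "2019-03-29T14:05:00",
    "gps": (41.88, -87.63),  # movement data: no justification, so dropped
}
print(minimize(raw))  # only the approved safety fields survive
```

Discarding unjustified fields at the point of collection is easier to defend than deleting them later, because data that is never stored cannot later be exposed or misused.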

While using Big Data in the workplace can make good business sense, employers relying on it should keep the following considerations in mind.

  • Know what data is being collected and why.  Indiscriminately collecting any and all available data is unwise.  Companies should be deliberate about what types of data they collect and why they are collecting it.  When deciding what information to collect, document the business case for collecting it and explain how its impact on employee privacy interests will be limited; excessive data collection is harder to defend.  Working with experienced in-house or external professionals and attorneys to review and audit data collection can help identify potential liabilities and put appropriate safeguards in place to limit exposure.
  • Give proper notice to employees.  Even in jurisdictions where notice of data collection is not required, companies should be transparent with their employees regarding data collection that might infringe on employee privacy.  Such transparency can reduce the possibility of litigation and related bad press.
  • Be thoughtful about how long the data is stored.  Data storage keeps getting cheaper, but that does not give companies free rein to warehouse gargantuan amounts of information on employees.  The longer companies store data, the greater the potential legal risk.  If maintaining data for long periods serves a legitimate business reason, companies should consider aggregating the data in a manner that removes personalized employee information, as shown in the sketch following this list.
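As a simple illustration of that last point, the sketch below rolls individual sensor readings up into department-level averages so that nothing retained long-term ties a reading to a named employee.  The record shape and field names are, again, hypothetical.

```python
from collections import defaultdict
from statistics import mean

def aggregate_by_department(records):
    """Replace per-employee readings with department-level averages,
    dropping employee identifiers before long-term storage."""
    by_dept = defaultdict(list)
    for r in records:  # each record is a hypothetical sensor reading
        by_dept[r["department"]].append(r["toxin_ppm"])
    return {dept: round(mean(vals), 2) for dept, vals in by_dept.items()}

readings = [
    {"employee_id": "E100", "department": "plant-a", "toxin_ppm": 2.1},
    {"employee_id": "E101", "department": "plant-a", "toxin_ppm": 3.0},
    {"employee_id": "E200", "department": "plant-b", "toxin_ppm": 1.4},
]
print(aggregate_by_department(readings))
# {'plant-a': 2.55, 'plant-b': 1.4}
```

Note that aggregation alone is not a cure-all: very small groups can still identify individuals, so companies should also consider minimum group sizes before retaining or sharing aggregates.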

[1] G. Mann and C. O’Neil, “Hiring Algorithms Are Not Neutral,” Harvard Business Review, December 9, 2016, available at https://hbr.org/2016/12/hiring-algorithms-are-not-neutral.

[2] Id.

[3] Morrison & Foerster senior of counsel Lokke Moerel argues that bias arises not because “there is an error in the algorithm, but because the data used to train the algorithm are ‘biased.’”  She further argues that the practice of “blinding” algorithms to certain data points does not promote equality or fairness; rather, the solution is to give the algorithm access to those factors while actively training it against overreliance on them.  Lokke Moerel, “Algorithms Can Reduce Discrimination, but Only with Proper Data,” IAPP, 2018, available at https://iapp.org/news/a/algorithms-can-reduce-discrimination-but-only-with-proper-data/.