As the use of AI in hiring and employment practices grows, U.S. states are increasingly regulating such use.
Currently, Illinois, Maryland, and New York City require employers to inform applicants about, and obtain their consent for, the use of AI during the hiring process. New York City also specifically requires a bias audit for AI tools, the results of which must be published annually. Several other states—including California, Hawaii, Massachusetts, Oklahoma, Pennsylvania, New Jersey, New York, Vermont, Virginia, and Washington—as well as the District of Columbia, are considering similar legislation, signaling a trend toward greater regulation of AI in hiring and employment decisions.
Beyond laws specifically targeting AI in the employment context, state consumer privacy laws are also relevant to the use of AI in employment. For example, California’s CCPA applies to employee personal information, and nearly all state privacy laws require an opt-out for profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer, including the provision or denial of employment.
At the federal level, the Equal Employment Opportunity Commission issued guidance in May 2023 emphasizing the potential discrimination risks of using AI under the Americans with Disabilities Act and Title VII of the Civil Rights Act. Proposed bills such as the Algorithmic Accountability Act of 2023, though not enacted, also suggest a possible move toward national regulation.
Given this evolving landscape, state legislatures will likely continue to develop rules around the use of AI in employment.
Carson Martinez contributed to authoring this blog post.