Both domestically and internationally, we expect to see continued government scrutiny and regulation of AI tools and their use in 2023.
- AI Bill of Rights. In October 2022, the White House released the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the “Blueprint”). The Blueprint outlines five principles and related practices designed to “support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” The five principles are: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and fallback. The Blueprint followed the White House’s September 2022 release of six principles for enhancing competition and tech platform accountability.
- Additional Federal Guidance on AI Risk Management. In January 2023, the National Institute of Standards and Technology (NIST) is scheduled to release its AI Risk Management Framework (the “Framework”) and corresponding Playbook (the “Playbook”) (a draft of the Playbook is available; recordings from the October 2022 workshop where the Framework was discussed can be found here). The Framework provides a process for managing AI risks.
- Local Regulation in the United States. In addition to AI-specific laws in Illinois and Maryland governing the use of AI in employment decisions, in November 2021 the New York City Council passed the Automated Employment Decision Tool Law (AEDT). Although the AEDT was originally scheduled to take effect on January 1, 2023, on December 12, 2022 the New York City Department of Consumer and Worker Protection (“NY-DCWP”) announced that, in response to substantial public comment on the proposed implementing rules and the need for a second public hearing on them, it would push enforcement of the AEDT back to April 15, 2023. The AEDT prohibits an employer or employment agency from using an automated employment decision tool (“tool”) to screen a candidate or employee for an employment decision unless (1) the tool has been subject to a bias audit within the year prior to its use; (2) a summary of the bias audit results and the tool’s distribution date are published on the relevant employer’s or employment agency’s website before the tool is used, and notice that the tool is being used is provided at least ten days prior to its use; (3) candidates are allowed to request an alternative selection process or accommodation (although an employer or employment agency is not required to provide an alternative selection process); and (4) information about the type of data collected for the tool, the source of that data, and the relevant employer’s or employment agency’s retention policy is made available to candidates. Given ambiguities in the AEDT, the NY-DCWP published proposed rules regarding its implementation in October 2022. We may see the first litigation around implementation of the AEDT in 2023.
- EU AI Act. The EU is finalizing the AI Act, its first comprehensive regulation of AI systems. The proposed law regulates AI systems based on the classification of each system’s application into one of three risk categories. More detail on the AI Act and AI regulation in Europe can be found here.
This post is a part of a series on trends in the artificial intelligence space for 2023, authored by MoFo lawyers.