Generative artificial intelligence exploded into public consciousness in 2023.[1] With increased public awareness inevitably came increased regulatory scrutiny, both in the United States and abroad. While there is little doubt that government regulation of AI will increase in 2024, a different trend is also emerging – AI companies proactively embracing responsible AI principles through voluntary commitments.
Some of what we’ve seen so far:
- White House Secured Voluntary Commitments from Leading AI Companies. In July 2023, seven companies at the forefront of AI innovation (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) voluntarily committed to a series of principles put forward by the Biden administration to foster safety, security, and trust in their AI-based products.[2] Since then, eight additional companies have signed on: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI. The commitments include security testing, information sharing, investing in cybersecurity, facilitating third-party vulnerability reporting, content labeling, public reporting, researching societal risks, and using AI to address societal issues.[3]
- White House Secured Voluntary Commitments in the Health Sector. On December 14, 2023, the White House announced voluntary commitments from 28 healthcare provider and payer organizations regarding the use of AI in the health sector.[4] The commitments include ensuring that outcomes are aligned with the fair, appropriate, valid, effective, and safe (FAVES) AI principles, transparency in development and use, adherence to a risk management framework, and the responsible use of open source models.
- Content Provenance. It is sometimes virtually impossible to tell when works (like photos) have been altered. While this was true before generative AI (think photoshopping), companies in the generative AI space have signed on to projects like the Coalition for Content Provenance and Authenticity (C2PA) to develop open technical standards for tracing content provenance. In October 2023, Adobe, in collaboration with the C2PA, launched Content Credentials, which allow creators to attach tamper-evident metadata to their content, including whether any AI tools were used (a simplified sketch of how tamper-evident metadata works appears after this list).[5]
- Watermarking. Related to the above (and to the White House voluntary commitments), companies have also launched watermarking tools to label AI-generated content (a toy example of the underlying technique appears after this list).[6]
- Opt Out of Training. While maintaining that the use of third-party materials to train their models is a fair use, leading AI companies have nevertheless provided creators with ways to opt out of having their materials used as training data for forthcoming models, including via robots.txt directives (see the example after this list).[7]
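The C2PA specification itself is far more elaborate (signed manifests bound to the asset, certificate chains), but the core property of Content Credentials, namely that any edit to the content or to its attached metadata becomes detectable, can be illustrated with a minimal hash-and-sign sketch. Everything below (the key handling, the function names, the use of HMAC) is a hypothetical simplification for illustration, not the C2PA format or any vendor's SDK:

```python
# Illustrative only: a toy hash-and-sign scheme showing why provenance
# metadata is "tamper-evident." The real C2PA spec uses signed manifests
# and X.509 certificates; none of the names below come from the C2PA SDK.
import hashlib
import hmac
import json

SIGNING_KEY = b"creator-held-secret"  # stand-in for a real signing key

def attach_credentials(image_bytes: bytes, metadata: dict) -> dict:
    """Bind metadata (e.g., 'AI tools used') to the exact pixels via a hash."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_credentials(image_bytes: bytes, record: dict) -> bool:
    """Any edit to the pixels or to the metadata invalidates the record."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

img = b"\x89PNG...pixel data..."
cred = attach_credentials(img, {"ai_tool_used": "text-to-image model"})
assert verify_credentials(img, cred)              # untouched content verifies
assert not verify_credentials(img + b"!", cred)   # any edit fails verification
```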
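Production watermarking tools such as Google DeepMind's SynthID and Meta's Stable Signature (both cited in note 6) embed signals designed to survive cropping, compression, and re-encoding. As a toy illustration of the general idea, and nothing like those systems' actual methods, here is a least-significant-bit scheme that hides a short label in pixel values:

```python
# Toy least-significant-bit (LSB) watermark: hides a label in the lowest
# bit of each pixel value. Real tools (SynthID, Stable Signature) embed
# signals robust to editing and compression; this sketch is not robust.
def embed(pixels: list[int], label: str) -> list[int]:
    bits = [int(b) for ch in label.encode() for b in f"{ch:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for label")
    # Clear each pixel's lowest bit, then write one label bit into it.
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels: list[int], length: int) -> str:
    bits = [p & 1 for p in pixels[: length * 8]]
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

pixels = list(range(256))        # stand-in for grayscale pixel values
marked = embed(pixels, "AI")
print(extract(marked, 2))        # -> "AI"
```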
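As a concrete example of the opt-out mechanisms cited in note 7: OpenAI lets site owners block its GPTBot crawler via robots.txt, and Google's Google-Extended token similarly lets publishers opt out of having their content used to train Google's AI models. A site wishing to do both could serve a robots.txt along these lines:

```
# Block OpenAI's GPTBot crawler
User-agent: GPTBot
Disallow: /

# Opt out of Google's use of the site for AI model training
User-agent: Google-Extended
Disallow: /
```

Note that Google-Extended is a control token rather than a separate crawler: Googlebot still indexes the site for search, and the directive only withholds the content from model training.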
[1] It is incredible to think how recently these tools were released. ChatGPT, arguably the tool that put generative AI on the map, was only launched in November 2022 (does anyone remember the prompt asking ChatGPT to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR? That was tweeted on December 1, 2022, and The New York Times wrote about it on December 5, 2022). One of the first text-to-image models, DALL·E, was announced in January 2021; Stable Diffusion was released in August 2022, and Firefly in June 2023. The first publicly available text-to-video model, Runway Gen-2, was released in March 2023.
[2] The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” The White House Briefing Room (July 21, 2023).
[3] Brad Smith, “Our commitments to advance safe, secure, and trustworthy AI,” Microsoft (July 21, 2023); Amazon Staff, “Our commitment to the responsible use of AI,” Amazon Blog (July 21, 2023).
[4] U.S. Department of Health and Human Services, “FACT SHEET: Biden-Harris Administration Announces Voluntary Commitments from Leading Healthcare Companies to Harness the Potential and Manage the Risks Posed by AI,” U.S. Department of Health and Human Services News (Dec. 14, 2023).
[5] Adobe, “What are Content Credentials?” Adobe Blog (Oct. 10, 2023).
[6] See, e.g., Melissa Heikkilä, “Google DeepMind has launched a watermarking tool for AI-generated images,” MIT Technology Review (Aug. 29, 2023); Meta, “Stable Signature: A new method for watermarking images created by open source generative AI,” Meta Blog (Oct. 6, 2023); Emilia David, “Amazon joins AI image creation fray with new model,” The Verge (Nov. 29, 2023); Emilia David, “Microsoft offers politicians protection against deepfakes,” The Verge (Nov. 8, 2023).
[7] See, e.g., Emilia David, “Now you can block OpenAI’s web crawler,” The Verge (Aug. 7, 2023); Danielle Romain, “An update on web publisher controls,” Google Blog (Sept. 28, 2023); Sharon Goldman, “Artists can opt-out of next Stable Diffusion training,” VentureBeat (Dec. 15, 2022).