Vendor procurement practices will continue to evolve in 2024 to reflect corporate AI risk management and governance policies. While companies are beginning to recognize that vendor work product may be developed using AI tools unless their contracts specify otherwise, many have not fully appreciated what that means for their businesses.
If, for example, an organization is hiring a service provider to produce a work of authorship, and the organization values the ability to preclude others from copying, distributing, publicly performing, or creating new works based on the vendor’s work, then it may want to build clear guidelines into the service provider’s contract governing whether and how generative AI tools may be used. This is advisable because, at least for now in the United States, the Copyright Office has made clear that there are substantial hurdles to obtaining copyright protection for generative AI output. Software is a work of authorship under the Copyright Act, so this applies not only to the use of generative AI to create artwork or reports but also to its use for software development.
Beyond intellectual property issues, vendor use of generative AI can also present challenges related to the accuracy, reliability, and bias of its output, as well as risks to confidentiality, data privacy, security, and safety. Different use cases offer different benefits and raise different concerns (and in some cases, little to no concern at all). As enterprise use of AI grows, it is more important than ever to evaluate risks in light of the specific use case and to develop procurement practices and contract terms that map to those risks.