Responsible AI isn’t just for Big Tech. This coming year we can expect institutional, venture, and strategic investors to take an interest in whether, and how well, their funds and portfolio companies are identifying and managing the legal and ethical risks associated with their use of AI and, in particular, Generative AI. Company boards will take a similarly heightened interest in these issues. As a result, these investors and board members will be hungry for tools they can use to evaluate, and then drive improvement in, how their companies address these concerns.
AI raises many legal and ethical concerns, including accuracy, reliability, safety, privacy, security, fairness and bias, and sustainability. Responsible AI programs assess and mitigate these risks through solid governance practices and operational implementation.
This past year, a number of AI industry standards launched that set forth the principal tenets of Responsible AI programs. We can expect that in 2024 implementation of these tenets will proliferate across both the public and private sectors.