Technology is known for solving some of the world’s most complex problems, especially through automation and AI. However, when the right guardrails are not in place, ethical issues arise: tools that malfunction, data or algorithms that exclude certain groups of people, intrusive technologies that violate human rights, and technologies that compromise safety.
Ethical technology, which forms part of a company’s overall ESG strategy, requires companies and entities deploying technology to carefully consider and monitor it, scrutinizing for quality, equity, and safety at the creation, deployment, and post-deployment stages. This helps surface externalities from technology use and mitigate or remediate risk where necessary.
Stakeholders, such as regulators, are leading the charge with regulations and policies on specific issue areas, such as the Additional Federal Guidance on AI risk management, the EU AI Act, and U.S. state-specific regulations. To complement these efforts, we are supporting companies and developers in preemptively addressing ethical issues in technology deployment as part of their ESG framework. We expect to see more responsible governance and guardrails set by companies and developers to preempt, prevent, and mitigate technological risks from development through the post-deployment stage.
This post is part of a series on trends in the artificial intelligence space for 2023, authored by MoFo lawyers.