At the Intersection of Technology, Law, and Business
December 22, 2022 - Artificial Intelligence

AI Trends for 2023 - AI Diagnostic Tools and Cybersecurity Risk


Cybersecurity is a key risk factor for emerging applications of artificial intelligence to medical devices. Products that fail to address security risks may leave themselves vulnerable to cyberattacks. Bad actors target hospitals and healthcare systems in particular, through ransomware, credential harvesting, and device theft. According to an IBM report, the average cost of a healthcare data breach is now $10.1 million per incident, and the healthcare industry incurred the most expensive data breach costs of any industry examined in the report.

AI also opens the sector to new potential liability: researchers have shown that AI models can be tricked using falsified medical images. Industry participants should be aware that, while the FDA has generally treated the healthcare sector as a victim of cyberattacks, regulators may be adjusting their view. In 2022, the FDA released draft cybersecurity guidance stating that it would withhold premarket clearance or approval of a medical device if cybersecurity is not adequately addressed.

To mitigate these risks, companies may consider adversarial training for their AI models: pre-generating adversarial data and teaching the model that this data has been manipulated may make a device's diagnostic tools more robust against such attacks.
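For readers curious what adversarial training can look like in practice, below is a minimal, illustrative sketch using PyTorch and FGSM-style perturbations. The model, data, and hyperparameters are placeholders chosen for the example, not drawn from any particular device, vendor, or regulatory guidance.

```python
# Minimal sketch of adversarial training with FGSM-style perturbations.
# The model, data, and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Generate adversarially perturbed copies of a batch of images."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on a mix of clean and adversarial examples."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Train on both clean and perturbed inputs so the model learns to
    # classify manipulated images correctly rather than being fooled by them.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy classifier standing in for a diagnostic imaging model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(8, 1, 28, 28)        # synthetic image batch
    labels = torch.randint(0, 10, (8,))      # synthetic diagnostic labels
    print(adversarial_training_step(model, optimizer, images, labels))
```

In a real diagnostic pipeline, the perturbation budget, attack method, and evaluation protocol would need to be tuned to the imaging modality and documented as part of the device's cybersecurity risk management.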

This post is part of a series on trends in the artificial intelligence space for 2023, authored by MoFo lawyers.