Cyberthreat actors using ChatGPT exploit to attack health care, other industries

A ChatGPT vulnerability identified last year is being exploited by cyberthreat actors to attack artificial intelligence systems, according to a March 12 report by Veriti, a cybersecurity firm. The National Institute of Standards and Technology rates the vulnerability as medium severity, but Veriti said it has been used in more than 10,000 attack attempts worldwide. Financial institutions, health care and government organizations have been the top targets, the firm said. The attacks could lead to data breaches, unauthorized transactions, regulatory penalties and reputational damage.
 
“This could allow an attacker to steal sensitive data or impact the availability of the AI tool,” said Scott Gee, AHA deputy national advisor for cybersecurity and risk. “This highlights the importance of integrating patch management into a comprehensive governance plan for AI when it is implemented in a hospital environment. The fact that the vulnerability is a year old and a proof of concept for exploitation has been published for some time is also a good reminder of the importance of timely patching of software.” 
 
For more information on this or other cyber and risk issues, contact Gee at sgee@aha.org. For the latest cyber and risk resources and threat intelligence, visit aha.org/cybersecurity.