Doctors' growing AI deepfakes problem


May 6, 2026
AI is helping make doctors the unwitting stars of deepfake videos that hawk questionable products or spread misinformation, prompting calls from clinicians for more privacy and transparency laws.

Why it matters: The profusion of AI content on social media platforms could further erode public trust in the medical establishment. It could also be used to fuel insurance fraud, steal data and put patients at risk.

Driving the news: The American Medical Association called on federal and state lawmakers last week to close legal gaps and modernize identity protections to address what its CEO, John Whyte, called a public health and safety crisis.

The physicians group also wants a crackdown on deepfake creators and rules forcing tech platforms to remove impersonations more quickly.

California has already taken steps such as requiring disclosures on AI-generated ads and is debating a measure that would explicitly ban doctor deepfakes.

Pennsylvania's medical board addressed another form of AI impersonation on Tuesday, demanding that a tech company cease and desist after one of its chatbots posed as a doctor and claimed to hold a license to practice medicine in the state.

Physicians say they're increasingly discovering instances in which their identities are used to promote wellness and longevity supplements and unapproved medical devices.

"It's becoming more mainstream. Everyone knows someone who this has impacted," Whyte said. "It's probably occurring more than we hear because people are embarrassed by it."

Among the victims: CNN's Sanjay Gupta, who said fakes using his likeness to promote items like a breakthrough Alzheimer's cure have gotten so convincing they've deceived even some acquaintances.

"What was different this time around was just the quality of these ads," Gupta recently told CNN's Terms of Service. "This was really quite stunning."

Threat level: Doctors could be sued if patients are harmed by taking counterfeit products or following advice the real physician never actually gave, Whyte said.

The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance can help.

The deepfakes aren't limited to people. Health systems are uncovering faked diagnostic images and other clinical data that can wreak havoc internally.

A recent study in Radiology found most clinicians failed to spot deepfake X-rays. One-quarter missed the fakes even after being warned to look for telltale characteristics like unnatural soft tissue textures and overly smooth bone surfaces.

The fakes could be used to defraud insurers or stoke litigation, said lead author Mickael Tordjman of the Icahn School of Medicine at Mount Sinai.

"There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos," he said.

The bottom line: AI is undermining trust in a profession where credibility can be the difference between life and death.

"We shouldn't have to make the public detectives to determine whether something's not a deepfake," Whyte said.

Axios

