
Italy’s prime minister outsmarted AI abusers by posting a surprising image
May 6, 2026
Yesterday, Giorgia Meloni posted to X an AI-generated photo of herself wearing only lingerie. The Italian prime minister published the image to warn others about how easy it has become to create perfectly believable images and videos. Her warning: Never believe anything you see without thoroughly fact-checking it. After all, we are living through the end of reality. “Deepfakes are a dangerous tool, because they can deceive, manipulate, and hit anyone,” Meloni said on X.

“I can defend myself. Many others don’t.”

She is right, even though the image is not technically a deepfake. It is a fully AI-generated photo that features her face. Unlike early deepfakes, which simply swapped the face of one person onto a base source photo of another, generative AI can combine different components (real faces, bodies, places, voices, and sounds) to create 100% new synthetic media. That process makes its true nature virtually, if not completely, undetectable: because there is no base image to reverse-search and match to an original source on the web, the result is easy to accept as original, and as real.

[Screenshot: Twitter/X]

Meloni has already sued two men for creating a deepfake porn video of her in 2024. This time around, she joked that the fakes look “a lot” better than she does, and posted the image as a very 2026 PSA. “This is why a rule should always apply: Check before believing, and believe before sharing. Because today it happens to me; tomorrow it can happen to anyone,” she wrote.

Meloni showed courage by putting herself out there, but more must be done than doling out advice. We are way past the point of education. The world needs action. Generative AI poses an existential danger to humanity: it can weaponize our psychological biases, effectively destroying our shared sense of objective reality.

Just look at the last few months. There’s Jessica Foster, an AI-generated, pro-Trump military influencer who amassed a million followers in just three months to funnel men toward an adult fetish site (her account was later deleted from Instagram). Even though Foster’s digital persona, unlike Meloni’s image, was riddled with obvious rendering glitches and absurd scenarios, her followers willfully ignored them because the mirage perfectly satisfied their ideological fantasies. And when a legitimate video was released proving that Israeli Prime Minister Benjamin Netanyahu was alive following assassination rumors, the internet, aided by hallucinating AI chatbots, instantly and falsely dismissed the footage as a deepfake. Even after independent analysts and fact-checkers provided irrefutable proof that the video was authentic, the evidence failed to sway those who preferred their own conspiracy theories.

Every politician must act now

Trapped in this unreal dystopia, where the perimeter of objective truth has been completely vaporized by tech giants, society needs more than an X post. Public awareness and education campaigns are no longer sufficient to combat the enormous human and economic cost this is already causing. The only remaining exit strategy to save our shared reality is for governments around the world to aggressively intervene and force technology companies to adopt hardware and software that can authenticate real photos, videos, and audio beyond any shadow of a doubt.

In March, a team at ETH Zurich proposed the only solution that feels serious enough for the scale of the threat: sensors that cryptographically sign an image at the exact moment light and audio hit them. Unlike today’s systems, which stamp authenticity via the device’s main processor and are therefore vulnerable to interception and tampering, this design locks verification directly into the act of capture itself. In plain terms, it would make it vastly harder to pass off synthetic media as real, because the proof of authenticity would be born inside the hardware, not added afterward by software that can be spoofed.
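To make the idea concrete, here is a minimal Python sketch of capture-time signing in general, not the ETH Zurich design itself. It assumes a hypothetical per-device key that signs a hash of the frame the instant it is read out, so that anyone holding the manufacturer’s public key can later check that the pixels were never altered. The names sensor_private_key, sign_at_capture, and verify_stamp are illustrative only; a real system would keep the key in tamper-resistant silicon and distribute verification keys through the manufacturer. The example uses the third-party Python cryptography package.

```
# Sketch of capture-time signing (illustrative, not the ETH Zurich design).
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature
import hashlib

# Hypothetical: in real hardware this key would never leave the sensor die.
sensor_private_key = ed25519.Ed25519PrivateKey.generate()
maker_public_key = sensor_private_key.public_key()

def sign_at_capture(raw_pixels: bytes) -> bytes:
    """Simulates the sensor signing a hash of the frame as it is read out."""
    digest = hashlib.sha256(raw_pixels).digest()
    return sensor_private_key.sign(digest)

def verify_stamp(raw_pixels: bytes, signature: bytes) -> bool:
    """Anyone (a newsroom, a social network) checks the 'stamp of truth'."""
    digest = hashlib.sha256(raw_pixels).digest()
    try:
        maker_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

frame = b"raw sensor readout bytes"        # stand-in for a real captured frame
stamp = sign_at_capture(frame)
print(verify_stamp(frame, stamp))              # True: untouched capture
print(verify_stamp(frame + b"edited", stamp))  # False: pixels were altered
```

The point of anchoring the signature at the sensor is that any tampering downstream, in an editing app, a messaging service, or a platform’s pipeline, breaks the signature, so a missing or invalid stamp becomes an immediate warning sign.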
That way, people could look for the “stamp of truth” in any media published anywhere, from publications to social networks. And anything without that stamp, as Meloni says, should be automatically doubted and disregarded.

States must also act to give citizens tools to take down any image that uses their faces, through laws like the Digital Millennium Copyright Act in the U.S. But rather than forcing regular people to copyright themselves, they should be able to easily take down any unauthorized AI version of themselves on any public website or social network. Right now, only the Danish government has done this. In an effort to protect its citizens against AI cloning, last year the country rewrote its legal code to guarantee that residents strictly own the rights to their biological faces and natural speaking voices. Danish Culture Minister Jakob Engel-Schmidt summed it up perfectly back then: “Human beings can be run through the digital copy machine and be misused for all sorts of purposes, and I’m not willing to accept that.”

The Meloni case, one of millions, shows once again that the Danish culture minister was right when he stressed the urgency of the law his government passed. We need to stop this problem decisively with those tools and any others that lawmakers and engineers can come up with. Fake images, as long as they stay within existing legal limits, can coexist with reality just fine. But the tech giants profiting off this problem have to act now.
Fast Company