
AI labels were supposed to help users spot fakes. Here’s why they’re failing
May 6, 2026
Fake accounts have been around for as long as social media itself. So when it was recently revealed that a “hot girl” MAGA personality named Emily Hart was actually a 22-year-old male medical student in India, it might have seemed a little mundane. Just another catfisher, another sock puppet, another scammer: the internet is full of them. Except this one had photos.

And videos. And thousands of followers across multiple networks, with some posts getting millions of views. Emily Hart was a full-on influencer, not just some anonymous egg. The person who created Emily confessed to Wired that while the account was active, he was making thousands of dollars every month from posting softcore videos to an OnlyFans competitor and from merchandising. Emily’s creator is not a developer. He’s just a cash-strapped student with a good sense of American political culture and a Google Gemini account. But the curious case of Emily Hart has exposed how AI has made it incredibly easy for almost anyone to create convincing content and game social media’s engagement machinery. It also raises the questions: Is anyone out there looking out for us? How can you tell what’s real anymore? And who is responsible for alerting social media users that the images they’re looking at might have come from AI?

The fake influencer template

The story’s real significance isn’t a single AI influencer. It’s that this is the tip of the iceberg. AI has made creating online personas like Emily so easy that it has enabled deception at scale. The Wired story points to other pro-Trump fake influencers like Jessica Foster, but you don’t have to look very far on your Instagram Explore page before you spot something AI-generated, and it’s rarely disclosed. The Emily Hart case proves that the template is cheap, fast, lucrative, and easy to copy.

All the major social networks have policies governing AI content. While they vary in detail, the gist is generally the same: synthetic images must be disclosed, especially if they could be mistaken for real and they touch on sensitive subjects like politics, health, finance, and current news. An account that doesn’t identify AI content can be frozen, demonetized, or banned.

But those penalties exist almost entirely on paper. In practice, enforcement is hard, partly because detecting AI content is getting harder by the day. Most state-of-the-art image generators are light-years ahead of the models that created the first “Will Smith eating spaghetti” video, and telltale artifacts like extra fingers and disappearing background characters have largely become a thing of the past. Without watermarks, even automated systems struggle to distinguish AI images from real ones just by looking at them.

The ‘nutrition label’ that keeps getting lost

A new standard was supposed to fix this. Content Credentials are a way to track how an image was created and modified throughout its life cycle. That information can be preserved in the image’s metadata, so the site displaying it can more easily tell whether it’s AI-generated and pass a label or warning on to the user. The idea is that, as you scroll your social feed, any image would carry a tiny icon that reveals its history when clicked.

However, even though this technology has existed for years and ostensibly has the support of major tech companies such as Adobe, Google, and Nvidia, social platforms haven’t adopted it consistently. Seeing the label is rare, and a Washington Post report found that social networks often strip out the metadata that enables Content Credentials. This isn’t necessarily nefarious; it follows a best practice from the early days of the web, when every byte was precious. But the fact that it’s still happening shows there is little enthusiasm to make the system work.
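To make the metadata idea concrete, here is a rough, illustrative sketch in Python. It is not from the article and it is not a real Content Credentials validator: it simply walks a JPEG’s marker segments and looks for the APP11 payload where, as I understand the C2PA format, a JUMBF manifest store labeled “c2pa” is typically embedded. Actually trusting the provenance would require a proper C2PA library and verification of the manifest’s cryptographic signatures; treat the byte-level details here as simplifying assumptions.

```python
"""Crude heuristic check for embedded Content Credentials (C2PA) data in a JPEG.

NOT a validator: it only looks for the APP11 segment that typically carries
the JUMBF/C2PA manifest store. Signature verification is out of scope.
"""
import sys
from pathlib import Path


def jpeg_segments(data: bytes):
    """Yield (marker, payload) pairs for metadata segments before the scan data."""
    if not data.startswith(b"\xff\xd8"):
        raise ValueError("not a JPEG file")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                      # lost sync; good enough for a sketch
        marker = data[i + 1]
        if marker == 0xDA:             # start of scan: compressed image data begins
            break
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        if length < 2:
            break                      # malformed segment
        yield marker, data[i + 4 : i + 2 + length]
        i += 2 + length


def has_c2pa_manifest(path: Path) -> bool:
    """Heuristic: a JUMBF box ('jumb') labeled 'c2pa' inside an APP11 segment."""
    data = path.read_bytes()
    for marker, payload in jpeg_segments(data):
        # APP11 (0xEB) is where Content Credentials manifests are usually embedded.
        if marker == 0xEB and b"jumb" in payload and b"c2pa" in payload:
            return True
    return False


if __name__ == "__main__":
    for name in sys.argv[1:]:
        p = Path(name)
        status = "Content Credentials data found" if has_c2pa_manifest(p) else "no manifest"
        print(f"{p.name}: {status}")
```

Run something like this against an image exported with Content Credentials and then against the same image after a platform has re-encoded it, and the second check will usually come back empty: once the APP11 segment is dropped, the provenance signal is gone entirely, which is exactly the fragility the stripped-metadata reporting describes.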
Would a label make any difference?

Emily’s creator says he believes many of his followers didn’t care whether the images he was posting were AI or not. That may be true for some, but data suggest labels can alter people’s propensity to engage with AI content. A 2024 study found that labels on AI-manipulated media reduced belief in the claims being made. The study also found that wording matters: “manipulated” or “false” were more impactful than process-based labels alone. In other words, labels help, but weak labels help weakly. A buried “AI info” tag is not the same as a clear warning that an image might depict a person who does not exist.

Platforms like Facebook, Instagram, YouTube, and TikTok already process and modify content at scale. They have spent two decades refining systems that detect copyright violations, nudity, spam, and engagement signals. It is hard to believe they are incapable of building a clearer label for AI-generated people.

It’s the incentives, stupid

So why don’t they? The uncomfortable answer is that the incentives point the other way. While platforms want to keep bad content out, they are more motivated to keep people posting, scrolling, sharing, and buying. AI-generated material fits neatly into that machine because it is cheap to make, easy to personalize, and highly compatible with engagement-driven feeds. Mark Zuckerberg has been unusually direct about this, describing AI-generated material as “a whole new category of content” that he sees as important for Facebook, Instagram, and Threads. That doesn’t mean Meta or any other platform wants deception, which is, again, only a subcategory of AI content. But it does mean the companies have a business reason to welcome more synthetic content, and making the labels too strong or too visible could dampen the engagement they’re trying to encourage.

The calculus could change, though. Europe’s AI Act includes transparency obligations for deepfakes and certain AI-generated public-interest content, with related rules taking effect this year. Should platforms start to rack up major fines for poor labeling, things could change in a hurry. Advertiser pressure would help, too, since appearing next to deceptive content is bad for business. Finally, and crucially, there’s audience behavior: if users begin to feel they can’t trust what they’re seeing on a network, they might, over time, stop engaging with it.

The burden has shifted

Right now, the responsibility for detecting AI content falls largely on the user, with the social platforms not prioritizing the technical measures that might help and regulators only beginning to act. You might ask what the point is: many of Emily’s followers no doubt knew she was virtual but followed, engaged, and maybe even forked over some money anyway. But the choice of whether to engage with a virtual influencer is taken from you if you don’t know it’s virtual in the first place.

The technology industry has spent years presenting provenance as a central answer to synthetic media. Adobe, Microsoft, Meta, OpenAI, Google, and others have backed standards, joined coalitions, made public commitments, and embedded Content Credentials into their tools. Fine. Then show it to people. Make it visible before the share, before the follow, before the subscription, before the merchandise purchase. Because if the only way to learn that an influencer is fake is to wait for a magazine investigation, the disclosure system has already failed.
Fast Company