‘The Devil Wears Prada’ has an important lesson for AI skeptics 

April 20, 2026
About 20 minutes into The Devil Wears Prada, the 2006 David Frankel film that constitutes one of the most important and perfect films ever produced (please hold all dissent), Meryl Streep delivers a critical speech to Anne Hathaway that encompasses the plot’s primary tension. The moment, which may come up in the sequel (an Instagram post from a professional dyeing service in New York suggests this may be the case), comes as Streep’s Miranda, the frigid editor-in-chief of a top fashion magazine, is pondering items that might be featured in an upcoming issue, surrounded by her stressed-out underlings.

Also in the office is Andie (Hathaway), a comparatively disheveled new assistant who has somehow landed a coveted role at the esteemed Vogue stand-in, diligently taking notes on the run-through. When an underling presents two blue belts to Miranda for consideration and notes that it’s tough to choose between them, Andie snorts and says, “Both of those belts look exactly the same to me. I’m still learning about this stuff.” This, of course, is precisely the wrong thing to say. Up until this point in the movie, Andie has hidden, very poorly and under the guise of bubbly unfamiliarity, her belief that the fashion industry is vain and stupid. Miranda, intelligent and ever perceptive, has picked up on Andie’s covert derision, and uses the blue-belt faux pas to deliver a crisp and critical evisceration of Andie’s attitude.

You can watch the scene online, but Streep’s dismantling of her assistant goes like this: that “lumpy blue sweater,” she explains, is not just blue but cerulean, a color that traveled from designer collections, from Oscar de la Renta to Yves Saint Laurent, down through the market until it landed, inevitably, in Andie’s closet. What Andie shrugs off as “stuff” is a system she already participates in, albeit passively, and one that generates countless jobs and millions of dollars. “It’s sort of comical how you think that you’ve made a choice that exempts you from the fashion industry,” Miranda tells a now totally silent and humbled Andie, “when, in fact, you’re wearing a sweater that was selected for you by the people in this room from a pile of ‘stuff.’”

Andie’s implicit position, throughout the film, is that while she works at a fashion magazine, she finds the industry silly, even stupid; she’s a reluctant, though perky, observer, not a participant. Miranda, of course, thinks the fashion industry is all there is. But that’s not Miranda’s point. Her point is that we all wear clothes.
To think you’re somehow not participating in fashion, however saintly or nefarious your motives, is inane.

Here’s where AI comes in. Now, a cerulean belt is not a large language model, and Miranda is not Sam Altman, but the scene illustrates a reflex among some people to believe that they can simply excise the influence of a billion-dollar industry from their lives, and then feel morally superior for it. A small but loud community of AI skeptics is taking the position, like Andie, that AI is not something they’re participating in: that it is sort of stupid, even something to look down on. This community (which tends to thrive on Bluesky) seems to believe that AI is silly, uncool, stupid, and, most importantly, ignorable, something they are simply not using. AI will either be very good or very bad for humanity, but there’s probably no universe in which AI is just a silly and vapid nothingburger we can roll our eyes at and ridicule. This has even hardened into a sort of purity test, where it has become common (among some) to treat using AI as a personal flaw, nothing more than dimwit tomfoolery. Even in more thought-through circles, there’s a developing sense of emphatic moral outrage over the use of AI, paired with calls for technological renunciation instead.

One problem with this attitude is that it falls into the well-worn trap of an abstinence-only approach, asking people to restrain themselves from making a poor choice while a deluge of social pressures and personal desire shepherds them toward making that exact choice. We should not forget that most of us are workers, and many people will use AI because their bosses tell them to, not for fun. (Indeed, a Gallup poll recently found that half of all workers in the US now use AI.) This approach also suggests that a consumer choice is the solution to a systemic threat. Calls for ardent vegetarianism, and urging people to flick off the lights when leaving the house, did not solve climate change (which is ongoing).
The same is and will be true of AI and of the misguided social trend that hinges on shaming the people who use it. Miranda’s monologue, though, illustrates a second problem with this line of thinking. Yes, you can decide not to use ChatGPT, and perhaps this gives you a momentary feeling of organic cognition, free of AI’s influence. That might be worth it, alone, for preserving your ability to think clearly. But know that the internet is already polluted with the output of large language models, and that you are imbibing this output every day. It is true that you do not need to personally pay for a subscription to Claude, but the architecture of our digital systems means that large language models are already a rank-and-file feature of email software, customer service bots, media production, and so much more. ChatGPT and search engines will eventually converge into the same thing; indeed, they are racing to the finish line to do so. AI is reshaping our energy production systems and our politics. The question is not whether you’ll have the soup, but whether you realize that you, and the rest of us, are already swimming in it.

Considered another way, this approach is the equivalent of sticking your head in the sand when the very challenge you face is a sandstorm. Inconveniently, systemic threats require systemic solutions, not performative purity politics. If your objection to AI is that it’s corrupting our ability to think independently (which it definitely is), ridiculing those who use the consumer version of ChatGPT is a very small and, more importantly, ineffectual hill to die on.

Fast Company