
AI isn’t just reshaping productivity and threatening to kill jobs. It’s changing how we lead, communicate, and treat each other. It’s also creating a new gender gap
April 2, 2026
Fast Company
For nearly four years now, the conversation about generative AI has revolved almost exclusively around productivity, threatened jobs, automatable tasks, efficiency, and competitiveness. But there is a largely underestimated dimension to this revolution: its cultural effects. AI is not just transforming how we work; it is transforming how we are together, how we trust each other, how we communicate, and how we organize ourselves.

To measure this, it helps to borrow a framework from Erin Meyer, a professor at INSEAD whose book The Culture Map identifies eight dimensions along which the world's cultures differ. Applied to artificial intelligence, Meyer's eight dimensions reveal a series of cultural shifts more profound than we realize.

1. How We Communicate: AI Is Training Us to Say What We Mean

Generative AI demands clarity. An effective prompt is an explicit one; there is no room for body language. This constraint is gradually reshaping how we communicate with each other, too. Cultures that have traditionally relied on what is left unsaid—where reading between the lines or sensing the mood in the room is a valued skill—are being pushed toward greater explicitness. As AI mediates more exchanges, the richness of implicit communication erodes.

Then there is the curious rehabilitation of the typo. For decades, a spelling mistake in a professional message was a sign of carelessness, even disrespect. Not anymore. A typo is increasingly read as proof that you wrote the message yourself—that you took the time, that you cared enough to type it out without outsourcing the task. Imperfection has become a signal of authenticity.

2. How We Give Feedback: AI as a Cultural Mediator and Sugar-Coater

Large language models are not built to be brutal. They begin by finding something to praise, soften their critiques, and close on a constructive note. After thousands of interactions with tools that say "great question" before correcting your mistake, even cultures accustomed to blunt, direct feedback begin absorbing a more diplomatic register. But AI also has a more positive effect on collective evaluation: It excels at finding the common denominator. In a multicultural team where some members practice direct feedback and others avoid confrontation entirely, AI can serve as a neutral translator—reformulating, synthesizing, and smoothing out cultural friction.

3. How We Persuade: It's No Longer About the Argument. It's About the Person Making It

AI produces inductive responses: examples, bullet points, concrete cases. This results-first logic is gradually permeating cultures that traditionally valued deductive reasoning, such as France, where the art of the dissertation (thesis, antithesis, synthesis) was a deep cultural marker. Presentations are getting shorter and more pragmatic. But the real shift goes beyond a simple victory of American-style storytelling over European-style argumentation. What is actually happening is that human embodiment is becoming the primary source of persuasion. When anyone can produce a well-structured argument in 10 seconds, formal argumentative quality stops being differentiating. What convinces people is presence, authenticity, and the personal commitment of the person speaking.

4. How We Lead: From the Lone Expert to the Collective Orchestrator

The flattening of knowledge access generated by AI undermines leadership models built on the hoarding of expertise. The manager whose authority derived from mastery of a technical domain sees that competitive advantage eroding. And as AI becomes more pervasive, the very source of leadership becomes structurally more collective. AI models are trained on aggregated human work; they are, in a sense, the distillation of millions of anonymous contributions. To use AI is to mobilize a collective intelligence that no single person authored. This should dismantle the myth of the lone brilliant leader. The leadership of tomorrow, then, may be less about holding the answers and more about collective discernment: knowing what to do with the output.

5. How We Decide: When the Algorithm Recommends, Do We Still Really Choose?

AI compresses decision-making time. In seconds, it produces an analysis, a comparison, a recommendation. And increasingly, we rely on algorithmic recommendations: HR scoring systems, sales prioritization tools, project management assistants.
Many decisions are now made on our behalf, and often we endorse them without examining them. In cultures that value collective consensus-building before any decision is made, this delegation can feel like a welcome relief. In cultures where strong unilateral decision-making is a mark of leadership, it produces a strange dispossession: The decisive executive finds himself rubber-stamping a recommendation he did not construct. Are we actually still deciding?

6. How We Trust: When All Outputs Look the Same, Relationships Become Everything

Here is perhaps the most paradoxical reversal. One might have expected AI to strengthen trust based on the quality of work, since anyone can now produce polished, well-structured deliverables. Instead, the opposite is happening. When all outputs look alike, they lose their power to distinguish. Cognitive trust erodes precisely because AI has made competence commonplace. What becomes valuable is the affective: the personal relationship, the two-hour lunch, the intimate conversation. Receiving a proposal that was manifestly generated without human effort sends a signal: You were not worth my real attention. As AI takes over routine interactions, what remains—genuine attention, real presence—acquires extraordinary value. We all crave sincere human contact, and the affective dimension of trust is likely to become more precious than ever.

7. How We Disagree: The Risk of a World Where Everyone Agrees

AI models avoid confrontation by design. They don't flatly contradict; they "offer a complementary perspective" or "acknowledge the nuance." This algorithmically engineered softness, repeated at massive scale, may be reshaping the norms of disagreement. In cultures already inclined to avoid open conflict, AI reinforces the tendency to sidestep. In cultures where direct disagreement is seen as healthy and productive, AI introduces a veneer of diplomatic language that can mask real tensions.
The danger is organizations where everyone appears to agree—the humans out of politeness, the AIs by design—and where real problems never surface. A world of frictionless, AI-mediated communication risks doing away with the very friction that makes organizations resilient.

8. How We Relate to Time: When a Two-Hour Response Feels Slow

AI responds in seconds. That standard of immediacy, internalized across thousands of interactions, is reshaping our tolerance for human response times. A colleague who takes two hours to reply to an email now seems sluggish. A meeting that "takes time" to build consensus feels inefficient. AI's instantaneousness has become the invisible benchmark against which all human pace is judged. Cultures that already organize work sequentially and value strict scheduling are accelerating further. Cultures with a more fluid, relational relationship to time—where adaptability matters more than the clock—face growing pressure to conform to responsiveness standards that are foreign to them. AI has, in effect, exported one particular cultural relationship to time and made it feel universal.

The Overlooked Dimension: Gender and the Digital Matilda Effect

The numbers are striking. Women are 20% to 25% less likely than men to use generative AI tools, according to a Harvard Business School meta-analysis. Women hesitate because they are calculating the risk of being seen using AI. One study found that when engineers submitted identical AI-assisted code for review, women's competence ratings dropped by 13%, while men's dropped by only 6%. It is a sort of digital Matilda effect. The historical Matilda effect is the phenomenon by which women's intellectual contributions are attributed to their male colleagues. When a woman uses AI, observers tend to assume the tool did the thinking. When a man uses the same tool, he is credited with the strategic intelligence to deploy it well.
Women who have spent their careers navigating this double standard know how to read the room. In thinking with machines, we are changing our codes, our expectations, our relationships, and our hierarchies. Perhaps it is still too early to fully comprehend the cultural revolution induced by generative AI. But somewhere between the typos we now leave on purpose and the feedback we no longer dare to give, a deeper transformation is already underway—and we have barely begun to notice it.