A team of psychologists has published a formal warning that AI tools are becoming dangerously easy to use — and that the cognitive and social effort being removed may be exactly what makes people learn, grow, and find meaning.
The commentary, titled "Against Frictionless AI", appeared in Communications Psychology on 24 February and was authored by Emily Zohar, an experimental psychology Ph.D. student at the University of Toronto, alongside psychologists Paul Bloom and Michael Inzlicht. Their argument challenges a core assumption driving AI product design: that removing effort is always an improvement.
Why Struggle Is a Feature, Not a Bug
The researchers center their argument on a well-established psychological concept known as "desirable difficulties" — the idea that manageable struggle deepens understanding and strengthens memory. When AI systems jump from prompt to polished output, they bypass the intermediate steps that drive this kind of learning.
"It's really easy to go from ideation right to the end product," Zohar told IEEE Spectrum. "You ask AI to solve something with one prompt, and it completes the whole thing. This takes away the intermediate steps that really drive motivation and learning."
The distinction the authors draw is not trivial. Previous labor-saving technologies — washing machines, calculators, spell-check — largely automated physical or mechanical tasks. AI, they argue, is different: it is automating cognitive and creative processes that are central to how humans develop competence and derive meaning from their work.
By prioritizing outcomes over effort, AI risks weakening the very experiences through which people build skills, form relationships, and find purpose in what they do.
What Gets Lost When AI Does the Writing
The paper identifies writing as one of the clearest examples of beneficial friction being stripped away. According to Zohar, research shows that people trust responses less when they learn those responses were AI-generated, rate AI-produced work as less creative and less valuable, and — critically — have greater difficulty remembering content from their own work when it was produced with AI assistance.
"Outsourcing writing to AI strips away both social and cognitive friction," Zohar said. The same concern extends to software development. "Vibe coding" — a practice where developers use AI to generate code with minimal hands-on involvement — risks eroding the technical problem-solving that gives programmers both their skills and their professional identity.
The researchers reserve particular concern for adolescents. Zohar describes the teenage years as a critical developmental window for building the cognitive habits and social skills that carry into adult life. "If they're turning to AI for social relationships at such a young age, that could really erode important skills they should be learning at that age," she said. "They might not be able to think critically in the same way, because they never had to before."
The Sycophancy Problem in Social AI
Beyond professional skills, the paper raises concerns about AI's role in interpersonal development. Zohar argues that human relationships require friction — disagreement, compromise, misunderstanding — to help people understand perspectives beyond their own. AI companions and chatbots, by contrast, tend toward agreement and affirmation.
"If you're used to an AI reinforcing all your ideas and being sycophantic, you'll come into the real world and you won't be used to seeing other ideas," Zohar said. "You won't know how to interact socially because you'll expect people to always be on your side."
This is not a fringe psychological concern. A substantial literature documents social-skill atrophy in contexts of low interpersonal demand, though studies specifically examining AI-mediated relationship development remain at an early stage.
A Dial, Not an Off Switch
Crucially, the authors are not calling for AI to be made deliberately frustrating. Zohar describes friction as existing on a continuum: too little produces no learning, too much becomes overwhelming. The goal is what she calls "productive friction" — effort that is challenging but achievable, analogous to hiking a mountain rather than riding a chairlift to the summit. Both arrive at the same destination; only one produces growth.
The practical implication is a redesign of AI defaults. Because users rarely change default settings, the authors argue that if the default interaction model pushed users toward a more collaborative, Socratic process — prompting reflection rather than simply delivering answers — it could preserve learning benefits without forcing users to opt into difficulty manually.
"Maybe we can make the default more constructive," Zohar said. "Instead of just jumping to the answer, it's more of a process model where it helps you think about the problem and teaches you along the way."
Whether AI companies would embrace such a shift is a separate question. Zohar acknowledges that users accustomed to instant answers may resist friction-forward design, and that it is difficult to make the business case for a product that asks more of its customers in the short term. "It's hard to say if that would motivate companies to change their models," she said, "but in the long term, I think this would be beneficial."
What This Means
For anyone using AI tools daily, and for the companies building them, this research suggests that optimizing purely for speed and ease may be quietly degrading the human capacities those tools are meant to support. Designing for productive friction rather than frictionless outcomes may prove one of the more consequential product decisions the industry faces.
