The Lazy Genius: How AI Boosts Us and Bums Us Out



Picture this: A college student is cramming for an exam, a marketer is drafting a pitch, and a writer is brainstorming ideas for their next big project. Each turns to an AI assistant, and within seconds, they have insights that would have otherwise taken hours to compile. They feel like productivity powerhouses—part superhero, part sorcerer. And they’re not alone. According to a new report from the Imagining the Digital Future Center, 54 percent of American adults using large language models (LLMs) say these tools boost their productivity, while 42 percent claim they enhance creativity.

But as they sit back, reveling in AI-powered efficiency, a nagging thought emerges: Is this making me lazy? Turns out, they’re in good company. Half of LLM users say they’ve felt lazy when relying on AI, and 35 percent have even felt like they were cheating.

This push-pull—empowerment vs. unease—is the defining paradox of AI augmentation. We’re soaring, but we’re squirming. Why? And what does that say about the way we relate to intelligence, both artificial and our own?

The Empowerment High: Why AI Feels Like a Superpower

First, let’s talk about why people love these tools. That 54 percent figure isn’t just feel-good self-reporting; LLMs are tireless assistants, breezing through research, drafting, and brainstorming at warp speed. Think of them as interns who never take coffee breaks, except they’re infinitely faster and don’t roll their eyes when asked for another revision.

And creativity? Around 42 percent of users say AI enhances their ability to generate new ideas. It’s like having a brainstorming partner who can riff endlessly without ever getting tired. It also explains why 51 percent of users rely on LLMs for personal, informal learning—twice as many as those using them for work (24 percent). The report shows that people aren’t just using AI to grind through tasks; they’re using it to explore, to dream, to expand their intellectual horizons.

For some, this relationship is more than transactional. Approximately 65 percent of LLM users have engaged in spoken, back-and-forth interactions with their AI assistants, with 34 percent doing so multiple times a week. Users aren’t just consulting AI; they’re conversing with it. And in those exchanges, something interesting happens—49 percent of users think their AI assistant is smarter than they are.

When people feel smarter because of AI, they bask in a sense of mastery—often called self-efficacy. That rush of competence fuels the drive to keep learning, keep experimenting, keep pushing their own limits. It can be invigorating. But every superpower has a shadow side.

When AI Feels Like a Guilt-Ridden Shortcut

Then comes the unease. That creeping feeling that maybe people aren’t earning their insights. Half of LLM users report feeling lazy when using AI, and 35 percent have outright felt like they were cheating.

Why? One explanation lies in effort justification, a psychological principle that says we value things more when we work hard for them. When AI hands someone an answer in seconds, it short-circuits that effort. The result? A gnawing sense that they’ve skipped the struggle and, in doing so, lost some intangible quality of “authentic” achievement.

Around 33 percent of users worry about becoming too dependent on LLMs, fearing they’re outsourcing their brains. That fear isn’t unfounded—23 percent of users report making a significant mistake or bad decision because they relied too much on AI. It’s the double-edged sword of trust: AI makes people faster, but sometimes at the cost of discernment.

The guilt is more than an abstract moral twinge; it can even reflect something of an identity crisis. If the best ideas are coming from an AI boost, is the user still the genius? Or just the lazy operator of a very smart machine?

The Paradox of Trust, Doubt, and the Digital Self

So why does AI make people feel both unstoppable and uneasy? At its core, this paradox is about agency. LLMs sit at the crossroads of two fundamental psychological forces.

  • Intrinsic motivation (me)—the joy of figuring things out for oneself.
  • Extrinsic efficiency (them)—the appeal of getting things done faster and better.

When LLMs enhance productivity and creativity, they feed the need for competence and self-efficacy. But when they hand out answers without effort, they trigger cognitive dissonance—the friction between wanting to be the hero of one’s own intelligence story and loving the convenience of a super-powered sidekick.

It may also explain why 25 percent of users say their AI assistant “cheers them up”: they aren’t just using these tools; they’re engaging with them. Yet 35 percent also report feeling frustrated or confused, mirroring the push-pull of any close relationship.

And perhaps that’s the key. AI isn’t just a tool; it’s a mirror. It reflects back ambitions, insecurities, and evolving definitions of intelligence. The more human-like LLMs feel—with 32 percent of users saying their AI has a sense of humor and 25 percent believing it makes moral judgments—the more people wrestle with what that means for their own cognitive identity.

The author of the new report puts it this way:

“These tools are becoming deeply woven into daily life, not just as productivity boosters but as something more personal, shaping emotions, decisions, and even self-perception. Early adopters are navigating this paradox with a mix of excitement and uncertainty. Many feel hopeful about AI-driven medical advances and believe humans will remain in control. At the same time, they worry about the broader ripple effects, from social and economic upheavals to the challenge of redefining meaning and purpose in a world where machines can do so much. AI is not just changing the way we work. It is forcing us to rethink what it means to be human.” — Lee Rainie, Director of Imagining the Digital Future Center

The Work-in-Progress Digital Self

This report doesn’t tell us what to think; it just holds up a mirror to how we’re thinking. Half of American adults are now LLM users, and they’re navigating a messy mix of awe and ambivalence, productivity and paranoia, creativity and crisis.

Maybe that’s the real takeaway: The digital self is still evolving. AI doesn’t replace human intelligence; it reshapes how intelligence is experienced. And just as with any major cognitive shift—writing, printing, computing—people are still figuring out the balance.
