AI made these developers slower
And how not to be one of them
Everyone says AI makes you faster. The data says otherwise.
A rigorous study from METR tested 16 experienced developers on 246 real tasks in codebases they'd worked on for 5+ years, using tools like Cursor Pro and Claude Sonnet.
The result? Developers using AI took 19% longer to complete tasks.
The twist? Those same developers believed they were 20% faster.
Their brains lied to them.
Why this matters
This isn't another "AI bad" take. It's a pattern recognition problem.
The study found that AI helps most when:

- You're unfamiliar with the codebase
- The task involves boilerplate or research
- You're exploring, not executing

AI hurts most when:

- You already know the codebase deeply
- The task requires institutional knowledge
- You're an expert doing expert-level work
The developers in the study had an average of 5 years and 1,500 commits on their codebases. They were already fast. AI couldn't keep up.
Where the time actually goes
The screen recordings revealed how AI-assisted work differs:
| Without AI | With AI |
|---|---|
| Write code | Write prompt |
| Debug | Wait for generation |
| Ship | Review AI output |
| | Fix AI mistakes |
| | Re-prompt |
Developers spent 9% of their time reviewing and cleaning AI outputs. Another 4% just waiting for responses.
For experts, this overhead exceeded any time saved.
The perception gap
Before the study: developers predicted AI would save them 24% of their time.
After the study: developers estimated AI had saved them 20%.
Reality: AI cost them 19% more time.
Why the disconnect?
AI work feels easier. Less cognitive load, more delegation. Your brain interprets "easier" as "faster."
More idle time. Screen recordings showed more periods of no activity with AI. Developers zoned out more.
Confirmation bias. When AI helps on one task, you remember it. When it wastes time, you blame the task.
5 techniques that actually work
Based on what high-performers do differently:
1. Use AI for unfamiliar territory only
If you can write it faster than you can explain it, just write it. Save AI for code you'd have to Google anyway.
2. Time-box your prompts
Set a 2-minute rule. If you can't get useful output in 2 minutes of prompting, switch to manual coding. Don't enter the re-prompt death spiral.
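Here's a minimal sketch of that rule using only Python's standard library; `start_prompt_timer` and its reminder message are illustrative, not part of any real tool:

```python
# A minimal sketch of the 2-minute rule (stdlib only).
# start_prompt_timer() is an illustrative helper, not a real tool's API.
import threading

PROMPT_BUDGET_SECONDS = 120  # the 2-minute rule

def start_prompt_timer(budget: float = PROMPT_BUDGET_SECONDS) -> threading.Timer:
    """Start a countdown that nags you when the prompt budget is spent."""
    timer = threading.Timer(
        budget,
        lambda: print("2 minutes up. No useful output? Write it by hand."),
    )
    timer.daemon = True  # don't keep the interpreter alive just for the nag
    timer.start()
    return timer

# Start it when you open the chat; cancel the moment you get usable output.
timer = start_prompt_timer()
# ... prompt, read, iterate ...
timer.cancel()
```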
3. Aim AI at medium-complexity tasks
Skeptics use AI for complex tasks (where it fails). Enthusiasts use it for simple tasks (where it succeeds). Be strategic: aim AI at medium-complexity tasks where deep context isn't critical.
4. Write tests first, then let AI implement
Give AI a clear target. "Make these tests pass" is better than "build this feature." Measurable success criteria = better AI output.
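As a concrete example, you might hand the model nothing but a failing test file like the sketch below; `slugify` and the `myproject.text` module are hypothetical names used for illustration.

```python
# Sketch of the "make these tests pass" workflow under pytest.
# slugify() and myproject.text are hypothetical; the point is that
# the tests, not the prompt, define success.
from myproject.text import slugify  # the function you ask the AI to write

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("What's new?") == "whats-new"

def test_collapses_repeated_whitespace():
    assert slugify("too   many   spaces") == "too-many-spaces"
```

The prompt shrinks to one unambiguous line: implement `slugify` so this file passes.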
5. Track your actual time
Mike Judge, a developer who saw the METR study, ran his own experiment: flipping a coin to decide AI vs manual for each task over 6 weeks.
His result: AI slowed him down 21%.
You won't know your number until you measure it.
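If you want to run the same experiment, a minimal sketch of the coin-flip protocol could look like this; the file name, CSV columns, and prompts are all illustrative choices:

```python
# Sketch of a coin-flip self-experiment: randomly assign each task
# to AI or manual, time it, and append the result to a CSV.
import csv
import random
import time
from pathlib import Path

LOG = Path("task_times.csv")  # illustrative log location

def run_task(description: str) -> None:
    condition = random.choice(["ai", "manual"])  # the coin flip
    print(f"Task: {description} -> work in {condition.upper()} mode")
    start = time.monotonic()
    input("Press Enter when the task is done... ")
    minutes = (time.monotonic() - start) / 60
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["task", "condition", "minutes"])
        writer.writerow([description, condition, f"{minutes:.1f}"])

if __name__ == "__main__":
    run_task(input("Describe the task: "))
```

After a few weeks, average the minutes per condition and you have your own number.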
The bottom line
AI coding tools aren't useless. They're context-dependent.
For junior developers learning new codebases: huge help.
For experts on familiar projects: often a tax.
For everyone: the productivity boost has a steeper learning curve than anyone expected.
The winners aren't the ones who use AI most. They're the ones who know when not to.
Catch you tomorrow.
If this changed how you think about AI coding tools, share it with a developer who's frustrated that Cursor isn't making them 10x faster.

