Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%: the AI tooling slowed developers down.
The gap between what developers forecast and what actually happened is striking.

Did these developers not have experience with AI?
I’m not sure that focusing on one aspect, in order to scope a reasonable and doable study, automatically makes it “really low effort”.
If they were to test a range of project types, it’d have to be a much bigger study.