Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR’s results seem to contradict other benchmarks and experiments that demonstrate increased coding efficiency when AI tools are used. But those studies often measure productivity in terms of total lines of code or the number of discrete tasks, code commits, or pull requests completed, all of which can be poor proxies for actual coding efficiency. These discrepancies led the researchers to conclude that current AI coding tools may be particularly ill-suited to “settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.” While those factors may not apply in “many realistic, economically relevant settings” involving simpler code bases, they could limit the impact of AI tools in this study and in similar real-world situations.
I was talking mostly about side projects. I don’t have much time for them right now. Thanks to LLMs, I can spend those few hours a week on doing instead of reading about the best way to do X in the ever-changing world of web front-end frameworks. I just sit down, ask “how is it usually done?”, tweak the result a bit, and finish.
Example: I published an app on Flathub a while ago. Doing that from scratch is damn complicated. “Screw it” is what I would have said after a few hours in the pre-LLM era ;)