Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR’s results seem to contradict other benchmarks and experiments that show efficiency gains when AI coding tools are used. But those studies often measure productivity in terms of total lines of code or the number of discrete tasks, code commits, or pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to “settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.” While those factors may not apply in “many realistic, economically relevant settings” involving simpler code bases, they could limit the impact of AI tools in this study and in similar real-world situations.
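To make the proxy problem concrete, here is a toy sketch in Python. The task names and every number are invented for illustration, not drawn from METR’s study or any benchmark: by lines of code or commit counts, an AI-assisted workflow can look more productive on a task even when it takes more wall-clock time, which is the quantity METR actually measured.

```python
# Toy illustration (all numbers invented, not from METR's data):
# output-based proxies can move in the opposite direction from the
# wall-clock time a task actually takes.

# Hypothetical per-task measurements: (minutes, lines of code, commits)
tasks = {
    "fix-auth-bug": {"manual": (45, 30, 1), "ai-assisted": (55, 90, 3)},
    "add-endpoint": {"manual": (60, 80, 2), "ai-assisted": (70, 150, 4)},
}

for name, runs in tasks.items():
    m_min, m_loc, m_commits = runs["manual"]
    a_min, a_loc, a_commits = runs["ai-assisted"]
    print(f"{name}:")
    print(f"  lines-of-code proxy: manual={m_loc}, ai={a_loc} (AI looks faster)")
    print(f"  commit-count proxy:  manual={m_commits}, ai={a_commits} (AI looks faster)")
    print(f"  wall-clock minutes:  manual={m_min}, ai={a_min} (AI is actually slower)")
```

Counting output makes the AI-assisted runs look more productive on both proxies, while the time-based measurement shows the opposite.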

  • bulwark@lemmy.world · 1 day ago

    I’m not really sure why the sample size was so small. It definitely casts doubt on some of their conclusions. I also have issues with some of the methodology they used. I think a better study was the one that came out a week or two ago showing visible neurological decline from AI use.