Coders spent more time prompting the AI and reviewing its output than they saved on coding. On the surface, METR’s results seem to contradict other benchmarks and experiments that show coding efficiency increasing when AI tools are used. But those studies often measure productivity in total lines of code or in the number of discrete tasks, commits, or pull requests completed, all of which can be poor proxies for actual coding efficiency. The discrepancy leads the researchers to conclude that current AI coding tools may be particularly ill-suited to “settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.” While those conditions may not hold in “many realistic, economically relevant settings” involving simpler codebases, they could limit the impact of AI tools in this study and in similar real-world situations.

  • ansiz@lemmy.world · 1 day ago

    I don’t doubt this is true. I’ve been playing with an AI and some fairly simple Python scripts, and it’s so tedious to get the AI to actually change the script correctly. Learning to prompt is a skill all its own.

    In my experience it’s much more useful for AWS tasks like creating a CloudFormation template, looking through user permissions for excess privileges, or setting up a backup schedule, especially at scale when you have lots of accounts and users.
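
    For example, here’s a rough sketch of the kind of permissions check I mean, using boto3. It assumes your AWS credentials are already configured, and the list of “overly broad” policy names is just illustrative; adjust it to whatever counts as excessive in your setup:

    ```python
    import boto3

    # Assumes credentials are already configured (env vars, ~/.aws/credentials, etc.)
    iam = boto3.client("iam")

    # Illustrative list of managed policies we consider too broad for ordinary users
    OVERLY_BROAD = {"AdministratorAccess", "PowerUserAccess", "IAMFullAccess"}

    # Page through every IAM user and flag any with a broad policy attached directly
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            name = user["UserName"]
            attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
            broad = [p["PolicyName"] for p in attached if p["PolicyName"] in OVERLY_BROAD]
            if broad:
                print(f"{name}: {', '.join(broad)}")
    ```

    This only looks at policies attached directly to users, not group or role memberships, but it’s the sort of boilerplate script where the AI saves real time once you have dozens of accounts to run it against.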