LOOK MAA I AM ON FRONT PAGE

  • SoftestSapphic@lemmy.world · 29 days ago

    Wow it’s almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

  • minoscopede@lemmy.world · 29 days ago (edited)

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. It doesn’t show a problem with LLMs’ abilities in general. The issue they discovered is specific to so-called “reasoning models” that iterate on their answer before replying, and it might indicate that the current training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.

  • surph_ninja@lemmy.world · 29 days ago

    You assume humans do the opposite? We literally institutionalize humans who do not follow set patterns.