LOOK MAA I AM ON FRONT PAGE
Wow it’s almost like the computer scientists were saying this from the start but were shouted over by marketing teams.
I see a lot of misunderstandings in the comments 🫤
This is a pretty important finding for researchers, and it’s not obvious by any means. It does not show a problem with LLMs’ abilities in general. The issue they discovered is specific to so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.
Most reasoning models are not incentivized to think correctly; they are rewarded based only on their final answer, not on the intermediate steps. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
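To make the incentive point concrete, here is a minimal toy sketch (my own illustration, not any lab’s actual training setup): an outcome-only reward looks at nothing but the final answer, so a chain of thought with a wrong step still earns full credit, whereas a hypothetical process-based reward would score the steps themselves.

```python
def outcome_reward(final_answer: str, gold_answer: str) -> float:
    """Reward depends solely on whether the final answer matches the reference."""
    return 1.0 if final_answer.strip() == gold_answer.strip() else 0.0

def process_reward(step_scores: list[float]) -> float:
    """Hypothetical process-based alternative: average a per-step score
    (e.g., from a verifier or rater) over the whole chain of thought."""
    return sum(step_scores) / len(step_scores) if step_scores else 0.0

# A correct answer reached through a flawed step still gets full outcome reward.
steps = ["2 + 2 = 5", "so the answer is 4"]
print(outcome_reward("4", "4"))       # 1.0 -- the bad reasoning step goes unpunished
print(process_reward([0.0, 1.0]))     # 0.5 -- a process reward would penalize it
```

If something like this is how the incentives are set up, the model is being optimized to land on answers, not to reason its way to them.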
You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.
Maybe you failed all your high school classes, but that ain’t got none to do with me.
Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.