It’s not an assumption; it’s a matter of practical reality. If we’re at best a decade away from that point, why pretend it could suddenly and unexpectedly improve to the point where it’s unrecognizable from its current state? LLMs are neat, and scientists should keep working on them. If it weren’t for all the nonsense “AI” hype we have right now, I’d expect to see them used rarely but quite successfully, since they’d be adopted on merit, not hype.