

Maybe I am overselling current AI and underselling our brains. But the way I see it, the exact mechanism that allowed intelligence to flourish in us exists in current neural networks. They are nowhere near AGI or UGI yet, but I think these tools alone are all that's required.
The way I see it, if we rewound the clock far enough we would see primitive life with very basic neural networks beginning to develop in existing multicellular life (something like jellyfish, possibly). These neural networks, made from neurons, neurotransmitters and synapses, or possibly something more primitive, would begin forming the most basic logic over centuries of evolution. But it wouldn't resemble anything close to reason or intelligence; it wouldn't have eyes, ears, or any need for language. It would probably spend its first million years just trying to control movement.
We know this process started from nothing: neural networks with no training data, just a free world to explore. And yet, over 500 million years later, here we are.
My argument is that modern neural networks work the same way biological brains do, at least at the level of mechanism. The main technical difference is neurotransmitters, with the various signal dampening and boosting they allow, along with neuromodulation. Given enough time and enough training, I firmly believe neural networks could develop reason. And given external sensors, they could develop thought from those input signals.
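To make the "same mechanism" claim concrete, here's a toy sketch (illustrative only, not real neuroscience): both an artificial neuron and a crude model of a biological one reduce to "weighted sum of inputs, then a nonlinearity". The `modulation` parameter is a made-up global gain standing in for neuromodulation, the kind of dampening/boosting that plain artificial neurons lack:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Standard artificial neuron: weighted sum + sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def modulated_neuron(inputs, weights, bias, modulation=1.0):
    # Same computation, but a global gain scales the summed signal,
    # loosely mimicking a neuromodulator boosting or dampening activity.
    # `modulation` is a hypothetical parameter for illustration, not a
    # model of any real neurotransmitter.
    z = modulation * (sum(x * w for x, w in zip(inputs, weights)) + bias)
    return 1 / (1 + math.exp(-z))

# With modulation=1.0 the two behave identically; a low gain flattens
# the response toward indifference (output near 0.5).
same   = artificial_neuron([1.0, 0.5], [0.8, -0.2], 0.1)
damped = modulated_neuron([1.0, 0.5], [0.8, -0.2], 0.1, modulation=0.2)
```

The point of the sketch is just that the core mechanism is shared, while the modulation term is the kind of extra machinery biology layers on top.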
I don't think we would need to engineer consciousness into it; I think it would develop one itself, given enough time to train on its own.
A large hurdle, which might arguably be a good thing, is that we are largely in control of the training. When AI is used it does not learn and alter itself; currently it only memorises things. But I do remember a time when various AI researchers let earlier models learn on their own, and, the internet being the internet, they developed some wildly bad habits.
We're on the same page about consciousness then. My original comment only pointed out that current AI has the problems we have because it replicates how we work, and people don't like recognising the very obvious fact that we share the exact problems LLMs have. LLMs aren't rational because we are inherently not rational. That was the only point I was originally trying to make.
For AGI or UGI to exist, massive hurdles will need to be cleared, likely an entire restructuring of the architecture. I think LLMs will continue to get smarter and will likely exceed us, but they will not be perfect without a massive rework.
Personally, and this is pure speculation, I wouldn't be surprised if AGI or UGI is only possible with the help of a highly advanced AI, similar to how microbiologists are only now starting to unravel protein folding with the help of AI. I think the sheer volume of data that needs processing requires something like a highly evolved AI to understand it, and that current technology is purely a stepping stone to something more.