We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (a word or fragment of a word) will come next in a sequence, based on the data it’s been trained on.
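To make that “guessing” concrete, here’s a minimal sketch of next-token sampling, the loop at the heart of text generation. The context, vocabulary and probabilities below are invented purely for illustration; a real model computes its distribution from billions of learned weights, not a hand-written table:

```python
import random

# Hand-written, purely illustrative probabilities for one two-word
# context; a real LLM derives these from learned parameters over a
# vocabulary of tens of thousands of tokens.
toy_table = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
}

def next_token(context, table):
    """Sample the next token from the model's probability distribution."""
    dist = table[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(next_token(("the", "cat"), toy_table))  # e.g. "sat"
```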
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (the machine) and consciousness (a human phenomenon).
Who’s the techbro here? The fact that you can’t even have a discussion without repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.
Is it techbro of me to think that pushing AI into every product is stupid? Is it techbro of me to not immediately assume that humans are so much more special than simply organic thinking machines? You say I’m being reductive, degrading, and dehumanising, but that’s all simply based on your insecurity.
I was simply being realistic based on the little we know of the human brain and how it works: it’s pretty much a probabilistic machine until we discover this special something that supposedly makes us better than other neural networks. Without that discovery, your insistence is based on nothing more than your own desire to feel special.
Yep, that’s a bingo!
Humans are absolutely more special than organic thinking machines. I’ll go a step further and say all living creatures are more special than that.
There’s a much more interesting discussion to be had than “humans are basically chatbots”, but it’s this line of thinking that I find irritating.
If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw, chatbots aren’t that and likely never will be), you can feel free to dispose of humanity. It’s a nice precursor to damning humanity to die so that you can have your robot army take over the world.
Show your proof, then. I’ve already said what I need to say about this topic.
We have no idea how humans think, yet you’re so confident that LLMs don’t and never will be similar? Are you the techbro now? You’re speaking so confidently on something that I don’t think can be proven at this moment, and I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.
Understanding how the human brain works is a wonderful thing that will let us unlock better treatments for mental health issues. Being able to understand it fully means we should also be able to replicate it to a certain extent. None of this involves disposing of humans.
This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I’ve said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot-army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath, and maybe take your own advice and go touch some grass.
Much of the universe can be modeled as probabilities. So what? I can model a lot of things as different things. That does not mean that the model is the thing itself. Scientists are still doing what scientists do: being skeptical and making and testing hypotheses. It was difficult to prove definitively that smoking causes cancer, yet you’re willing to hop to “human thought is just an advanced chatbot” on scant evidence.
No, it’s again a case of you buying the bullshit arguments of tech bros. Even if we had a machine capable of replicating human thought, humans are more than walking brain stems.
You want proof of that? Take a look at yourself. Are you a floating brain stem or a being with limbs?
At even the most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with general intelligence built in.
Even if someone one day creates a robot with all or most of these capabilities, one worthy of being considered for rights, we still won’t be the organic version of that robot. We’ll still be human.
I think you’re beyond having to touch grass. You need to take a fucking humanities course.
Not what I said. My point is that humans are organic probabilistic thinking machines and LLMs are just an imitation of that. And your assertion that an LLM is never, ever gonna be similar to how the brain works is based on what evidence, again?
What the hell are you even rambling about? It’s like you completely ignored my previous comment, since you’re still going on about robots.
Bro, don’t hallucinate an argument I never made, please. I’m only discussing how the human mind works, yet here you are arguing about human limbs and what it means to be human?
I’m not interested in arguing with someone who’s more interested in inventing ghosts to argue with than in looking at what I actually said.
And again, go take your own advice and maybe go to therapy or something.
Yeah, you reduced humans to probabilistic thinking machines with no evidence at all.
I didn’t assert that LLMs would definitely never reach AGI, but I do think they aren’t a path to AGI. Why do I think that? Because companies have spent untold billions of dollars and put everything they had into them, and they’re still nowhere close to AGI. Basic research is showing that, if anything, the models are getting worse.
Where’d you get the idea that you know how the human mind works? You a fucking neurological expert because you misinterpreted some scientific paper?
I agree there isn’t much to be gained by continuing this exchange. Bye!