We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
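
To make “guessing which word comes next” concrete, here is a minimal, purely illustrative sketch in Python (the tiny corpus and the bigram counting are made up for illustration; real chatbots use learned neural networks over subword tokens rather than raw counts, but the generation step is the same kind of probability-weighted next-token choice):

import random
from collections import Counter, defaultdict

# Toy stand-in for the "oceans of human data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    # Repeatedly sample the next word in proportion to how often it followed the previous one.
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat the"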

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • hera@feddit.uk · 1 day ago

    Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

    We observe things, we learn things and when required we do or say things based on the things we observed and learned. That’s exactly what the AI is doing.

    I don’t think we have achieved “AGI” but I do think this argument is stupid.

    • NotASharkInAManSuit@lemmy.world · 1 hour ago

      Pointing out that humans are not the same as a computer or a piece of software on a fundamental level of form and function is hardly philosophical. It’s just basic awareness of what a person is and what a computer is. We can’t say for sure how things work in our brains, and you’re evangelizing that computers are capable of the exact same thing, only better, yet you accuse others of not understanding what they’re talking about?

    • ArbitraryValue@sh.itjust.works · 1 day ago

      Yes, the first step to determining that AI has no capability for cognition is apparently to admit that neither you nor anyone else has any real understanding of what cognition* is or how it can possibly arise from purely mechanistic computation (either with carbon or with silicon).

      Given the paramount importance of the human senses and emotion for consciousness to “happen”

      Given? Given by what? Fiction in which robots can’t comprehend the human concept called “love”?

      *Or “sentience” or whatever other term is used to describe the same concept.

      • hera@feddit.uk · 14 hours ago

        This is always my point when it comes to this discussion. Scientists tend to get to the point in the discussion where consciousness is brought up, then start waving their hands and acting as if magic is real.

        • ArbitraryValue@sh.itjust.works · 10 hours ago

          I haven’t noticed this behavior coming from scientists particularly frequently - the ones I’ve talked to generally accept that consciousness is somehow the product of the human brain, the human brain is performing computation and obeys physical law, and therefore every aspect of the human brain, including the currently unknown mechanism that creates consciousness, can in principle be modeled arbitrarily accurately using a computer. They see this as fairly straightforward, but they have no desire to convince the public of it.

          This does lead to some counterintuitive results. If you have a digital AI, does a stored copy of it have subjective experience despite the fact that its state is not changing over time? If not, does a series of stored copies representing, losslessly, a series of consecutive states of that AI? If not, does a computer currently in one of those states and awaiting an instruction to either compute the next state or load it from the series of stored copies? If not (or if the answer depends on whether it computes the state or loads it) then is the presence or absence of subjective experience determined by factors outside the simulation, e.g. something supernatural from the perspective of the AI? I don’t think such speculation is useful except as entertainment - we simply don’t know enough yet to even ask the right questions, let alone answer them.
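
          A tiny sketch of the computed-versus-loaded distinction, assuming an arbitrary deterministic toy update (the hash below is just a placeholder, not a claim about how any AI works): the states themselves carry no record of whether they were computed fresh or loaded from a stored series.

          import hashlib

          def next_state(state: bytes) -> bytes:
              # Stand-in for "one step of the simulation": any deterministic update would do.
              return hashlib.sha256(state).digest()

          def run(initial: bytes, steps: int) -> list[bytes]:
              states = [initial]
              for _ in range(steps):
                  states.append(next_state(states[-1]))
              return states

          live = run(b"state-0", 5)    # states produced by stepping the simulation
          stored = run(b"state-0", 5)  # the same series, as if saved earlier and merely loaded back
          print(live == stored)        # True: nothing in the states distinguishes the two paths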

          • hera@feddit.uk · 8 hours ago

            I’m talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

    • aesthelete@lemmy.world · 1 day ago

      How is outputting things based on things it has learned any different to what humans do?

      Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs and then echoing the most common of those as output, then you mustn’t reason well or often.

      If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

      • kibiz0r@midwest.social · 1 day ago

        If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

      • chunes@lemmy.world · 23 hours ago

        Do you think most people reason well?

        The answer is why AI is so convincing.

      • FourWaveforms@lemm.ee · 1 day ago

        When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

        • NotASharkInAManSuit@lemmy.world · 2 hours ago

          By this logic, we never came up with anything new, ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing, and it wasn’t a probabilistic response.

          Your lack of creativity is not universal; people create new things all the time, and you simply cannot program ingenuity or inspiration.

            • aesthelete@lemmy.world · 4 hours ago

              Dude, chatbots lie about their “internal reasoning process” because they don’t really have one.

              Writing is an offshoot of verbal language, which, when people construct it, almost always has more to do with sound and personal style than with the popularity of words. It’s not uncommon to bump into individuals who have a near-singular personal grammar and vocabulary, and who speak and write completely differently, with a distinct style of their own. Also, people are terrible at probabilities.

              As a person, I can also learn a fucking concept and apply it without having to have millions of examples of it in my “training data”. Because I’m a person not a fucking statistical model.

              But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren’t talking heads on television in order to discover that truth.

              • stephen01king@lemmy.zip · 3 hours ago

                Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

                Or is it because you learned the fucking concept and not because it’s been expressed too commonly in your “training data”? Honestly, it just sounds like you’ve heard too many people use that insult successfully and now you can’t help but probabilistically express it after each comment lol.

                Maybe stop parroting other people and projecting that onto me and maybe you’d sound more convincing.

                • aesthelete@lemmy.world · 3 hours ago

                  Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

                  In this discussion, it’s a personal style thing combined with a desire to irritate you and your fellow “people are chatbots” dorks and based upon the downvotes I’d say it’s working.

                  And that irritation you feel is a step on the path to enlightenment if only you’d keep going down the path. I know why I’m irritated with your arguments: they’re reductive, degrading, and dehumanizing. Do you know why you’re so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

                  • stephen01king@lemmy.zip · 2 hours ago

                    Who’s a techbro? The fact that you can’t even have a discussion without repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.

                    Is it techbro of me to think that pushing AI into every product is stupid? Is it techbro of me to not immediately assume that humans are so much more special than simply organic thinking machines? You say I’m being reductive, degrading, and dehumanising, but that’s all simply based on your insecurity.

                    I was simply being realistic based on the little we know of the human brain and how it works; it is pretty much that until we discover some special something that would justify thinking we’re better than other neural networks. Without that discovery, your insistence is based on nothing more than your own desire to feel special.

    • counterspell@lemmy.world · 1 day ago

      No, it’s really not at all the same. Humans don’t think according to the probability of which word is most likely to come next.

    • middlemanSI@lemmy.world · 1 day ago

      Most people, evidently including you, can only ever recycle old ideas. Like modern “AI”. Some of us can conceive new ideas.

        • middlemanSI@lemmy.world · 13 hours ago

          Wdym? That depends on what I’m working on. For pressing issues like rising energy consumption, CO2 emissions and civil privacy / social engineering issues, I propose heavy data center tariffs for non-essentials (like “AI”). Humanity is going the wrong way on those issues just so we can have shitty memes and cheat at schoolwork until the earth spits us out. The cost is too damn high!

          • hera@feddit.uk · 8 hours ago

            What do you mean, what do I mean? You were the one who brought up ideas in the first place…

            • aesthelete@lemmy.world · 3 hours ago

              If you don’t think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?

              Also, you’re the one with the burden of proof in this exchange. It’s a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs given that we created the freaking chat bot you are convinced we all are.

              You may not have new ideas, or be creative. So maybe you’re a chatbot with organs, but people who aren’t do exist.

          • stephen01king@lemmy.zip · 11 hours ago

            And are tariffs a new idea, or something you recycled from what you’ve heard about tariffs before?