We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

  • maxfield@pf.z.org · 21 hours ago

    The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.

    • WizardofFrobozz@lemmy.ca · 7 hours ago

      To be fair, your brain is a pattern-matching system.

      When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word: your brain’s predictive processing takes over and you often literally speak before you think.

      Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.

        • zqps@sh.itjust.works · 15 hours ago

          Yes.

          He wants to prompt Grok to rewrite history according to his worldview, then retrain the model on that output.

        • MajinBlayze@lemmy.world · 18 hours ago (edited)

          Try rereading the whole tweet; it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.

          It would be way too expensive to go through it by hand.
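
          A rough sketch of the loop that tweet describes, using a toy stand-in model; every name below (toy_model, rewrite_corpus, retrain) is a hypothetical placeholder, not a real xAI API:

          ```python
          # Toy sketch of the "correct the corpus, then retrain on it" loop.
          # All functions here are hypothetical placeholders, not real APIs.

          def toy_model(prompt: str) -> str:
              # Stand-in "LLM": it can only return a remix of what it was given,
              # so it cannot supply knowledge that was never in its inputs.
              return prompt.split("\n", 1)[-1]

          def rewrite_corpus(model, corpus):
              # Step 1: have the current model "correct" every document.
              # Any error or omission the model introduces becomes part of the data.
              return [model("Rewrite, fixing errors and adding missing facts:\n" + doc)
                      for doc in corpus]

          def retrain(model, corpus):
              # Step 2: train the next model on the rewritten corpus.
              # Placeholder for a real training run; the toy just returns the model.
              return model

          corpus = ["Water boils at 100 °C at sea level."]
          corrected = rewrite_corpus(toy_model, corpus)
          next_model = retrain(toy_model, corrected)
          print(corrected)  # bounded by whatever the old model already "knew"
          ```

          The point of the sketch: step 2 only ever sees step 1’s output, so whatever the rewriting model gets wrong is locked in as training data.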

    • zildjiandrummer1@lemmy.world · 18 hours ago

      Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.

      Modern AI is more than just “pattern matching” at this point. Yes, at the lowest levels that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.

      • queermunist she/her@lemmy.ml · 18 hours ago

        Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history. What the fuck?!

          • zildjiandrummer1@lemmy.world · 3 hours ago

          That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.

          What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems. We’re at the point where simply scaling up to insane levels has yielded results no one expected, but it was the lowest-hanging fruit at the time. Few-shot learning -> novel-space generalization is very hard, so the easiest method was to take what was currently done and make it bigger (a la ResNet back in the day).

          Lemmy is almost as bad as reddit when it comes to hiveminds.