We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.


  • zildjiandrummer1@lemmy.world
    1 day ago

    That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.

    What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems, and we’re at the point where simply scaling up to insane levels has yielded results that no one expected. But it was the lowest-hanging fruit at the time: few-shot learning → novel-space generalization is very hard, so the easiest method was to take what already worked and make it bigger (à la ResNet back in the day).

    Lemmy is almost as bad as reddit when it comes to hiveminds.

    • queermunist she/her@lemmy.ml
      3 hours ago

      You literally called it borderline magic.

      Don’t do that? They’re pattern-recognition engines; they can produce some neat results, are good for niche tasks, and are interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.