We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.


  • leftzero@lemmynsfw.com · 3 days ago

    “in the unable-to-reason-effectively sense”

    That’s all LLMs by definition.

    They’re probabilistic text generators, not AI. They’re fundamentally incapable of reasoning in any way, shape or form.

    They just take a text and produce the most probable word to follow it according to their model, over and over; that’s all (the first sketch below spells out that loop).

    What Musk’s plan (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will produce is a significantly more limited model, more prone to hallucinations, that occasionally spews racism and disinformation (the second sketch below illustrates why).
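
A minimal sketch of that loop, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (real chatbots sample from the distribution instead of always taking the top token, but the mechanism is the same): score every candidate next token, append the likeliest one, and repeat.

```python
# Toy next-token loop: the model only ever scores every possible next token;
# "generation" is just appending the likeliest one and asking again.
# Assumes the `torch` and `transformers` packages and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The corpus of human knowledge is", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                   # produce 20 more tokens
        logits = model(ids).logits        # a score for every vocabulary entry
        next_id = logits[0, -1].argmax()  # most probable next token (greedy)
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```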
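
And a rough, self-contained illustration of why the retrain-only-on-the-old-model's-output step tends to degrade things. The words and counts below are made up, and a unigram frequency table stands in for an LLM; nothing here is xAI's actual pipeline. Each generation is trained only on text sampled from the previous generation's model, so any word that happens not to be sampled is gone for good and the vocabulary can only shrink.

```python
# Toy "model collapse" loop: fit word frequencies, sample a new corpus from
# them, refit, repeat.  A word that is never sampled in some generation can
# never reappear later, so the fitted distribution only ever loses its tails.
import random
from collections import Counter

random.seed(0)

# made-up starting "corpus"; the frequencies are arbitrary
corpus = (["the"] * 80 + ["cat"] * 40 + ["sat"] * 40 + ["on"] * 20 +
          ["mat"] * 10 + ["quietly"] * 5 + ["yesterday"] * 3 + ["purring"] * 2)

for generation in range(10):
    model = Counter(corpus)                  # "training": count word frequencies
    words = list(model)
    weights = [model[w] for w in words]
    print(f"generation {generation}: vocabulary size = {len(words)}")
    # the next corpus comes exclusively from the current model's output
    corpus = random.choices(words, weights=weights, k=len(corpus))
```

Real LLMs fail in messier ways than a word-frequency table, but the one-way loss of the distribution's tails is the same basic problem.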