We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
I believe it won’t work.
They would have to change so much information that the result wouldn't form a coherent whole. So many alternative facts would clash with so many other aspects of life that asking about any of it would cause errors because of all the conflicts.
Sure, it might work for a bit, but it would quickly degrade, and it would be much slower than other models since it would need to error-correct constantly.
Another thing is that their training data would also be very limited, and for every changed fact they would have to check every single other item thoroughly for “false info”, which only increases their manual labour.
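To make the conflict argument concrete, here is a minimal sketch (all names, facts, and the triple representation are hypothetical, chosen just for illustration) of a pairwise consistency check over a toy fact base. It shows why one edited fact has to be compared against everything else: for n facts that is n*(n-1)/2 comparisons per pass, and every edit can invalidate pairs that were previously consistent.

    # Toy sketch: each "fact" is a (subject, relation, value) triple.
    # A conflict is two triples that assign different values to the
    # same (subject, relation) pair.

    from itertools import combinations

    facts = [
        ("moon_landing", "year", "1969"),
        ("apollo_11", "launch_year", "1969"),
        ("moon_landing", "year", "1968"),  # an injected "corrected" fact
    ]

    def conflicts(fact_base):
        """Return every pair of facts that disagree on the same key."""
        clashes = []
        for a, b in combinations(fact_base, 2):
            if a[:2] == b[:2] and a[2] != b[2]:
                clashes.append((a, b))
        return clashes

    for a, b in conflicts(facts):
        print(f"conflict: {a} vs {b}")

    # Prints the clash between the original and the "corrected" fact.
    # The check has to be re-run after every edit, because a change
    # that fixes one pair can break another.

Real knowledge is far less structured than these triples, so the checking problem in practice is much harder than this quadratic toy case, not easier.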