For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

Unfortunately, unless you are hosting your own model, or using one like DeepSeek that has a fixed training-data cutoff, you are dealing with a perpetually training model.
When you ask ChatGPT things, it is horrible for the world. It digs us a little deeper into an unsalvageable situation that will probably drive us extinct.