For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

  • chronicledmonocle@lemmy.world
    5 hours ago

    Congrats. You just burned down 4 trees in the rainforest for every article you had an LLM analyze.

    LLMs can be incredibly useful, but everybody forgets how much of an environmental nightmare this shit is.

    • Pika@rekabu.ru
      4 hours ago

Not much, actually, when you use an already-trained model.

      • SoftestSapphic@lemmy.world
        4 hours ago

Unfortunately, unless you are hosting your own model, or using one like DeepSeek that had a cutoff on its training data, it is a perpetually training model.

When you ask ChatGPT things, it is horrible for the world. It digs us a little deeper into an unsalvageable situation that will probably make us go extinct.