For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

  • De Lancre@lemmy.world
    18 hours ago

    Wait, you mean using a Large Language Model that was created to parse walls of text, to parse walls of text, is a legit use?

    Those kids at openai would’ve been very upset if they could read.

    • lightnsfw@reddthat.com
      3 hours ago

      Even for that it’s mid at best. I often try using Copilot at work and it makes shit up constantly.