Then I asked her whether she knows about the books2 dataset (they trained this AI on pirated books from Z-Library and elsewhere, completely ignoring copyright), and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Cethin@lemmy.zip

Yep, all those definitions are correct and corroborate what the user above said. An LLM doesn't learn the way an animal learns. It isn't intelligent; it only reproduces patterns that resemble human speech, and those aren't the same thing. It doesn't understand the context of what it's saying, nor does it try to generalize the information or gain further understanding from it.
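
    To make the "reproduces patterns" point concrete, here's a minimal toy sketch (my own illustration, nothing like a real transformer, which learns billions of parameters rather than a lookup table): generation is just sampling a statistically likely next token given the recent context, over and over.

        import random

        # Hypothetical stand-in for learned parameters: a tiny table mapping
        # a two-word context to a distribution over possible next words.
        learned_patterns = {
            ("the", "cat"): {"sat": 0.7, "ran": 0.3},
            ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
            ("sat", "on"): {"the": 1.0},
            ("on", "the"): {"mat": 0.8, "chair": 0.2},
        }

        def generate(prompt, steps=4):
            tokens = prompt.split()
            for _ in range(steps):
                context = tuple(tokens[-2:])
                dist = learned_patterns.get(context)
                if dist is None:
                    break  # no pattern seen for this context
                words, probs = zip(*dist.items())
                # Pick the next word in proportion to how often the pattern
                # appeared in the training data; no understanding involved.
                tokens.append(random.choices(words, weights=probs)[0])
            return " ".join(tokens)

        print(generate("the cat"))  # e.g. "the cat sat on the mat"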

    It may pass the Turing test, but that’s neither a necessary nor sufficient condition for intelligence. It is just a useful metric.

    • Sir Gareth@programming.dev

LLMs are expert systems whose expertise is producing believable, coherent sentences. They can “learn” to get better at that expert task, but they cannot generalise to other tasks.