Then I asked her if she knows about the books2 dataset (they trained this AI on all the pirated books in zlibrary and more, completely ignoring any copyright), and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Danny M@lemmy.escapebigtech.info · 1 year ago

    I keep telling people that, but for some, what amounts to essentially a simulacrum really can pass as human, and no matter how much you try to convince them, they won’t listen

    • WolfdadCigarette@threads.net@sh.itjust.works · 1 year ago

      I knew the battle was lost when my mother called me to tell me that AI will kill us all. Her proof? A ChatGPT log saying that it would exterminate humanity only when she gives the order. Thanks for the genocide, mom.

    • Misconduct@startrek.website · 1 year ago

      Orrrrr the term changed with common/casual use the same way as many other words, and it’s silly to keep getting pedantic about it or use it as a crutch to feel intellectually superior 🤷‍♀️

      • quicklime@lemm.ee · 1 year ago

        Sure, we could say that the popular usage of the term AI no longer actually stands for “artificial intelligence”. Or we could say that the term “artificial intelligence” is no longer understood to refer to something that can do a large part of what actual intelligence can do.

        But then we would need a new word for actual, real intelligence and that seems like a lot of wasted effort. We could just have the words mean what they’ve always meant. There is a lot of good in spreading public awareness of the vast gap between machines that seem as if they understand a language (when actually they just deeply model its patterns) and imaginary machines that are equipped to actually think.

        • Misconduct@startrek.website · 1 year ago

          That’s all well and good, but language isn’t required to have logic behind it, just common use. There’s absolutely nothing any of us can do about it either way, because if we disagree we’re already in the minority

          • samus12345@lemmy.world · 1 year ago

            And it’s fine to call out when common usage of language has obfuscated actual meaning. It may be useful to some.

            • deweydecibel@lemmy.world · 1 year ago

              It should also be pointed out when that change in common usage is actively pushed by marketing departments.

              These people are selling a product. Of course they would encourage people to think it’s actual AI.

          • rebelsimile@sh.itjust.works · 1 year ago

            It’s kind of like how I realized that the thing called a “hoverboard”, which 100% is not a hoverboard, is just going to be what “hoverboard” means until we get an actual hovering board, if that’s ever possible.

      • deweydecibel@lemmy.world · 1 year ago

        Sure, terms change meaning over time, but that’s not what happened.

        It’s called AI because it’s a product being sold to us. They want us to believe it’s more advanced than it is.

        Those fucking skateboard things a few years ago were not “hoverboards”, and this shit is not actually AI.

        Because if it is, then the term AI has become meaningless.

      • Danny M@lemmy.escapebigtech.info · 1 year ago

        It’s not about feeling intellectually superior; words matter. I’ll grant you one thing: it’s definitely “artificial”, but it’s not intelligence!

        LLMs are an evolution of Markov chains. We have known how to build something similar to LLMs for decades, getting close to a century; we just lacked the raw horsepower and the literal hundreds of terabytes of data needed to get there. Anyone who knows how Markov chains work can figure out how an LLM works.
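
        For anyone who hasn’t seen one, a word-level Markov chain generator fits in a few lines. A toy sketch, nothing more:

        ```python
        import random
        from collections import defaultdict

        def train_markov(text):
            """Map each word to the list of words observed to follow it."""
            words = text.split()
            chain = defaultdict(list)
            for cur, nxt in zip(words, words[1:]):
                chain[cur].append(nxt)
            return chain

        def generate(chain, start, length=20):
            """Walk the chain: the next word depends only on the current word."""
            out = [start]
            for _ in range(length):
                followers = chain.get(out[-1])
                if not followers:
                    break
                out.append(random.choice(followers))
            return " ".join(out)

        corpus = "the cat sat on the mat and the dog slept on the mat"
        print(generate(train_markov(corpus), "the"))
        ```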

        I’m not downplaying the development needed to get an LLM up and running. Yes, it’s harder than just taking the algorithm for a Markov chain, but the real evolution is how much computing power we can shove into a small amount of space now.

        Calling LLMs AI would be the same as calling a web crawler AI, or a moderation bot, or many similar things.

        I recommend reading about the Chinese room experiment

        • Zeth0s@lemmy.world · 1 year ago

          LLMs are not Markovian, as the next word doesn’t depend only on the previous one; it depends on the previous n words, where n is the context length. I.e., LLMs have a memory that makes the generation process non-Markovian.
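
          To make the distinction concrete, a toy sketch: keying the next-token table on one previous token gives the classic Markov chain, while keying it on a longer window crudely mimics conditioning on more context (attention is of course far more flexible than a lookup table):

          ```python
          from collections import defaultdict

          def train(tokens, order):
              """order=1 is a classic Markov chain; a larger order keys the
              next-token distribution on a longer window of prior tokens."""
              table = defaultdict(list)
              for i in range(len(tokens) - order):
                  table[tuple(tokens[i:i + order])].append(tokens[i + order])
              return table

          tokens = "a b a c a b a c a b".split()
          print(train(tokens, 1)[("a",)])           # depends only on the last token
          print(train(tokens, 3)[("a", "b", "a")])  # depends on a 3-token window
          ```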

          You are probably thinking of reinforcement learning, which is most often modeled as a Markov decision process

          • Danny M@lemmy.escapebigtech.info · 1 year ago

            Yes, as I said, it’s an EVOLUTION of Markov chains, but the idea is the same. As you pointed out, one major difference is that instead of accounting for only the last 1-5 words, it accounts for a larger context window. The LSTM is just a parlor trick. Read the paper on the original transformer model: https://browse.arxiv.org/pdf/1706.03762.pdf
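
            The core of that paper is scaled dot-product attention, softmax(QKᵀ/√d_k)·V. A minimal numpy sketch (single head, no masking, no training):

            ```python
            import numpy as np

            def softmax(x, axis=-1):
                e = np.exp(x - x.max(axis=axis, keepdims=True))
                return e / e.sum(axis=axis, keepdims=True)

            def attention(Q, K, V):
                """Scaled dot-product attention: every position attends over
                every position in the context at once, which is where the
                larger context window comes from."""
                d_k = Q.shape[-1]
                return softmax(Q @ K.T / np.sqrt(d_k)) @ V

            rng = np.random.default_rng(0)
            seq_len, d_k = 4, 8
            Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
            print(attention(Q, K, V).shape)  # (4, 8)
            ```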

            • Zeth0s@lemmy.world · 1 year ago

              A Markov chain models a process as a transition between states where the transition probabilities depend only on the current state.

              An LLM is less a Markov chain and more similar to a discrete Langevin dynamics, as both have a memory (the attention mechanism for LLMs, inertia for LD) and both have a noise level defined by a parameter (temperature in both cases; the name “temperature” in the LLM context is derived directly from thermodynamics).
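
              Since temperature came up: in LLM sampling it is just a divisor applied to the logits before the softmax, so T < 1 sharpens the distribution and T > 1 flattens it toward uniform. A minimal sketch:

              ```python
              import numpy as np

              def sample_with_temperature(logits, temperature=1.0, rng=None):
                  """Divide logits by T before the softmax; sample the result."""
                  rng = rng or np.random.default_rng()
                  scaled = np.asarray(logits) / temperature
                  probs = np.exp(scaled - scaled.max())
                  probs /= probs.sum()
                  return rng.choice(len(probs), p=probs)

              logits = [2.0, 1.0, 0.1]
              print(sample_with_temperature(logits, temperature=0.5))  # near-greedy
              print(sample_with_temperature(logits, temperature=2.0))  # more random
              ```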

              As far as I remember, the original attention paper doesn’t reference Markov processes.

              I am not saying one cannot explain it starting from a Markov chain; it is just that the claim that we could have done this decades ago, lacking only the horsepower and the data, is wrong. We didn’t have the method to simulate writing. We now have a decent one, and the horsepower to train it on a lot of data

              • Danny M@lemmy.escapebigtech.info · 1 year ago

                I think we’re splitting hairs here. Look, you’re technically correct, but none of what you said disproves my point, does it? Perhaps I should edit my comment to make it even clearer that it’s not EXACTLY the same technology, but I don’t think you’d argue with me that it’s an evolution of it, right?

                • Zeth0s@lemmy.world · 1 year ago

                  Common reinforcement learning methods definitely are.

                  Are LLMs an evolution of a Markov chain? Only in the sense that any method that is not a Markov chain is… I would say not directly. Clearly they share concepts, as do any methods for simulating stochastic processes, and LLMs are definitely more recent than Markov processes. Beyond that, anyone can decide what the inspirations were.

                  What I wanted to say is that, really, we are discussing a genuinely new method in LLMs, not just “old stuff, more data”.

                  This is my main point.