• plantfanatic@sh.itjust.works · 3 days ago

    What part of customize did you not understand?

    And lots of llms fit on personal computers, dude. Do you even know what different llms there are…?

    One for programming doesn’t need all the fluff of books and art, so it’s a manageable size. Llms are customizable to any degree; you can even use your own data library for the context data!
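
    The “use your own data library for the context” idea can be sketched as simple retrieval-into-context. This is a hypothetical illustration, not any real LLM API; `LIBRARY`, `retrieve`, and `build_context` are made-up names:

```python
# Hypothetical sketch: pull snippets from your own data library into the
# prompt context. No real model or retrieval library is used here.

LIBRARY = {
    "style": "All functions use snake_case.",
    "testing": "Every module ships with pytest tests.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword match, standing in for a real retriever."""
    return [text for key, text in LIBRARY.items() if key in query.lower()]

def build_context(query: str) -> str:
    # The "customization": found snippets are simply prepended to the query.
    return "\n".join(retrieve(query) + [query])

ctx = build_context("What is our testing policy?")
```

    Whatever the retriever finds becomes plain text in front of the question; the model itself is not modified by any of this.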

    • BradleyUffner@lemmy.world · 3 days ago

      What part about how LLMs actually work do you not understand?

      “Customizing” is just dumping more data into its context. You can’t actually change the root behavior of an LLM without rebuilding its model.
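
      That point can be sketched in a few lines. Everything here is hypothetical (a toy dict standing in for billions of parameters), not a real model:

```python
# Hypothetical sketch: "customizing" only grows the input text;
# the model's parameters are never written to during generation.

FROZEN_WEIGHTS = {"layer_0": [0.12, -0.98, 0.44]}  # toy stand-in for the model

def build_prompt(custom_data: str, user_message: str) -> str:
    # All "customization" happens here, in plain text.
    return f"{custom_data}\n\n{user_message}"

def generate(prompt: str) -> str:
    # A real model would run the prompt through FROZEN_WEIGHTS;
    # the key point is that generate() only *reads* the weights.
    return f"response conditioned on {len(prompt)} chars of context"

before = dict(FROZEN_WEIGHTS)
reply = generate(build_prompt("Our codebase uses tabs, not spaces.", "Write a loop."))
assert FROZEN_WEIGHTS == before  # weights untouched, no matter the context
```

      No amount of context changes `FROZEN_WEIGHTS`; only retraining (rebuilding the model) does.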

      • plantfanatic@sh.itjust.works · 3 days ago

        “Customizing” is just dumping more data into its context.

        Yes, which would fix the incorrect coding issues. It’s not an llm issue, it’s too much data. Or remove the context causing that issue. These require a little legwork and knowledge to make useful, like anything else.

        You really don’t know how these work do you?

        • BradleyUffner@lemmy.world · 3 days ago

          You do understand that the model weights and the context are not the same thing right? They operate completely differently and have different purposes.

          Trying to change the model’s behavior using instructions in the context is going to fail. That’s like trying to change how a word processor works by typing into the document. Sure, you can kind of get the formatting you want if you manhandle the data, but you haven’t changed how the application works.

          • SchmidtGenetics@lemmy.world · 3 days ago

            Why are you so focused on just the training? The data is ALSO the issue.

            Of course if you ignore one fix, that works, of course you can only cry it’s not fixable.

            But it is.

            • BradleyUffner@lemmy.world · 3 days ago

              Why are you so focused on just the training?

              Because I work with LLMs daily. I understand how they work. No matter how much I type at an LLM, its behavior will never fundamentally change without regenerating the model. It never learns anything from the content of the context.

              The model is the LLM. The context is the document of a word processor.

              A Jr developer will actually learn and grow into a Sr developer, and will retain that knowledge as they move from job to job. That is fundamentally different from how an LLM works.

              I’m not anti-AI. I’m not “crying” about their issues. I’m just discussing them from a practical standpoint.

              LLMs do not learn.
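
              The no-retention point, sketched as two independent sessions over the same frozen model (all names here are hypothetical):

```python
# Hypothetical sketch: each conversation starts from the same frozen
# model, so nothing "learned" in one session carries over to the next.

class Session:
    def __init__(self):
        self.context = []          # per-conversation scratchpad only

    def tell(self, fact: str):
        self.context.append(fact)  # visible to this session alone

    def knows(self, fact: str) -> bool:
        return fact in self.context  # lookup hits context, never weights

s1 = Session()
s1.tell("use tabs, not spaces")

s2 = Session()                     # a fresh conversation: empty context
```

              `s1` “knows” the fact only for as long as that conversation lasts; `s2` starts blank, the way a new chat does.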

              • SchmidtGenetics@lemmy.world · 3 days ago

                Because I work with LLMs daily. I understand how they work.

                Clearly you don’t, because context data modifies how the training data extrapolates.

                You can use something while not being educated on how to use it, and just using something does not mean you understand how it works. Your comments have made it QUITE clear that you have no idea.

                People who just whinge about AI and pretend they know how it works are the worst kind of people right now.

                • BradleyUffner@lemmy.world · 3 days ago

                  Your comments have made it QUITE clear that you have no idea.

                  Odd, I can say the exact same thing about your comments on the subject.

                  We are clearly at an impasse that won’t be solved through this discussion.

        • TJA!@sh.itjust.works · 3 days ago

          But

          All the fluff from books and art

          Is not inside the context; that comes from training. So you know how an llm works?

          • SchmidtGenetics@lemmy.world · 3 days ago

            Where do you think the errors are coming from? From data bleed-over: the word “coding” shows up in books, so yes, the context would incorrectly pull book data too.

            Or do you not realize coding books exist as well…? And would be in the dataset.

              • SchmidtGenetics@lemmy.world · 3 days ago

                Because that’s how they work…? It’s not an actual physical book… you don’t seriously think this, do you…? It’s the text data inside, like any other text file it would use for context.

                Where do you think it gets its data from…?

                • TJA!@sh.itjust.works · 3 days ago

                  From the training.

                  I will stop replying now, because you clearly need to learn more about llms.

                  Here, have a fish 🐟

      • SchmidtGenetics@lemmy.world · 3 days ago

        If it’s constantly making an error, fix the context data, dude. What about an llm/ai makes you think this isn’t possible…? Lmfao, you just want to bitch about ai, not comprehend how it works.