• chicken@lemmy.dbzer0.com · 6 hours ago

    We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.

    I like this about it, because it gets me to write down and organize my thoughts on what I’m trying to do and how. Otherwise I would just be writing code while trying to maintain the higher-level outline of it in my head, which usually has big gaps I don’t notice until I’ve spent way too long spinning my wheels, or which otherwise fails to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don’t have to use it; you can write it yourself at that point, after having thought about what’s wrong with the AI’s approach and how what you requested should be done instead.

  • melfie@lemy.lol · 16 hours ago

    One major problem I have with Copilot is it can’t seem to RTFM when building against an API, SDK, etc. Instead, it just makes shit up. If I have to go through line by line and fix everything, I might as well do it myself in the first place.

    • MinFapper@startrek.website · 6 hours ago

      It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.

      I usually start by asking Claude Code to search the internet for current best practices for whatever framework I’m using. Then, if I ask it to build something with that framework while that summary is still in the context window, it’ll actually follow it.

    • Pennomi@lemmy.world · 9 hours ago

      Or even distinguish between two versions of the same library. Absolutely stupid that LLMs default to writing deprecated code just because it was more common in the training data.

  • floofloof@lemmy.ca · edited · 17 hours ago

    Yeah, the places to use it are (1) boilerplate code that is so predictable a machine can do it, and (2) with a big pinch of salt for advice when a web search didn’t give you what you need. In the second case, expect at best a half-right answer that’s enough to get you thinking. You can’t use it for anything sophisticated or critical. But you now have a bit more time to think that stuff through because the LLM cranked out some of the more tedious code.

    • Corngood@lemmy.ml · 16 hours ago

      (1) boilerplate code that is so predictable a machine can do it

      The thing I hate most about it is that we should be putting effort into removing the need for boilerplate. Generating it with a non-deterministic 3rd party black box is insane.

      • Pennomi@lemmy.world · 16 hours ago

        Hard disagree. There is a certain level of boilerplate that is necessary for an app to do everything it needs. Django, for example, requires you to specify model files, admin files, view files, form files, etc. that all look quite similar but are dependent on your specific use case. You can easily have an AI write this boilerplate for you because the files are strongly related to one another, but they can’t easily be distilled down to something simpler because there are key decisions that need to be specified.

            • Feyd@programming.dev · 11 hours ago

              Easier and quicker, maybe, but finding subtle errors in code that looks like it should be extremely hard to fuck up, just because someone used an LLM for it, is getting really fucking old already, and I shudder at all the errors like that which are surely being missed. “It will be reviewed” is obviously not sufficient.

          • Pennomi@lemmy.world · 15 hours ago

            Because it’s not worth inventing a whole tool for a one-time use. Maybe you’re the kind of person who has to spin up 20 similar Django projects a year and it would be valuable to you.

            But for the average person, it’s far more efficient to just have an LLM kick out the first 90% of the boilerplate and code up the last 10% themselves.

            • AdamBomb@lemmy.sdf.org · 6 hours ago

              “Not worth inventing”? Do you have any idea how insanely expensive LLMs are to run? All for a problem whose solution is basically static text with a few replacements?

              • Pennomi@lemmy.world · 5 hours ago

                You’re focused too much on the “inventing” and not enough on the “one time”. A flexible solution can find value even if it’s otherwise inferior to a rigid one.

                • AdamBomb@lemmy.sdf.org · edited · 5 hours ago

                  If it’s 90% boilerplate like you were saying above, how flexible does it need to be, really? If it only needs to get 90% there, surely a general-purpose scaffolding tool could do the job just as well.
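
To put that in perspective, a “static text with a few replacements” scaffolder really is about this much code. A minimal sketch using only the standard library (the template strings and file names here are made up for illustration):

```python
from pathlib import Path
from string import Template

# Hypothetical templates: static text with a few $name-style substitutions
TEMPLATES = {
    "models.py": "class ${model}(models.Model):\n    name = models.CharField(max_length=100)\n",
    "admin.py": "admin.site.register(${model})\n",
}


def scaffold(app_dir: str, model: str) -> None:
    """Render each template with the given model name and write it out."""
    out = Path(app_dir)
    out.mkdir(parents=True, exist_ok=True)
    for filename, text in TEMPLATES.items():
        (out / filename).write_text(Template(text).substitute(model=model))
```

Deterministic, instant, and free to run; the trade-off, as the thread notes, is that it only handles the decisions its templates anticipate.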

            • Feyd@programming.dev · 15 hours ago

              I’d rather use a tool bundled with the framework that outputs code up to the current standards and patterns than a tool that will pull defunct patterns from its training data, make shit up, and make mistakes that are easily missed by a reviewer glossing over it.

              • Pennomi@lemmy.world · 15 hours ago

                I honestly don’t think such a generic tool is possible, at least in a Django context. The boilerplate is about as minimal as is possible while still maintaining the flexibility to build anything.

                • mesa@piefed.social · edited · 13 hours ago

                  I just use https://github.com/cookiecutter/cookiecutter and call it a day. No AI required. It probably saves me a good 4 hours at the beginning of each project.

                  Almost all my projects have the same kind of setup nowadays. But that’s just work. For personal projects, I use a subset-ish. There’s a custom Admin module that I use to turn ALL classes into Django admin models; it takes one import, boom, done.

          • Pennomi@lemmy.world · 16 hours ago

            Sure but it’s a lot less flexible. As much hate as they get, LLMs are the best natural language processors we have. By FAR.

  • Riskable@programming.dev · 15 hours ago

    I’m having the opposite experience: It’s been super fun! It can be frustrating when the AI can’t figure things out, but overall I’ve found it quite pleasant when using Claude Code (and ollama gpt-oss:120b for when I run out of credits, haha). The codex extension and the entire range of OpenAI gpt5 models don’t provide the same level of “wow, that just worked!” or “wow, this code is actually well-documented and readable.”

    Seriously: if you haven’t tried Claude Code (in VS Code via the extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.

    Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁

    • TehPers@beehaw.org · 3 hours ago

      Used Claude 4 for something at work (not much of a choice here, and that team said they generate all their code). It’s sycophantic af. Between “you’re absolutely right” and it confidently making stuff up, I’ve wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to solve the type errors and eslint errors.

      There are some times it was faster to use, sure, but only because I don’t have the time to learn the APIs myself, due to having to deliver an entire feature in a week by myself (the rest of the team doesn’t know frontend) and other shitty high-level management decisions.

      At the end of the day, I learned nothing by using it, the tests pass but I have no clue if they test the right edge cases, and I guess I get to merge my code and never work on this project again.

    • mesa@piefed.social · edited · 10 hours ago

      I just hate that they stole all that licensed code.

      It feels so wrong that people are paying to get access to code…that others put out there as open source. You can sometimes see the GPL violations when it outputs code from Doom or other such projects: some function written expressly for that project, only to be used to make Microsoft shareholders richer, and to eventually remove the developer from development. It’s really sad, and it makes me not want to code on GitHub. And I’ve been on the platform for 15+ years.

      And there’s been an uptick in malware libraries propagating via Claude. One such example: https://www.greenbot.com/ai-malware-hunt-github-accounts/

      At least with the open source models, you are helping propagate actual free (as in freedom) LLMs and info.

      • locuester@lemmy.zip · 12 hours ago

        It feels so wrong that people are paying to get access to code

        We pay for access to a high performance magic pattern machine. Not for direct access to code, which we could search ourselves if we wanted.

        • mesa@piefed.social · edited · 10 hours ago

          I disagree.

          There’s nothing magical about copying code, throwing it into a database, and building an LLM from the mass of data. Moreover, it’s not ethical, given the amount of data they had to pull and the licenses Microsoft had to ignore to make this work. Heck, my little server got hit by the AI web crawlers a while back, and they DDoSed my tiny little site. You can look up their IP addresses; some of them look at robots.txt, but a VAST majority did not.

          There are a metric ton of lawsuits hitting the AI companies, and they are not winning in all countries: https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/

          • locuester@lemmy.zip · 8 hours ago

            I’m simply saying that I’m not paying for access to the code. I’m paying for access to the high performance magic pattern machine.

            I can and have browsed code all day for 35 years. Magic pattern machine is worth paying for to save time.

            To be clear, stackoverflow and similar sites have also been worth paying for. Now this is the latest thing worth paying for.

            I understand you have ethical concerns. But that doesn’t negate the usefulness of magic pattern machine.

      • Riskable@programming.dev · 12 hours ago

        A pet project… a web novel publishing platform. It’s very fancy: it uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it’s built on Vue 3 (with VueUse and PrimeVue) with Python 3.13 on the backend using FastAPI.

        The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don’t know TipTap all that well, and I really want to see what AI code assist tools are capable of.

        I’ve evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it’s such shit; don’t even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

        • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
        • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (vite + TypeScript + oh-god-what-else-now haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f’ing reason. It’ll randomly change your variables around too, so you really have to keep an eye on it. It’s hard to describe the types of abominations it’ll create if you let it, but here’s an example: In a bash script I had something like SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say “you have to be careful,” I mean it!
        • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven’t found it to make random hallucinations/total bullshit changes the way gpt5 does.
        • gpt-oss:20b: Surprisingly good! Also, faster than you’d think it’d be, even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It’s like 90% of the way there. A few generations of PC hardware/GPUs and we won’t need the cloud anymore.
        • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b, so why bother? They’re all about the same (for my use cases).

        For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.