• Tyfud@lemmy.world · 2 months ago

    Even so, he’s wrong. This is the kind of stupid thing someone without any firsthand programming experience would say.

    • rottingleaf@lemmy.world · 2 months ago

      Yeah, there are people who can “in general” imagine how this will happen, but programming is exactly 99% not about “in general”: it’s about specific “dumb” conflicts with objective reality.

      People think that what they generally imagine as the task is the most important part, and since they don’t actually do programming, or anything that requires dealing with those small details, they plainly ignore them, because those conversations and opinions exist in a subjective, bendable reality.

      But objective reality doesn’t bend. Their general ideas, without every little bloody detail, simply won’t work.

    • SparrowRanjitScaur@lemmy.world · 2 months ago

      Not really. It’s doable with ChatGPT right now for programs that have a relatively small scope. If you set very clear requirements and decompose the problem well, it can generate fairly high-quality solutions.

      • Tyfud@lemmy.world · 2 months ago

        This is incorrect. And I’m in the industry, in this specific field. Nobody in my industry, in my field, at my level seriously considers this effective enough to replace their day-to-day coding beyond generating some boilerplate ELT/ETL-type scripts that it is semi-effective at. It still contains multiple errors 9 times out of 10.
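        To pin down the “boilerplate ELT/ETL-type scripts” category: a minimal sketch of that kind of task (hypothetical column names and data; not anyone’s actual production code), which is about the ceiling of what the commenters agree LLMs handle semi-reliably:

```python
import csv
import io

def etl(raw_csv: str) -> list[dict]:
    """Toy extract-transform-load step: parse CSV, normalize names, drop empty rows."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in reader:
        name = (row.get("name") or "").strip()
        if not name:
            continue  # drop rows with no usable name
        out.append({"name": name.title(), "age": int(row["age"])})
    return out

data = "name,age\nalice,30\n  bob ,25\n,99\n"
print(etl(data))  # → [{'name': 'Alice', 'age': 30}, {'name': 'Bob', 'age': 25}]
```

        The point of the example is its size: a single pure function over well-structured input, with no surrounding system to integrate into.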

        I cannot be more clear: the people claiming this is possible are not tenured or effective coders, much less 10x devs in any capacity.

        People who think it generates code of high enough quality to be effective are hobbyists, people who dabble with coding, who understand some rudimentary coding patterns and practices but are not career devs, or not serious career devs.

        If you don’t know what you’re doing, LLMs can get you close, some of the time. But there’s no way it generates anything close to quality enough code for me to use without the effort of rewriting, simplifying, and verifying.

        Why would I want to voluntarily spend my day trying to decipher someone else’s code? I don’t need ChatGPT to solve a coding problem. I can do it, and I will. My code will always be more readable to me than someone else’s, and that’s true by orders of magnitude for AI code gen today.

        So I don’t consider anyone who treats LLM code gen as a viable path forward to be a serious person in the engineering field.

        • SparrowRanjitScaur@lemmy.world · edited · 2 months ago

          It’s just a tool like any other. An experienced developer knows that you can’t apply every tool to every situation. Just like you should know the difference between threads and coroutines and know when to apply them. Or know which design pattern is relevant to a given situation. It’s a tool, and a useful one if you know how to use it.

          • rottingleaf@lemmy.world · 2 months ago

            This is like using a tambourine made of optical discs as a storage solution. A bit better, actually, because punctured discs are no good.

            A full description of what a program does is the program itself, have you heard that? (Except for UB, libraries, … , but an LLM is no better than a human at those either.)
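            A toy illustration of that point (mine, not from the thread): even a one-line spec like “remove duplicates from a list” underdetermines the program, because it says nothing about ordering. Two implementations that both satisfy the English spec disagree on the output:

```python
def dedup_keep_order(xs):
    # One reading of "remove duplicates": keep the first occurrence, preserve order.
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]

def dedup_sorted(xs):
    # Another reading: a set-based one-liner that loses the original order.
    return sorted(set(xs))

xs = [3, 1, 3, 2, 1]
print(dedup_keep_order(xs))  # → [3, 1, 2]
print(dedup_sorted(xs))      # → [1, 2, 3]
```

            Resolving every such ambiguity in the “full description” is exactly what writing the program is.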

      • OmnislashIsACloudApp@lemmy.world · 2 months ago

        right now, not a chance. it’s okay-ish at simple scripts. it’s alright as an assistant for getting a buggy draft of anything even vaguely complex.

        ai doing any actual programming is a long way off.

    • Eyck_of_denesle@lemmy.zip · edited · 2 months ago

      I’ve heard a lot of programmers say it.

      Edit: why is everyone downvoting me lol. I’m not agreeing with them, but I’ve seen and met a lot who do.

      • Tyfud@lemmy.world · 2 months ago

        They’re falling for a hype train then.

        I work in the industry. With several thousand of my peers every day that also code. I lead a team of extremely talented, tenured engineers across the company to take on some of the most difficult challenges it can offer us. I’ve been coding and working in tech for over 25 years.

        The people who say this are people who either do not understand how AI (LLMs in this case) work, or do not understand programming, or are easily plied by the hype train.

        We’re so far off from this existing with the current tech, that it’s not worth seriously discussing.

        There are scripts, snippets of code, that VS Code’s or VS2022’s LLM plugins can help with or bring up. But 9 times out of 10 there are multiple bugs in them.

        If you’re doing anything semi-complex, it’s a crapshoot whether it gets close at all.

        It’s not bad for generating pseudo-code or templates, but it’s designed to generate code that looks right, not code that is right, and there’s a huge difference.
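        A hypothetical example of that “looks right vs. is right” gap (illustrative, not actual LLM output): a leap-year check that most reviewers would wave through on a quick read, next to the rule it silently misses:

```python
def is_leap_plausible(year):
    # Looks right, and is wrong: misses the Gregorian century rules.
    return year % 4 == 0

def is_leap(year):
    # The actual rule: divisible by 4, except centuries,
    # except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_plausible(1900), is_leap(1900))  # → True False
print(is_leap_plausible(2000), is_leap(2000))  # → True True
```

        Both functions agree on the vast majority of years, which is exactly what makes the plausible version dangerous: it passes a casual test and fails rarely, later.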

        AI-generated code is exceedingly buggy, and if you don’t understand what it’s trying to do, it’s impossible to debug, because what it generates is trash-tier code quality.

        The tech may get there eventually, but there’s no way I trust it, or anyone I work with trusts it, or considers it a serious threat, or even a resource beyond the novelty.

        It’s useful for non-engineers to get an idea of what they’re trying to do, but it can just as easily send them down a bad path.

        • magic_smoke@links.hackliberty.org · 2 months ago

          Had to do some bullshit AI training for work. Tried to get the thing to remake cmatrix in Python.

          Yeah, no, that’s not replacing us anytime soon, lmao.

        • rottingleaf@lemmy.world · 2 months ago

          People use visual environments to draw systems and then generate code for specific controllers; that’s common in control systems design and similar fields.

          In that sense there are already situations where people don’t write code directly.

          But that has nothing to do with LLMs.

          For just designing systems in one place, visual environments with blocks might be more optimal.

          • Miaou@jlai.lu · 2 months ago

            And often you still have actual developers reimplementing this shit, because EE majors don’t understand that dereferencing null pointers is bad.
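            The same bug class in this thread’s other language, Python, where the null pointer is `None` (a toy sketch with made-up names; the jab above is about generated C):

```python
def find_user(users, name):
    # Returns None when absent -- Python's version of a null pointer.
    return next((u for u in users if u["name"] == name), None)

def greet_unsafe(users, name):
    # Dereferences the result without checking: TypeError when the user is missing.
    return "hi " + find_user(users, name)["name"]

def greet_safe(users, name):
    # Check for the "null" case before using the value.
    user = find_user(users, name)
    return "hi " + user["name"] if user is not None else "who?"

users = [{"name": "ada"}]
print(greet_safe(users, "ada"))  # → hi ada
print(greet_safe(users, "bob"))  # → who?
```

            The guard is one line; the point is that whoever (or whatever) writes the code has to know the absent case exists at all.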