• ferrule@sh.itjust.works

    The issue is twofold.

    First, the scope of the project matters a lot. When I am working on a web app, even the most complicated project is still 90% boilerplate. You write some RESTful code on a framework, do CRUD, and build a UI that renders from data. No matter what you are making, let's be honest: it's not novel. This is why vibe coding can exist. Most of your unit tests can be derived from the types in your function signatures. Do a little tracing through functions and AI can easily make your code less fragile.
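    To make the "tests from types" point concrete, here is a minimal sketch. `apply_discount` is a hypothetical CRUD-style helper, not from any real codebase; the point is that its boundary tests follow almost mechanically from the annotated parameter types, with no business knowledge required.

    ```python
    import unittest
    from typing import get_type_hints

    # Hypothetical helper: the kind of boilerplate function a web app is full of.
    def apply_discount(price: float, percent: float) -> float:
        """Return price reduced by percent (0-100)."""
        return price * (1 - percent / 100)

    class TestApplyDiscount(unittest.TestCase):
        def test_signature(self):
            # The annotations alone tell us what goes in and what comes out.
            hints = get_type_hints(apply_discount)
            self.assertEqual(hints["return"], float)
            self.assertIsInstance(apply_discount(100.0, 25.0), float)

        def test_boundaries(self):
            # Edge cases derivable from the parameter domain, not the domain logic.
            self.assertEqual(apply_discount(100.0, 0.0), 100.0)
            self.assertEqual(apply_discount(100.0, 100.0), 0.0)

    if __name__ == "__main__":
        unittest.main()
    ```

    Nothing here needed the business context, which is exactly why an AI can churn these out.
    
    
    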

    When you are working on anything more complicated, making the code better requires you to actually grok the business requirements. Edge cases aren't as simple. The reasons for doing things a specific way aren't so superficial, especially once you start writing optimizations the compiler doesn't do automatically.
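    One classic example of an optimization a compiler won't make for you (my illustration, not from the comment above): compilers can't reassociate floating-point arithmetic without changing results, so accuracy fixes like Kahan compensated summation have to be written, and understood, by hand.

    ```python
    # Summing many small values into a large running total loses low-order bits.
    def naive_sum(xs):
        total = 0.0
        for x in xs:
            total += x
        return total

    # Kahan summation carries a compensation term to recover those lost bits.
    # No compiler will rewrite naive_sum into this for you.
    def kahan_sum(xs):
        total = 0.0
        c = 0.0  # running compensation for lost low-order bits
        for x in xs:
            y = x - c
            t = total + y
            c = (t - total) - y  # what was lost in the addition
            total = t
        return total

    values = [1.0, 1e-16, 1e-16, 1e-16, 1e-16] * 1000
    # naive_sum drops every 1e-16 term once the total reaches 1.0;
    # kahan_sum preserves them, landing closer to the true 1000 + 4e-13.
    ```

    Knowing when this matters, and when naive summation is fine, is exactly the kind of judgment that requires understanding the problem, not just the syntax.
    
    
    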

    The second issue is learning material. The majority of the code we write is buggy, not just in range handling but in how it solves the problem. There is a reason we don't typically write code once and never go back to it.

    Now think about when you, as a human, go back over old code. The commit log and blame usually don't give a great picture of why a change was needed, not unless the dev was really detailed in their documentation. And even then, it requires domain knowledge and conceptualization that AI still can't do.

    Even when teaching humans to be better developers, we suck at it, and that's with a shared grasp of the language and the business needs. That is a hurdle we still need to cross with AI.