- cross-posted to:
- technology@lemmy.world
cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite: developers still have to go in and find where the problems are, resulting in a net slowdown of development rather than a productivity gain.
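To make “close, but not quite, functional” concrete, here is a hypothetical sketch (my example, not the article’s) of the kind of plausible-looking helper an assistant might emit: it reads fine and works on evenly divisible inputs, but silently drops the last partial chunk.

```python
def chunk(items, size):
    """Split items into consecutive chunks of length `size`."""
    chunks = []
    # Subtle bug: the range stops before the final partial chunk, so
    # chunk([1, 2, 3, 4, 5], 2) returns [[1, 2], [3, 4]] and drops [5].
    for i in range(0, len(items) - size + 1, size):
        chunks.append(items[i:i + size])
    return chunks


def chunk_fixed(items, size):
    # The fix: iterate all the way to len(items) so the tail survives.
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Nothing crashes and most quick tests pass, which is exactly why a reviewer has to slow down and hunt for the edge case.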
Then there’s the issue that there is no agreed-upon way of tracking productivity gains in the first place - a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit to AI in order to realize the gains they’ve been promised.
“Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.
I have a friend who is a professional programmer. They think AI will generate lots of work fixing the shit code it creates. I guess we’ll see.
Billions of dollars spent, unimaginable amounts of power burned, tons of programmers fired, millions upon millions of lines of code copied without license or credit, nasty bugs and security issues introduced by trusting the AI or just being lazy. Was it worth it? Many programmers become disposable once they have to use AI: if everyone is prompting from now on, “all” programmers are the same and differ only in which model they use.
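As a hedged illustration of the security point (my example, not the commenter’s): a classic pattern assistants keep reproducing is building SQL by string interpolation, which is trivially injectable; the parameterized form is the safe equivalent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The lazy/AI-suggested pattern: user input interpolated into SQL.
    # name = "x' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, no injection.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```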
AI = productivity increases, quality decreases… oh wait: AI = productivity seems to increase, quality does decrease.
They say the same about Scrum.
“It doesn’t work in your company because you haven’t fully implemented all aspects of Scrum.”
Coincidentally it costs about a gazillion dollars to become fully Scrum certified.
> “Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.
It also gives these shovel peddlers an excuse: “Oh, you’re not seeing gains? Are you even ~~lifting~~ AI-ing, bro? You probably have some employees not using enough AI; blame them instead of us.”

LLMs are no different than any other technology: when the people making the decision to bring in the tech aren’t the people doing the work, you get shit decisions. You get LLMs, or Low Code/No Code platforms, or cloud migrations. Technical people make mistakes too, but any decision made with the input of salespeople will be made on the glossiness of the brochures, and it will be bad. The same goes for any technology decision made with the Gartner Magic Quadrant: fuck the GMQ. Any decision process using it smells bad.
In other news, water is wet.
No, they say that you can’t make bricks with water, therefore water is useless shit.
also, water isn’t wet.