No matter how well reasoned or allegedly fit for purpose something claims to be, we shouldn’t trust those promises, especially not from people we don’t know. That doesn’t end well, whether it’s the free candy van or cybersecurity. Trust like that has been responsible for plenty of attacks across various vectors, and for plenty of projects going wrong.
Honestly, cleaning up legacy shit code is already a thing: it’s called consulting.
It won’t be a dedicated career field. The AI bubble is at an all-time high, and right now it works. What people will realize is that there is more to a piece of software than the initial code or prototype. AI is amazing at prototyping: it’s fast, and it gives you the dopamine rush of bringing something online quickly. What AI is not good at is producing production-ready code. Maintainability, security, and operations of AI slop code suck. Massively. Adding features to a vibe-coded codebase with AI sucks, and all of this is amplified exponentially if the person vibe coding does not know their shit.
The question with vibe coding is not if it will break, but when. And when it breaks, it does not matter whether it’s AI-generated or just bad code: it’s a broken app that needs fixing, and that’s just your regular software engineering job.