

The scary part is that it already somewhat is.
My friend is currently job hunting (or at least considering it) because their company added AI to the workflow and it now does everything past the initial issue report.
The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn't work -> AI lints the final product once it works -> AI submits the patch as a pull request.
Their job has been downscaled from being the one who organizes, assigns, and works on code to an over-glorified code auditor who looks at pull requests and says "yes, this is good" or "no, send this back in."
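To make that pipeline concrete, here's a rough sketch in Python of what that kind of orchestration could look like. To be clear, this is purely my own guess at the shape of it, not their actual tooling; every function name here is a hypothetical placeholder.

```python
# Hypothetical sketch of the issue -> patch -> test -> lint -> PR loop described above.
# None of these functions are real tools; they just mark where an AI step would sit.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Issue:
    title: str
    body: str
    tags: list[str] | None = None


def ai_format_and_tag(issue: Issue) -> Issue:
    # Placeholder: an LLM would normalize the report and attach labels.
    issue.tags = ["bug", "auto-triaged"]
    return issue


def ai_generate_patch(issue: Issue) -> str:
    # Placeholder: an LLM would produce a diff addressing the issue.
    return f"--- patch for: {issue.title} ---"


def tests_pass(patch: str) -> bool:
    # Placeholder: run the test suite against the candidate patch.
    return True


def ai_lint(patch: str) -> str:
    # Placeholder: auto-format / lint the final diff once it works.
    return patch.strip()


def submit_pull_request(patch: str) -> None:
    # Placeholder: open a PR; a human auditor gives the final yes/no.
    print("PR opened for human review:\n", patch)


def run_pipeline(issue: Issue, max_attempts: int = 3) -> None:
    issue = ai_format_and_tag(issue)
    for _ in range(max_attempts):
        patch = ai_generate_patch(issue)
        if tests_pass(patch):  # "throws it back if it doesn't work"
            submit_pull_request(ai_lint(patch))
            return
    print("Pipeline gave up; escalating to a human.")


run_pipeline(Issue(title="Login button unresponsive", body="Clicking does nothing."))
```

The only human touchpoint left in a loop like that is the review at the very end, which is exactly the part my friend is now stuck with.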

I do agree that LLM-generated code is often inaccurate, which is why they need the "throw it back in" stage and a human eye on the final result.
They told me their main concern is that they aren't sure they'll understand the code the AI is spitting out well enough to properly audit it (which is fair), and of course any issue with the code will fall on them, since it's their job to give the final "yes, this is good."