

Some of us are old enough to remember how cats and other creatures would get killed by old car engine designs with the large open fan driven by a belt. They would sleep on that fan housing and not realize the danger when the car was started. So there have been improvements that have helped, maybe not necessarily for that reason. For what it’s worth, I’m on the side of minimizing cars for so many reasons, but it has been worse for animals in the past.



I’ve only found success with LLM code (local models) on smaller, more direct sections. Probably because it’s pulling the most-repeated solutions to such queries from its training data. So for that it’s like a much better Google lookup filter that usually gets to the point faster. But for longer code (and it always wants to give you full code) it will start to drift and pull things out of the void, much like hallucination in creative text, except in code it’s obvious.
Because it doesn’t understand what it’s telling you. Again, it’s a great way to mass-filter Stack Overflow and Reddit answers, but remember that in the past, searching through those could work well or be a nightmare. Just like then, don’t take any answer and just plug it in; understand why it might or might not be a working solution.
It’s funny, I’ve learned a lot of my programming knowledge through the decades by piecing things together and, in debugging my own or others’ code, figuring out what works. Not the greatest way to do it, but I learn best through necessity rather than without a purpose. But with LLM code that goes wild, debugging has its limits, and there have been minor things I’ve just thrown out and started over because the garbage I was handed was total BS wrapped up in colorful paper.