I’ve used spicy auto-complete, as well as agents running in my IDE, in my CLI, and server-side on GitHub. I’ve been experimenting enough with LLM/AI-driven programming to have an opinion on it. And it kind of sucks.
We’re replacing that journey, and all the learning, with a dialogue with an inconsistent idiot.
I like this about it, because it gets me to write down and organize my thoughts on what I’m trying to do and how. Otherwise I would just be writing code and trying to maintain the higher-level outline in my head, which usually has big gaps I don’t notice until I’ve spent way too long spinning my wheels, or otherwise fails to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don’t have to use it; you can write it yourself at that point, having thought through what’s wrong with the AI’s approach and how what you requested should be done instead.
I oppose AI in its current incarnation for almost everything, but you have a great point. Most of us are familiar with Rubber Duck Programming, which originated with R. Feynman, who’d recount how he learned the value of reframing problems in terms of how you’d describe them to other people. IIRC, the story he’d tell is that at one place he worked, he was separated from a colleague by several floors and had to take an elevator. He’d be thinking about how he was going to explain the problem to the colleague while waiting for and riding in the elevator, and in the process would come to the answer himself. I’ve never seen Rubber Duck Programming give credit to Feynman, but that’s the first place I heard about the practice.
Digression aside, AI is probably as good as, or better than, a rubber duck for this. Maybe it won’t give you any great insights, but having an active listener is probably beneficial. That said, you could probably get as much value out of Eliza while burning far less rainforest.
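To make the Eliza comparison concrete, here is a toy sketch of the kind of pattern-reflecting “active listener” I mean. It is purely illustrative, written in Python, and nothing like the original ELIZA script; all the names and prompts are made up. A few dozen lines of pronoun swaps and canned questions get you most of the rubber-duck effect.

    # A toy, ELIZA-style "active listener" (illustrative only, not the real
    # ELIZA script): it reflects your statement back as a question using
    # simple pronoun swaps and a few canned prompts.
    import random

    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "mine": "yours",
        "am": "are", "i'm": "you're", "i've": "you've",
    }

    PROMPTS = [
        "Why do you say {0}?",
        "What makes you think {0}?",
        "How would you explain {0} to a colleague?",
    ]

    def reflect(statement: str) -> str:
        # Swap first-person words for second-person ones so the echo reads
        # like a question about *your* problem.
        words = statement.lower().rstrip(".!?").split()
        return " ".join(REFLECTIONS.get(w, w) for w in words)

    def respond(statement: str) -> str:
        return random.choice(PROMPTS).format(reflect(statement))

    if __name__ == "__main__":
        print("Describe the problem. Blank line to quit.")
        while (line := input("> ").strip()):
            print(respond(line))

The point being, the bar for a useful listener is very low; the value comes from you having to articulate the problem, not from anything the listener says back.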