• 0 Posts
  • 411 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • That can’t be good. But I guess it was inevitable. It never seemed like Arc had a sustainable business model.

    It was obvious from the get-go that their ChatGPT integration was a money pit that would eventually need to be monetized, and…I just don’t see end users paying money for it. They’ve been giving it away for free hoping to get people hooked, I guess, but I know what the ChatGPT API costs and it’s never going to be viable. If they built a local-only backend then maybe. I mean, at least then they wouldn’t have costs that scale with usage.

    For Atlassian, though? Maybe. Their enterprise customers are already paying through the nose. Usage-based pricing is a much easier sell. And they’re entrenched deeply enough to enshittify successfully.




  • Yeah, that’s true for a subset of code. But for the rest, the hardest parts happen in the brain, not in the files. Writing readable code is very, very important, especially when you’re working on larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I’ve ever seen.

    There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science and they’re not good at that at high levels (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).

    I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.


  • If I’m verifying anyway, why am I using the LLM?

    Validating output should be much easier than generating it yourself. That’s the intuition behind P≠NP: checking a candidate solution can be far cheaper than finding one.
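
    A toy sketch of that asymmetry, using subset-sum as the example (the function names here are just for illustration): checking a proposed answer is a linear scan, while finding one from scratch is exponential in the worst case.

    ```python
    from itertools import combinations

    def verify(nums, subset, target):
        # Checking a proposed answer: one pass over the candidate subset.
        return all(x in nums for x in subset) and sum(subset) == target

    def generate(nums, target):
        # Finding an answer: brute force over all 2^n subsets in the worst case.
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return list(combo)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    solution = generate(nums, 9)      # brute-force search finds [4, 5]
    assert verify(nums, solution, 9)  # but checking it is trivial
    ```

    Same deal with LLM output: reading and testing a proposed patch is usually far cheaper than writing it, as long as you actually do the checking.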

    This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can’t provide good, accurate citations when applicable.)

    Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.

    From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing are required whether the code was written by an LLM or by some dude who spent two weeks at a coding “boot camp”.