• 0 Posts
  • 38 Comments
Joined 2 years ago
Cake day: July 3rd, 2023




  • I study AI, and I’ve developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren’t detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I give it small, atomized tasks, and I give its output a once-over with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think they are one of the most important AI breakthroughs of the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting its own wishes and desires onto a pile of linear algebra. Too often a tool (one of many) is conflated with the one and only solution, a silver bullet, and it’s not.

    This leads to my biggest fear for the AI field of computer science: reality won’t live up to the hype. When this inevitably happens, companies, CEOs, and ordinary people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML techniques will be stopped, and real academic research will dry up.





  • It’s also not all-or-none. Someone who is otherwise really interested in learning the material may just skate through on AI in a class that is uninteresting to them but required. Or life might come up for someone with a particularly strict instructor who doesn’t accept late work, and using AI is just a means of not falling behind.

    The ones who are running everything through an LLM are stupid and ultimately shooting themselves in the foot. The others may just be taking a shortcut through some busy work or ensuring a life event doesn’t tank their grade.


  • I see both points. You’re totally right that for a company, it’s just the result that matters. However, to Bradley’s point, since he’s specifically talking about art direction, the journey is important insofar as it leads to a passable result. I’ve only dabbled in 2D and 3D art, but converting a design to 3D requires an understanding of the geometry of things and how they look from different angles. Some things look cool from one angle and really bad from another. Doing the real work lets you figure that out and either abandon a design before too much work is put in or modify it so it works better.

    When it comes to software, though, I’m kinda on the fence. I like to use AI for small bits of code and knocking out boilerplate so that I can focus on making the “real” part of the code good. I hope the real, creative, and hard parts of a project aren’t being LLM’d away, but I wouldn’t be surprised if that’s a mandate from some MBA.


  • Yeah, it’s not technically impossible to stop web scrapers, but it’s difficult to build a lasting, effective solution. One easy way is to block the scraper’s user-agent, assuming it sends an identifiable one, but that can be easily circumvented. Another easy and somewhat more effective way is to block scrapers’ and caching services’ IP addresses, but that turns into a game of whack-a-mole. You could also put content behind a paywall or login and decline to approve a certain org, but that will only work for certain use cases, and it too is easy to circumvent. If stopping a single org’s scraping is the hill to die on, good luck.
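    The user-agent approach can be sketched in a few lines; this is a minimal illustration, where the blocklist entries and the `is_blocked` helper are hypothetical examples, not any real blocklist or framework API:

```python
# Sketch of the "easy but easily circumvented" user-agent check.
# The substrings below are made-up examples; a real deployment would
# match against the scraper's actual advertised User-Agent string.
BLOCKED_AGENT_SUBSTRINGS = ["ExampleBot", "scrapy"]

def is_blocked(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known scraper."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in BLOCKED_AGENT_SUBSTRINGS)

print(is_blocked("Mozilla/5.0 (compatible; ExampleBot/1.0)"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0)"))             # False
```

    A scraper only has to change one header to defeat this, which is why it ends up paired with IP blocking and still turns into whack-a-mole.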

    That said, I’m all for fighting ICE, even if it’s futile. Just slowing them down and frustrating them is useful.




  • But there are different kinds of temporary. Temporary because the code got updated or upgraded, or new and better software got implemented, feels fine. It feels like your work was part of the never-ending march of technical progress. Temporary because it gets ripped out in favor of a different, inferior suite hits hard.

    If my code gets superseded by someone else’s complete rewrite that is better, then I’m all for it. If my code gets thrown out because we’re switching to a different, inferior system that is completely incompatible with my work, then that just hits like a ton of bricks.







  • The thing I’m heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/“leadership” group. They actually think these models are intelligent. I’ve heard people say, “Well, just ask the AI,” meaning ask ChatGPT. Anyone who actually does that and thinks it gives them a leg up is kidding themselves. If they outsource their thinking and coding to an LLM, they might get ahead quickly, but they will fall behind just as quickly because the quality will be middling at best. They don’t understand how to best use the technology, and they will end up hanging themselves with it.

    At the end of the day, all AI is just stupid number tricks. They’re very fancy, impressive number tricks, but number tricks all the same, ones that happen to be useful. Relying solely on AI will lead to the downfall of an organization.