• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • But I don’t think the software can differentiate between the ideas of defined and undefined characters. It’s all just association between words and aesthetics, right? It can’t know that “Homer Simpson” is a more specific subject than “construction worker” because there’s no actual conceptualization happening about what these words mean.

    I can’t imagine a way to make the tweak you’re asking for that isn’t just a database of every word or phrase that refers to a specific known individual, which every user’s prompt gets checked against — and I can’t imagine that’d be worth the time it’d take to create.
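
    Just to make concrete why that approach is impractical, here’s a rough toy sketch of what such a check would look like. Everything here is made up for illustration — the names, the function, the matching strategy — and a real system would need far more than naive substring matching (aliases, misspellings, descriptions that imply a character without naming them, etc.):

    ```python
    # Toy sketch of a "known individuals" blocklist check -- purely
    # hypothetical, not how any real image generator works.

    KNOWN_INDIVIDUALS = {
        "homer simpson",
        "mickey mouse",
        # ...and so on for every named character and real person,
        # which is exactly why maintaining this list is hopeless.
    }

    def prompt_mentions_known_individual(prompt: str) -> bool:
        """Naive substring check of a prompt against the blocklist."""
        lowered = prompt.lower()
        return any(name in lowered for name in KNOWN_INDIVIDUALS)

    print(prompt_mentions_known_individual("homer simpson eating a donut"))        # True
    print(prompt_mentions_known_individual("a construction worker eating a donut"))  # False
    ```

    Even this trivial version only catches exact name matches; it says nothing about "yellow cartoon dad from Springfield," which is the whole problem.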



  • Of course, one reason I might mind is if the machine uses what it learns from reading my work to produce work that could substitute for my own. But at the risk of hubris, I don’t think that’s likely in the foreseeable future. For me, a human creator’s very humanity feels like the ultimate trump card over the machines: Who cares about a computer’s opinion on anything?

    This is really naïve. A huge number of people simply don’t care about creative works in those terms. We’re all encouraged to treat things as content to be consumed and discarded, not as work to be thought about in terms of what it’s expressing and why. The only value of a creator in that framework is that the creator fuels the machine, and AI can fuel the machine. Not especially well at the moment, but give it some time.


  • Because you have to have specific knowledge about how AI works to know this is a bad idea. If you don’t have specific knowledge about it, it just sounds futuristic because AI is like a Star Trek thing.

    This current AI craze is largely as big a deal as it is because so few people, including the people using it, have any idea what it is. A cousin of mine works for a guy who asked an AI about a problem, and it cited an article about how to fix whatever the problem was, I forget. He asks my cousin to implement the solution proposed in that article. My cousin searches for it and discovers the article doesn’t actually exist, so he says that. And after many rounds of back and forth — the boss saying “this is the name of the article, this is who wrote it” and my cousin saying “that isn’t a real thing, and that author did write about some related topics, but there’s no actionable information there” — the boss becomes convinced that this is a John Henry situation where my cousin is trying to make himself look more capable than the AI he feels threatened by, and the argument ends with a shrug and an “Okay, if it’s so important to you then we can do something else, even though this totally would have worked.”

    There really needs to be large-scale education on what language models are actually doing to prevent people from using them for the wrong purposes.
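
    The core of that education is pretty simple: a language model just emits statistically likely next words, with no notion of whether the result refers to anything real. A deliberately tiny toy illustration (not a real LM — the word table is invented for the example) shows how plausible-sounding, nonexistent article titles fall out of that naturally:

    ```python
    import random

    # Toy "model": it only knows which word tends to follow which.
    # There is no concept of truth or reference anywhere in here.
    BIGRAMS = {
        "fixing": ["network", "database"],
        "network": ["outages", "latency"],
        "database": ["outages", "latency"],
    }

    def generate_title(seed: str, length: int = 3) -> str:
        """Chain likely next words together into a plausible title."""
        words = [seed]
        while len(words) < length and words[-1] in BIGRAMS:
            words.append(random.choice(BIGRAMS[words[-1]]))
        return " ".join(words).title()

    # Prints something like "Fixing Network Outages" -- a perfectly
    # plausible article title that no one ever has to have written.
    print(generate_title("fixing"))
    ```

    A real model does this with vastly more context and fluency, which is exactly why its invented citations sound so convincing.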