

I think you mean Rust, old timer 😁
The AI said that trying to reason with you is a waste of precious tokens.
Over a billion dollars to fight… (checks notes) the concept of people not being 100% aligned masculine/straight/male or feminine/straight/female.
WTF‽ What made them think that the 0.5%-2% of the population (depending on who you ask) that land at the edges of the sex spectrum are worth that level of hate?
These folks are all giving great advice but also let us know when you’re ready to really fuck around and have fun with your Linux superpowers 😀
You, in practically no time at all: “Nearly everything is working great! Now I want to make my desktop change its background to NASA’s picture of the day while also putting all my PC’s status monitors on there. Oh! And I want my PC to back itself up every hour over the network automatically with the ability to restore files I deleted last week. I’ve got KDE Connect on my phone and it’s awesome!”
Then, later: “I bought a Raspberry Pi and I want to turn it into a home theater streaming system and emulation station.”
…and later: “What Docker images do you guys recommend? I want to set up some home automation. What do you guys think of Pi-hole?”
“I’ve got four Raspberry Pis doing various things in my home and I’m thinking about getting a Banana Pi board to be my router. OpenWRT or full Linux on it? What do you guys think?”
…and even later: “I taught myself Python…” 🤣
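About that NASA picture-of-the-day wallpaper from step one: it’s maybe fifteen lines of Python. A minimal sketch, assuming GNOME (gsettings) and NASA’s public APOD API with the shared DEMO_KEY; KDE folks would swap the last step for plasma-apply-wallpaperimage:

```python
#!/usr/bin/env python3
"""Set the desktop background to NASA's Astronomy Picture of the Day.

Minimal sketch: assumes GNOME (gsettings) and NASA's public APOD API
with the shared DEMO_KEY. Standard library only.
"""
import json
import subprocess
import urllib.request
from pathlib import Path

APOD_URL = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"
DEST = Path.home() / ".cache" / "apod.jpg"
DEST.parent.mkdir(parents=True, exist_ok=True)

with urllib.request.urlopen(APOD_URL) as resp:
    meta = json.load(resp)

# Some days the APOD is a video; only set the wallpaper for images.
if meta.get("media_type") == "image":
    urllib.request.urlretrieve(meta.get("hdurl", meta["url"]), DEST)
    subprocess.run(
        ["gsettings", "set", "org.gnome.desktop.background",
         "picture-uri", f"file://{DEST}"],
        check=True,
    )
```

Drop that in a cron job or systemd timer and the arc above is officially underway.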
Correction: Education is not OK.
AI is just giving poor kids the same opportunities rich kids have had for decades. Opportunities for cheating the system that was made specifically not to give students the best education possible but instead to bring them up to speed on the bare minimum required to become factory workers.
Except we don’t have very many factories any more. And we don’t have jobs for all these graduates that pay a living wage.
The banks are going to have to get involved soon. They’re going to have to figure out a way to load up working-age people with long term debt without college being involved.
To summarize: Requiring installation by electricians means that people will still DIY… They just won’t bother to get a permit/get it inspected.
Whereas allowing DIY encourages permits and inspection.
Retiring would be a step down for that guy.
To me, this is like saying, “4chan has turned into a cesspool!” Yeah: It was like that from the start. YOU were the ones that assumed it was ever safe!
You’re posting stuff on the public Internet to a website for adults where literally anyone can sign up and comment FFS.
If you want good moderation you need community moderation from people in that community. Not some giant/evil megacorp!
There’s all sorts of tools and platforms that do this properly, easily, and for free. If you don’t like Meta’s websites, move off of them already!
The courts need to settle this: Do we treat AI models like a Xerox copier or an artist?
If it’s a copier then it’s the user that’s responsible when it generates copyright-infringing content. Because they specifically requested it (via the prompt).
If it’s an artist then we can hold the company accountable for copyright infringement. However, that would result in a whole shitton of downstream consequences that I don’t think Hollywood would be too happy about.
Imagine a machine that can make anything… Like the TARDIS or Star Trek replicators. If someone walks up to the machine and says, “make me an Iron Man doll” would the machine be responsible for that copyright violation? How would it even know if it was violating someone’s copyright? You’d need a database of all copyrighted works that exist in order to perform such checks. It’s impossible.
Even if you want OpenAI, Google, and other AI companies to pay for copyrighted works there needs to be some mechanism for them to check if something is copyrighted. In order to do that you’d need to keep a copy of everything that exists (since everything is copyrighted by default).
Even if you train an AI model with 100% ethical sources and paid-for content it’s still very easy to force the model to output something that violates someone’s copyright. The end user can do it. It’s not even very difficult!
We already had all these arguments in the 90s and early 2000s back when every sane person was fighting the music industry and Hollywood. They were trying to shut down literally all file sharing that exists (even personal file shares) and search engines with the same argument. If they succeeded it would’ve broken the entire Internet and we’d be back to using things like AOL.
Let’s not go back there just because you don’t like AI.
This is why combining religion and government is always a bad idea.
To be fair, the world of JavaScript is such a clusterfuck… Can you really blame the LLM for needing constant reminders about the specifics of your project?
When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to “learn from”—you’re just asking for trouble. Not just from trying to get an LLM to produce what you want but also trying to get humans to do it.
This is why LLMs are so fucking good at writing Rust and Python: There’s only so many ways to do a thing and the larger community pretty much always uses the same solutions.
JavaScript? How can it even keep up? You’re using yarn today but in a year you’ll probably be like, “fuuuuck, this code is garbage… I need to convert this all to [new thing].”
Define “reasoning”. For decades software developers have been writing code with conditionals. That’s “reasoning.”
LLMs are “reasoning”… They’re just not doing human-like reasoning.
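To make that concrete, here’s the kind of “reasoning” a conditional encodes. A toy sketch, nothing more:

```python
def should_retry(status_code: int, attempts: int) -> bool:
    """Deciding via conditionals: the 'reasoning' software has always done."""
    if attempts >= 3:
        return False  # give up after three tries
    if status_code in (429, 503):
        return True   # transient errors are worth retrying
    return False      # anything else is a hard failure

print(should_retry(503, 1))  # True
print(should_retry(404, 1))  # False
```

Nobody calls that human-like thinking, but it is a machine weighing conditions and picking an action.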
That just means they’d be great CEOs!
According to Wall Street.
I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.
Think about “normal” programming: An experienced developer (that’s self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog standard bullshit so they end up copying and pasting from previous work, Stack Overflow, etc., because it’s nothing special.
The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then, after all that effort to avoid having to think, they sigh and actually start thinking in order to program the thing they need.
LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: the stuff that requires actual thinking.
Eventually someone is going to figure out how to auto-generate LoRAs from test cases, via trial and error, that the AI model then uses to improve itself (toy sketch below). That is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.
AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
The only reason we’re not there yet is memory limitations.
Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.
Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.
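If you want a feel for what that “LoRAs from test cases plus trial and error” loop might look like, here’s a deliberately toy sketch. Random perturbations of a single weight stand in for real low-rank adapter training, and every name in it is made up:

```python
import random

# Toy stand-in for "auto-generate LoRAs from test cases": the "model" is a
# single weight, a candidate "adapter" is a random perturbation of it, and
# the test suite is the fitness function. Real systems would train low-rank
# adapters against real evals; this only shows the trial-and-error loop.

def model(x: float, w: float) -> float:
    return w * x

TEST_CASES = [(1, 3), (2, 6), (4, 12)]  # behavior we want: w == 3

def loss(w: float) -> float:
    return sum(abs(model(x, w) - y) for x, y in TEST_CASES)

w = 1.0  # the "pretrained" weight
for _ in range(10_000):
    candidate = w + random.gauss(0, 0.1)  # generate a candidate "adapter"
    if loss(candidate) < loss(w):         # keep it only if the tests improve
        w = candidate

passed = sum(1 for x, y in TEST_CASES if abs(model(x, w) - y) < 0.05)
print(f"w = {w:.3f}, passing {passed}/{len(TEST_CASES)} test cases")
```

The gap between this toy and the real thing is exactly the memory problem above: doing it with actual adapters means holding something close to the model’s training-time footprint, plus the spare matrix math processors to burn on candidates.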
From a copyright perspective, you don’t need to ask for permission to train an AI. It’s no different than taking a bunch of books you bought second-hand and throwing them into a blender. Since you’re not distributing anything when you do that you’re not violating anyone’s copyright.
When the AI produces something though, that’s when it can run afoul of copyright. But only if it matches an existing copyrighted work close enough that a judge would say it’s a derivative work.
You can’t copyright a style (writing, art, etc.) but you can violate a copyright if you copy, say, a mouse in the style of Mickey Mouse. So then the question, from a legal perspective, becomes: Do we treat AI like a Xerox copier or do we treat it like an artist?
If we treat it like an artist the company that owns the AI will be responsible for copyright infringement whenever someone makes a derivative work by way of a prompt.
If we treat it like a copier the person that wrote the prompt would be responsible (if they then distribute whatever was generated).
But that’s no fun at all!
Most people want a system that lets them dress politicians in lamé, opposite-sex, revealing clothing. Why else would such a system exist? Nobody cares what they (themselves) would look like in such clothes!
I’m sure in the 2.0 version there will be a “chest” slider—due to popular demand!
The big difference is that updates in Linux happen in the background and aren’t very intrusive. Your hard drive will be used here and there as it unpacks packages, but the difference between, say, apt and Windows Update is stark. Windows Update slows everything down quite a lot.
Her goal isn’t to get them to stop; it’s to get them to recognize what garbage writing is and how to fix it so it isn’t garbage anymore.
I wish English teachers did this instead of… Whatever TF they’re doing instead.
This is something they should’ve been doing all along. Long before the invention of LLMs or computers.
No hot dog surprise cereal either, apparently!