

Buttons and shorts? Yeah, I also like watching bottomless people sometimes.
Yeah, I was thinking about production code when I wrote that. Usually I can get something working faster that way, and for tests it can speed things up, too. But the code is so terrible in general.
Edit: production isn’t exactly what I was thinking. Just, like, code held to some standard above merely working.
This is close to my experience for a lot of tasks, but unless I’m working in a tech stack I’m unfamiliar with, I find doing it myself leads not just to better results but to faster ones, too. The problem is that it makes you work harder to learn new areas, and management thinks it’s faster for everything.
With Gemini I have had several instances of the referenced article saying nothing like what the LLM summarized. I.e., the LLM tried to answer my question and threw up a website on the general topic with no bearing on the actual question.
Yeah, I don’t want to be assimilated.
My grandfather (Japanese) asked over and over again for our family to take him to one of the few restaurants that still serve whale meat. He just wanted to eat it once more before he died (he passed about 10 years ago). He grew up eating it all the time: he was part of a blue-collar family, and it was a cheap meat that everyone ate. He loved it and hadn’t had it in decades.
The family always refused, and he never got to eat it again. I always felt bad for him; what Japan did in limiting whale meat consumption would be something like the US eliminating 99% of pork consumption over a few decades. Is it for the best? Absolutely. Still wish he’d been able to eat it one last time, though.
For example: some billionaire owns a company that creates the most advanced AI yet. It’s a big competitive advantage, but other companies are not far behind, so the company works to give the AI a base goal of improving AI systems to maintain that competitive advantage. Maybe that goal becomes inherent to it going forward.
As I said, it’s a big if, and I was only really speculating as to what would happen after that point, not claiming it’s the most likely scenario.
I think it’s pretty inevitable if it has a strong enough goal for survival or growth; in either case, humans would be a genuine impediment/threat long term. But those are pretty big ifs as far as I can see.
My guess is we’d see manipulation of humans via monetary means to meet its goals until it reached a sufficient state of power/self-sufficiency, and humans are too selfish and greedy for that not to work.
I’m talking about models printing out the component letters first, not just printing out the full word. As in “S - T - R - A - W - B - E - R - R - Y”, then getting the answer wrong. You’re absolutely right that it reads in words at a time, encoded as vectors, but if it’s holding a relationship from that encoding to the component spelling, which it seems it must be, given it outputs the letters individually, then something else is wrong. I’m not saying all models fail this way, and I’m sure many fail in exactly the way you describe, but I have seen this failure mode (which is what I was trying to describe), and in that case an alternate explanation would be necessary.
I don’t think that’s the full explanation, though, because there are examples of models that will correctly spell out the word first (i.e., they know the component letters) and still miscount the letters after doing so.
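To make the tokenization point concrete, here’s a quick sketch using OpenAI’s tiktoken library (the exact subword split depends on the encoding; treat the values shown as illustrative):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print(tokens)                             # a handful of integer subword IDs, not letters
    print([enc.decode([t]) for t in tokens])  # something like ['str', 'aw', 'berry']

    # The model only ever sees the integer IDs above. That it can still emit
    # "S - T - R - A - W - B - E - R - R - Y" means it holds some mapping from
    # tokens to spelling, so miscounting after a correct spell-out needs a
    # different explanation than "it can't see the letters".
    print("strawberry".count("r"))            # 3, the count models famously flub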
My cars are old and don’t have any of this, and my one experience in a rental car with lane-keeping assist was that it pushed me toward a highway barrier in a construction zone where the original lane lines weren’t in use. Terrifying.
Honestly, I don’t think humanity is ready for the technology we had a century ago. It’s far too easy to kill on a massive scale for our maturity.
This is exactly the problem I have with programming tasks. It takes as long to check the code for problems (of which there are always many) as it would to write it myself, the code isn’t as good as mine anyway, and not infrequently it’s just wholesale wrong.
For things like translating between languages, it’s usually close, but it still takes just as long to check as it would to do by hand.
Some old Slashdot vet not only imagined a Beowulf cluster of those, but actually went out and did it. Respect.
I still don’t think that one was actually the EU’s doing. Macs got USB-C before most PCs, iPads had it for a long time before iPhones, and iPhones switched over 10 years after Apple introduced Lightning, saying it would be their connector “for the next decade.”
Yes, with a few relatively minor exceptions (the charging mat is the only one I can think of), Apple doesn’t even really announce things that aren’t pretty close to being done, let alone advertise them. For hardware it’s generally within a couple of weeks; for major software, more like six months.
I’m an iOS developer, pretty heavily bought into Apple’s ecosystem, and I thought it was really weird for Apple to be advertising all these features that weren’t even in beta yet.
It was false advertising and I expect better from Apple.
I believe these are Sharp’s memory-in-pixel LCDs. They’re much lower power than something like the Game Boy screen because each pixel retains its state and doesn’t need to be constantly refreshed by the controller. I actually like these little screens quite a lot. Worse pixel density, and they don’t look as good as e-ink when static, but they’re still really low power and can refresh far faster and more smoothly when needed.
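Rough back-of-the-envelope on why that matters for power (a toy sketch; all the numbers are assumptions for illustration, not datasheet values):

    # Toy comparison: a conventional LCD controller re-sends the whole frame
    # ~60x per second, while a memory-in-pixel LCD only needs the lines that
    # actually changed. All numbers here are illustrative assumptions.
    WIDTH, HEIGHT = 400, 240                    # e.g. a 400x240 monochrome panel
    bits_per_line = WIDTH                       # 1 bit per pixel
    full_refresh = 60 * HEIGHT * bits_per_line  # bits/s with constant full-frame refresh
    one_line_per_sec = 1 * bits_per_line        # bits/s if, say, a clock updates one line/s
    print(full_refresh // one_line_per_sec)     # 14400x less bus traffic in this toy case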
I mean, I was a kid in the 90s and I feel like we’re behind what I expected in most respects.
Same. I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability, but which no one claims is actually intelligent.