

It should at least be able to do cleaning and cooking.
So that’s what we need android girlfriends for.
Rephrasing a common quote - talk is cheap, that’s why I talk a lot.




The one where a wife was killed was really progressive, though. It’s a shame it’s dead.


People are talking about AI killbots and an upcoming crash at the same time, while complaining about AI slop and vibe coding.
Sorry, but if something is usable for making killbots, there will be no crash. AI slop proves that making slop is useful to someone. And vibe coding proves that someone gets things working in production with those tools. Saying that quality suffers is like saying that cob houses are not comparable to brick houses and vice versa. Both exist. There are places where cob-related technologies are still common in construction.
But the most important reason is the first one: if some technique gives you a more convenient and sharper stick to kill someone from another tribe, then that technique stays as the tribe’s cherished wisdom.
As for LLMs consuming too many resources … you might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when all the arguments about operations costing more in resources than they return, and about everything being funded by investors, are suddenly no longer true.
I have been converted. Converted back, one might say; there was a time around 2011-2014.


Which will happen regardless.
Also, where there are AI safeguards, they are usually in place because of the chain of command and authorization, and those mattered so much because all the likely applications of any AI during the Cold War had a very steep damage curve.
Small killbots don’t have such a damage curve. If they kill someone by mistake, the rest of the population learns to be careful and not draw the attention of those operating them. The reasons that applied to nukes and radars, where you need chains of specific people with clear authorization to answer for why half the world melted, won’t force anyone to put such limits here.


BTRON will get a second chance?


Ads back then were so cool, it felt like real magic, and it was not hard to believe the PS2 was that good.


You can make a lot of things with a good microwave, but just putting something in doesn’t work for that purpose, yes.


a(n effectively) non-deterministic
Almost started to type an angry response to that.
This lady should feel lucky that it only ran amok in her inbox.
I have done that with less than an LLM. Just a typo in my Mutt configuration, and a few hundred e-mails were deleted that shouldn’t have been. After that I decided that removing spam is best done by first sorting it into a separate mailbox and then reviewing manually. Plenty of people have had the same experience.
Which just means that if you use an AI agent (and why not, it appears people do want them), then you should perhaps use many dedicated agents only having access each to its own narrow set of available actions.
It’s more important with things based on fuzzy logic than it is with scripts. But people use Flatpaks and Snaps and AppImages for isolation, among other things, and in the olden days I ran Skype from a separate user under Linux. (It was such a stupid fashion: everyone wanted Skype, but everyone also considered it proprietary spyware, and nobody thought about the fact that an X11 client can spy on the whole display and all keyboard and mouse events anyway; and that fashion didn’t involve running Skype in Xephyr or Xnest, just from a separate user.)
So the thought is not new. These agents should just be used with clear privilege separation, with some uniform way to declare privileges and interfaces for AI agents, and with those interfaces kept simple enough. One can hope.
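A minimal sketch of that separation, assuming a hypothetical `Agent` wrapper around an invented table of actions (none of these names come from a real library):

```python
# Hypothetical sketch: each agent only ever sees the narrow set of
# actions it was explicitly granted; everything else is refused.

class PrivilegeError(Exception):
    pass

# The full set of actions the surrounding system could perform
# (stubbed out here as simple functions).
ACTIONS = {
    "read_inbox": lambda: "42 messages",
    "move_to_spam": lambda: "moved",
    "delete_mail": lambda: "deleted",
    "send_mail": lambda: "sent",
}

class Agent:
    """Invokes only the actions it was explicitly granted."""

    def __init__(self, name, allowed):
        self.name = name
        self.allowed = frozenset(allowed)

    def act(self, action):
        if action not in self.allowed:
            raise PrivilegeError(f"{self.name} may not {action}")
        return ACTIONS[action]()

# A spam-sorting agent may read and move mail, but can never delete
# or send, no matter what its model decides to output.
sorter = Agent("spam-sorter", {"read_inbox", "move_to_spam"})
print(sorter.act("move_to_spam"))  # allowed
try:
    sorter.act("delete_mail")      # refused: outside its privilege set
except PrivilegeError as e:
    print(e)
```

The point is only that the boundary is enforced outside the model, the same way a separate Unix user was a boundary around Skype.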


Separation of server styles, server markup, and client styles is definitely something Gemini lacks, having no server styles at all.
But it’s not so much a problem of browsers as of the environment in which information is shared and propagated. As long as we connect to websites using a browser, those websites will behave however their owners wish, inflating web standards and requiring complex browsers.
I was dreaming of something like a “hypertext Usenet” and writing descriptions of another system I was interested in trying to build. I am still not even close to that, and I’m not sure I’m still interested, because it appears NOSTR now has much of what I wanted in its standards.
Basically, if you imagine a system for propagating posts addressable by ids, with markup inside that refers to styles and contains hyperlinks to other posts by id, you can throw away the idea of a website and still have the hypertext web. That markup can be anything, as long as the URLs in the links leading to images and such (and other pages) use those ids or are at least Blossom-compliant.
I think that among the new protocols, NOSTR is the one most likely to eventually attain such functionality. People here wouldn’t like it, I suppose, because of its huge overlap with the Bitcoin community and because most clients and client libraries are for the web. But there’s now a C client library that is functional enough, and architecturally NOSTR is worlds above the thinking of the designers of Lemmy, for example.
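A toy sketch of such a system, with the post structure invented for illustration (real NOSTR events and Blossom blobs differ in detail); it only shows posts addressed by the hash of their content and linked to each other by id:

```python
# Invented structure: a "post" is markup plus id-links to other posts,
# and its own id is derived from its content, so any relay can serve
# it and any client can verify it.
import hashlib
import json

def make_post(markup, links):
    body = {"markup": markup, "links": sorted(links)}
    serialized = json.dumps(body, sort_keys=True).encode()
    post_id = hashlib.sha256(serialized).hexdigest()
    return {"id": post_id, **body}

# A local store standing in for whatever relays propagate the posts.
store = {}

root = make_post("Welcome page", links=[])
store[root["id"]] = root

reply = make_post("See also the welcome page", links=[root["id"]])
store[reply["id"]] = reply

# Following a hyperlink is a lookup by id, not a request to a website.
target = store[reply["links"][0]]
print(target["markup"])  # Welcome page
```

With ids derived from content, there is no “website” in the loop at all; any relay holding the post can answer the lookup.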


Oh, I see. So it’s disdain for the open source community, is it.
FOSS has nothing to do with working for free. A freeware author can work for free and never touch FOSS. A FOSS project can be developed entirely by people paid wages for it.
I think this sentence made me throw up in my mouth a little… for several reasons.
Economic illiteracy is like that.
OK, everyone can work whatever way they want. I was just in a mood.


Since I like the parallel between that and Star Wars holocrons: perhaps there should be separate dedicated devices. Without networking capability, but with the ability to add “layers” of the model in the form of memory cards or optical discs containing database files. It helps that the hardware for efficiently running an LLM is different from what fits general-purpose applications. People use GPUs for that, but these could be purely dedicated computers of a particular kind.
The downside is that they wouldn’t be usable for other things. The upside is that you get something very convenient without fearing what everyone fears. Display a QR code if you want to copy-paste an answer. Something like that.
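A rough sketch of how such a device could pick up model “layers” from inserted media, with the directory layout and file extension invented for illustration:

```python
# Invented layout: each inserted card mounts as a directory containing
# plain ".layer" database files; later cards extend earlier ones.
import os
import tempfile

def discover_layers(media_root):
    """Return layer files from all inserted media, in a stable order."""
    layers = []
    for card in sorted(os.listdir(media_root)):
        card_dir = os.path.join(media_root, card)
        for name in sorted(os.listdir(card_dir)):
            if name.endswith(".layer"):
                layers.append(os.path.join(card_dir, name))
    return layers

# Simulate two inserted cards, each carrying one layer file.
media_root = tempfile.mkdtemp()
for card, layer in [("card0", "base.layer"), ("card1", "update.layer")]:
    os.makedirs(os.path.join(media_root, card))
    open(os.path.join(media_root, card, layer), "w").close()

found = discover_layers(media_root)
print([os.path.basename(p) for p in found])  # ['base.layer', 'update.layer']
```

The device itself decides what “stacking” the layers means; the media only carry inert files, so nothing phones home.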


This can also apply to spam e-mails. We can acknowledge that the problem doesn’t depend on whether we want to have it.


instead I’m here boning up on the Ferengi Rules of Acquisition.
I mean, “Ferengistan” is Europe, and in the wider sense the West, in Farsi, so it’s pretty logical.
(Which is why I don’t subscribe to the theory that Ferengi are an antisemitic trope. They are a subversive futurist trope, “seeing ourselves through the eyes of others the same way we often see them”.)
Everyone likes to see themselves as the heroes of some universe.
It’s also true of some Soviet science fiction: works by the Strugatsky brothers communicate that deep, painful wish for “us” to be that society of scientific workers and doctors, and for the barbaric, lost people they visit and help to be “them” - but that’s not how the world is. Even the “approved” Ivan Yefremov, with his “Bull’s Hour”, shows a space colony which is supposedly a remnant of the “capitalist and imperialist” world yet is surprisingly reminiscent of the USSR, while the team of heroes from heaven that comes to fix them doesn’t resemble anything from the USSR.


which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information.
They should make some kind of layered models, where the user sets weights for the layers.
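A toy sketch of what user-set layer weights could mean, with each “layer” reduced to a plain table of answer scores (all names invented; a real model would mix logits or adapter outputs instead):

```python
# Invented illustration: combine per-layer scores into one table,
# letting the user decide how much each layer counts.

def mix_layers(layers, weights):
    combined = {}
    for layer, weight in zip(layers, weights):
        for answer, score in layer.items():
            combined[answer] = combined.get(answer, 0.0) + weight * score
    return combined

# The historical layer is larger and confidently out of date.
old_layer = {"Pluto is a planet": 0.9, "Pluto is a dwarf planet": 0.1}
new_layer = {"Pluto is a planet": 0.1, "Pluto is a dwarf planet": 0.9}

# The user downweights history and upweights the update layer.
scores = mix_layers([old_layer, new_layer], weights=[0.2, 0.8])
best = max(scores, key=scores.get)
print(best)  # the updated answer wins despite the bigger old layer
```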
But in any case, this is not necessarily what I meant - just that a big project relying upon unpaid maintainers is flawed, especially when somebody makes real money on it.
There have been plenty of cases of state actors putting in backdoors. Those were most likely humans, not bots.


To expose places where people work thanklessly, guaranteeing someone’s pretty thankful bottom lines? Working for free isn’t altruism; it hurts other workers. For example.
You know, sometimes this capitalism thing seems wiser, viewed from a fairly Marxist standpoint, than other not-very-well-thought-through schemes.


In what world do you imagine there wouldn’t be 87 forks that went in a different direction.
In every world. Linux is not just the codebase, it’s all the developer work going into it daily. Hundreds of forks and downstreams can pick whichever direction they want, most of that work will still be directed one way.


Because a gun like that is considered a “garage gun” and those are legal under federal law because it’s essentially impossible to stop somebody from gluing together a pipe and a nail to strike the bullet with and fire it down the pipe barrel
I live in Russia, and here this is very illegal. It can suddenly become illegal where you live too.


Which perhaps means that it shouldn’t be thankless, and that the technology, since it exists, should be used to screen contributions.


They have to choose their battles.
How do you know?