

… And Mac’s target audience can buy CrossOver ($74, though). Or install DREAMM for many LucasArts games; the speed of development of that thing and its functionality are amazing, and I wonder how much vibecoding was involved.
To rephrase a common quote: talk is cheap, that’s why I talk a lot.


They were capitalizing on build quality, convenience, brand, ecosystem, cultism, and software quality, but not so much on raw power.
Now raw power has become more expensive for suppliers, and to sell on the things listed above you have to restructure your marketing and everything else. Apple doesn’t have that problem. They also rid themselves of the legacy problem through two softer transitions (dropping 32-bit Intel support, then moving to ARM) instead of one hard break.


Apple’s user share is beneficial for Linux’s user share.


One thing I don’t appreciate about Apple is that you always have to use a dongle concentrator.
But yes, a laptop version of the iPad is what this is, and it’s a thing in demand.
They were preparing for an offensive and it’s starting! The order is given, we are starting to bomb Wintel.
It’ll be a better world. macOS devices are pretty normal in terms of being locked down, compared to iOS. And there will be some competition. Apple winning is good; they’ll raise quality standards. And they won’t kill MS completely, just carve out a piece of the market, perhaps more than half.


Cheaper, but it breaks in your hands. In the case of laptops, mechanical wear is important. This thing might be weak, but it may last a decade (well, I don’t know).


And those are cheap because law enforcement isn’t well funded at all; that is, spending goes mostly to secret police, riot police, and khashoggizers.
Anyway. Building data centers in a hot climate is what seems weirdest. And seawater is not very good for cooling.


You are writing pretentious nonsense, go someplace else.


There are lightweight models as good as some heavier ones. It’s a bit like Intel’s advertised tick-tock process: heavy, memory-hungry models are the “tick”, but there’s a “tock” - say, the “lfm2.5-thinking” model, the light version in the ollama repository, seems almost as good as qwen3.5 to me, except it’s very lightweight and lightning-fast in comparison.
These things are being optimized. It’s just that in the market-capture phase nobody bothered.
As for them not being used correctly - yeah, absolutely. My idea of their proper use is some graph-based system, with each node processed by a select LLM (or just a piece of logic), and with a select set of tools, actions, and choices available to each. A bit like ComfyUI, but something saner than a zoom-based web UI - more like the macOS Automator application.
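To make the graph idea concrete, here’s a minimal sketch of what I mean, with every name (Node, run_graph, the stub handlers) invented for illustration - real nodes would wrap actual LLM calls and real tools:

```python
# Each node is either plain logic or an LLM call (stubbed here with lambdas),
# with its own fixed set of allowed tools and outgoing edges.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    handler: Callable[[str], str]                              # logic or an LLM wrapper
    tools: dict[str, Callable] = field(default_factory=dict)   # tools this node may use
    next: list[str] = field(default_factory=list)              # edges to other nodes

def run_graph(nodes: dict[str, Node], start: str, payload: str) -> str:
    """Walk the graph from `start`, letting each node transform the payload."""
    current = start
    while current:
        node = nodes[current]
        payload = node.handler(payload)
        current = node.next[0] if node.next else None
    return payload

# Two toy nodes: one stand-in "LLM" step, one deterministic piece of logic.
nodes = {
    "classify": Node("classify", handler=lambda s: s.lower(), next=["route"]),
    "route":    Node("route",    handler=lambda s: f"handled: {s}"),
}
print(run_graph(nodes, "classify", "Some INPUT"))  # handled: some input
```

The point of the structure is that each node’s tool set is declared up front, so no node can reach for a capability it wasn’t given - unlike one big prompt with every tool attached.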


It’s “Large Language Model”, and the point is in “Large”: trained on really large datasets, with a well-selected set of attention dimensions, it’s good at extrapolating language that describes the real world, and thus at extrapolating how real-world events will be described. So the task is more that of an oracle.
I agree that providing anything accurate is not the task. It’s the opposite of the task, actually: all the usefulness of LLMs is in areas where you don’t have a good enough model of the world but need to make some assumptions.
Except for “diagnose these symptoms” - with a proper framework around it (only using it for flagging things, not for actually making decisions; things that have been discussed thousands of times), that’s a valid task for them.


All LLMs are using a tool for the wrong task then, in your opinion? So in the composite object of “LLM” what is the tool and what is the task?


More expensive, but still autonomous which is very precious.


A bad craftsman blames his tools, is what I’d answer to this.


Also, optical fiber is used a lot on battlefields now; it just stays there afterwards. There’s a lot of it to be collected.


Models are becoming more optimized. I’ve recently tried LFM2.5, the small version, and it’s ridiculously close in usefulness to Qwen3.5, for example. Or RNJ-1.
To maintain, meaning to keep datasets up to date - well, sort of expensive, but they were assembling those as a side effect of their main businesses.
So this is not what’ll kill them. Their size will. These are very big companies, with lots of internal corruption and inefficiency pulling them down. As for the few new AI companies, which I think are going to survive: they are centered around specific products; some will die, but I’d expect LiquidAI or Anthropic or such to still be around some time after the crash.
The crash might coincide with a bubble burst, but notice how this family of technologies really is delivering results. Instead of a bunch of specialized applications, people are asking LLMs and often getting good-enough answers. LLM agents can retrieve data from web services, perform operations, and assist in using tools.
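The “agents can use tools” part boils down to a dispatch loop. A toy sketch, with the tool registry and the pre-baked model “decisions” invented for illustration - real frameworks drive this with structured tool-call messages from the model:

```python
# Execute a list of (tool_name, argument) decisions a model produced,
# restricted to a fixed registry of allowed tools.
def run_agent(steps, tools):
    results = []
    for tool_name, arg in steps:
        if tool_name not in tools:
            results.append(f"unknown tool: {tool_name}")
            continue
        results.append(tools[tool_name](arg))
    return results

tools = {
    "fetch": lambda url: f"<contents of {url}>",  # stand-in for a web request
    "upper": str.upper,                           # a trivial "operation"
}
print(run_agent([("fetch", "example.com"), ("upper", "ok")], tools))
```

The registry is the whole safety story here: the model only proposes names and arguments, and anything outside the registry is refused rather than executed.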
You shouldn’t look at the big ones in the cloud, but rather at what value local LLMs give you for the energy spent. Right now it’s not that good, but honestly approaching good. I don’t feel like they’ve stopped getting better. Human time is still more expensive. The tools are there and are being improved, and humans are slowly gaining experience in using them, which makes them more efficient at various tasks.
It is to all kinds of reference and knowledge tools what Google was to search.
And there’s one just amazing thing about these models: they are self-contained, even if some can use tools to access external sources. Our corporate overlords have been building a dependent, networked world for 20 years, only to break it by popularizing a technology that almost neuters it. They probably thought they were reaping the crops of the web for themselves; instead they taught everyone that you don’t have to eat at the diner, you can take the food home.


It’s anthropological.
A common trope in stories is that to gain any kind of scary access you need to find a “hacker” who’ll do it, but at the same time it’s some obscure power that nobody has, not even the company being “hacked” into.
People still feel as if such news were something unique that couldn’t be repeated just like that, easily, with them and the things they use. There’s nothing unique about computers.


That doesn’t make the technology bad. You just have to remember that its weak and strong sides are logically connected to each other. It’s fuzzy logic based on probabilities of the next token in a few attention spaces - thus there will always be artifacts.
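The “probabilities, thus artifacts” point in one picture: sampling from a softmax means the plausible-but-wrong token always carries nonzero mass. Toy logits below are made up; no real model involved:

```python
# Softmax over toy next-token logits, then repeated sampling: the wrong
# tokens keep showing up with small but nonzero frequency.
import math
import random

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab  = ["Paris", "London", "pizza"]
logits = [4.0, 2.0, 0.5]          # "Paris" is most likely, but never certain
probs  = softmax(logits)

random.seed(0)
samples = random.choices(vocab, weights=probs, k=1000)
# That residual probability mass on the wrong answers is where the
# artifacts come from; you can shrink it, but not remove it.
print({w: samples.count(w) for w in vocab})
```

Lowering the temperature sharpens the distribution, but the connection holds: the same mechanism that lets the model extrapolate is the one that occasionally emits “pizza”.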


People care about the things they care about breaking in their hands and exploding in their faces.
ASD and BAD, probably also ADHD.
People also love to assume that what they keep on their hard drives and memory sticks is somehow preserved over time and across machines. Bit flips and other physical effects on your imagined perfect machine are why it’s not: it’s as fragile as what’s written on paper, or worse. A cat decides to piss on your grandpa’s diary and there’s no more diary. Or humidity slowly eats it. With computers it’s even faster.
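A toy demonstration of the bit-flip point (the diary string and the flipped bit are invented; real media errors are messier, but the effect is the same):

```python
# One flipped bit silently changes the stored bytes; a checksum is the
# only way to notice.
import hashlib

data = bytearray(b"grandpa's diary, digitized")
original_digest = hashlib.sha256(data).hexdigest()

data[3] ^= 0b00000100            # a single cosmic-ray-style bit flip: 'n' -> 'j'
corrupted_digest = hashlib.sha256(data).hexdigest()

print(data.decode())                         # the text quietly changed
print(original_digest != corrupted_digest)   # True: the checksum catches it
```

Without stored checksums and periodic scrubbing, nothing on the disk ever tells you this happened.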


Each and every one of them, moron. Everything you do on a computer every moment.


I don’t want to use the M-word or the T-word, but those “made up use cases” constitute every computer program in existence.
When I have too many tabs, I press that blue button of the “one tab” extension.