

Oh, we all know what’s in the Russian kompromat, Donnie, and what you really really want is the Nobel Piss Prize.
The only reason I won’t piss on your grave once you’re gone is because you’d no doubt enjoy it.


No worries mate, we can’t all be experts in every field and every topic!
Besides, there are other AI models that are relatively small and depend on processing power more than RAM. For example, there’s a bunch of audio analysis tools that don’t just transcribe speech but also diarise it (split it up by speaker), extract emotional metadata (e.g. certain models can detect sarcasm quite well, others spot general emotions like happiness, sadness or anger), and so on. Image categorisation models are also super tiny, though usually you’d want to load them into the DSP-connected NPU of appropriate hardware (e.g. a newer-model “smart” CCTV camera would use a SoC with an NPU to hold the detection models, doing the processing for detecting people, cars, animals, etc. onboard instead of on your NVR).
Also, by my count, even somewhat larger training workloads, such as micro wakeword training, would fit into the 192MB of V-Cache.
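That claim is easy to sanity-check with back-of-the-envelope arithmetic. Every figure below (total cache size, parameter count, bytes per parameter, the training overhead multiplier) is an illustrative assumption, not a measurement:

```python
# Rough check: does a tiny model (and its training state) fit in stacked L3?
# All numbers here are assumptions for illustration, not measured figures.

VCACHE_BYTES = 192 * 1024 * 1024  # assumed total L3 with V-Cache on both CCDs

def model_footprint(num_params: int, bytes_per_param: int = 4) -> int:
    """Rough resident size of a model: parameters only, fp32 by default."""
    return num_params * bytes_per_param

# A micro wakeword-style model is tiny: assume ~300k parameters.
inference = model_footprint(300_000)  # weights only
# Training roughly needs weights + gradients + optimiser state (Adam keeps
# two extra copies), plus activations; call it ~4x the weights as a crude
# lower bound.
training = 4 * inference

print(f"inference: {inference / 1024:.0f} KiB")
print(f"training:  {training / 1024:.0f} KiB")
print("fits in V-Cache:", training < VCACHE_BYTES)
```

Even with a generous multiplier, a wakeword-sized model is orders of magnitude below the cache size; it’s the batch data and framework overhead that would eat the rest.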


Oh, good to know. Last time I looked into WASM, this wasn’t really an option.


AI workflows aren’t limited to LLMs, you know.
For example, TTS and STT models are usually small enough (15-30MB) to be loaded directly into V-Cache. I was thinking of such small-scale local models, especially considering AMD’s recent forays into providing a mixed-environment runtime for their hardware (the GAIA framework, which can dynamically run your ML models on CPU, NPU and GPU, all automagically).


See, the main issue with that is you need to bundle everything into the app.
Modern computing is inherently cross-dependent on runtimes, shared libraries and whatnot, to save space. Why bundle the same 300MB runtime into five different apps when you can download it once and share it between the apps? Or better yet, have a newer, backwards-compatible version of the runtime installed and still share it between apps.
With WASM you’re looking at bundling every single dependency, runtime and framework into the final binary. Which is fine for one-off small things, but when everything is built that way, you’re sacrificing tons of storage and bandwidth unnecessarily.
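The storage cost in that argument is trivial arithmetic. The 300MB runtime and five apps are the comment’s own hypothetical; the per-app code size is an added assumption:

```python
# Illustration of the bundling-overhead argument: five apps sharing one
# runtime vs. each app statically bundling its own copy.
RUNTIME_MB = 300   # hypothetical shared runtime size (from the comment)
APPS = 5
APP_CODE_MB = 20   # assumed size of each app's own code

bundled = APPS * (APP_CODE_MB + RUNTIME_MB)  # every app ships the runtime
shared = APPS * APP_CODE_MB + RUNTIME_MB     # runtime installed once

print(f"bundled: {bundled} MB")
print(f"shared:  {shared} MB")
print(f"wasted:  {bundled - shared} MB")
```

And the waste grows linearly with the number of apps, which is the core of the objection.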


Disappointing but not unexpected. Most Chinese companies still work on the “absolute secrecy because competitors might steal our tech” ideology. Which hinders a lot of things…


What, you don’t have a few spare photonic vacuums in your parts drawer?


Well, yeah, when management is made up of dumbasses, you get this. And I’d argue some 90% of all management is absolute waffle when it comes to making good decisions.
AI can and does accelerate workloads if used right. It’s a tool, not a replacement for a person. You still need someone who can pick the right models, research the right approaches and so on.
What companies need to realise is that AI accelerating things doesn’t mean you can cut your workforce by 70-90% and still keep the same deadlines; it means that with the same workforce you can deliver things 3-4 times faster. And faster delivery means new products (be it a new feature or a truly brand-new standalone product) have a lower cost basis even though the same number of people worked on them, and the quicker cadence means a quicker idea-to-profit timeline.


It actually makes some sense.
On my 7950X3D setup the main issue was always making sure to pin games to a specific CCD, and AMD’s tooling is… quite crap at that. Identifying the right CCD was always problematic for me.
Eliminating this by adding V-Cache to both CCDs, so it doesn’t matter which one you pin to, is a good workaround. And IIRC V-Cache also helps certain (local) AI workflows, meaning running a game next to such a model won’t cause issues, as each gets its own CCD to run on.
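On Linux, the pinning workaround can be sketched with nothing but the standard library (`os.sched_setaffinity`). The CPU-to-CCD mapping below is an assumption; check `lscpu -e` for the real topology on a given machine:

```python
import os

# Assumed topology: 16-core dual-CCD part with SMT, logical CPUs 0-15 on the
# first CCD and 16-31 on the second. Real layouts vary; verify with lscpu.
CCD0 = set(range(0, 16))

def pin_to_ccd(cpus: set[int]) -> set[int]:
    """Restrict the current process to the given CPUs (Linux only).

    Intersects with the CPUs actually available, so the sketch still runs
    on machines with fewer cores; returns the effective affinity mask.
    """
    available = os.sched_getaffinity(0)
    target = (cpus & available) or available
    os.sched_setaffinity(0, target)
    return os.sched_getaffinity(0)

mask = pin_to_ccd(CCD0)
print("running on CPUs:", sorted(mask))
```

To pin a game rather than your own script, the same call works on another process ID, or you can launch it under `taskset -c 0-15`.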


Here’s a better idea: allow people to carry certain self-defence items.
It’s been clear for a while that the Met can’t be arsed to actually protect the people of London - there’s no preventative, retrospective or any other kind of action for most “petty” crime. As long as you don’t get stabbed, you get a case number that gets closed off, and have fun dealing with your insurance company, who’ll make you pay an exorbitant premium and still refuse to cover certain items.
I’d agree with banning self-defence items if policing were in place that prevented petty crime like pickpocketing, muggings and robberies, but the police have devolved into a ticketing system thanks to the Tories restricting budgets and selling off assets that would’ve generated income… and until the Met is restored to a properly functioning institution - including a serious revamp regarding the rampant racism, sexism, etc. among its officers - people need a way to defend themselves without relying on the police.


There’s always dookie in the banana stand?