No one is claiming that it doesn’t output stuff. The scam lies in the air castles that these companies are selling to us: ideas like how it’ll revolutionise the workplace, cure cancer, and bring about some kind of utopia. Like Tesla’s full-self-driving, these ideas will never manifest.
We’re still at a stage where companies are throwing the slop at the wall to see what sticks, but for every mediocre success there are plenty of stories indicating that it’s just costing money and bringing nothing to the table. At some point the fascination with this novel-seeming technology will wear off, and that’s when the castle comes crashing down on us. By then the fat cats at the top will have cashed out what they can, and we normal people will be forced to carry the consequences.
Exactly. Just like with the dotcom bubble, the websites and web services aren’t the scam; the promise of it being some magical solution to everything is the scam.
Another big aspect of it, unlike the dotcom bubble, is the unit cost of running the models.
Traditional web applications scale really well: the incremental cost of adding a new user to your app is basically nothing, fractions of a cent. With LLMs, scaling is linear. Each machine can only handle a few hundred users, and they’re expensive to run:
Big beefy GPUs are required for inference as well as training, and they need a large amount of VRAM. Your typical home gaming GPU might have 16GB of VRAM, 32GB if you go high end and spend $2,500 on it (just the GPU, not the whole PC). Frontier models need something like 128GB of VRAM to run, and GPUs manufactured for data centre use cost a lot more: a state-of-the-art Nvidia H200 costs around $32k. The servers that can host one of these big frontier models cost, at best, $20 an hour to run and can only handle a handful of user requests, so you need to scale linearly as your subscriber count increases. If you’re charging $20 a month for access to your model, you are burning a user’s monthly subscription every hour for each of these monster servers you have turned on. And that’s generous: it assumes they’re not paying the “on-demand” price of $60/hr.
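To make that concrete, here’s a rough back-of-envelope sketch. The $20/hr server price and $20/mo subscription are the figures from above; the hours-per-month and concurrency numbers are my own guesses, not anyone’s published data:

    # Back-of-envelope: what does it take to break even on inference alone?
    # Assumed figures: $20/hr server and $20/mo subscription are from the
    # comment above; hours per month and concurrency are guesses.
    server_cost_per_hour = 20.0        # reserved-capacity price quoted above
    hours_per_month = 730              # roughly 24 * 365 / 12
    subscription_price = 20.0          # dollars per user per month
    concurrent_users_per_server = 8    # pure guess at "a handful of requests"

    monthly_server_cost = server_cost_per_hour * hours_per_month
    subscribers_to_break_even = monthly_server_cost / subscription_price

    print(f"one server costs ~${monthly_server_cost:,.0f}/month")        # ~$14,600
    print(f"needs ~{subscribers_to_break_even:,.0f} paying subscribers") # ~730
    # i.e. each server has to be shared across hundreds of paying users,
    # yet it can only actually serve a handful of them at any given moment.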
Sam Altman famously said OpenAI are losing money on their $200/mo subscriptions.
If/when there is a market correction, a huge factor in how much interest continues (as with the internet after dotcom) is whether the quality of output from these models justifies the true, unsubsidized price of running them. I do think local models, powered by things like llama.cpp and Ollama and able to run on high-end gaming rigs and MacBooks, might be a possible direction. For now, though, you can’t get the same quality out of these small, local LLMs as you can from the state-of-the-art models.
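For anyone curious what “running a local model” actually looks like, here’s a minimal sketch using the llama-cpp-python bindings. The model filename is a placeholder for whatever quantized GGUF you’ve downloaded, and the parameters are illustrative rather than tuned:

    # Minimal local-inference sketch (pip install llama-cpp-python).
    # Assumes you've already downloaded a quantized GGUF model file;
    # the path below is a placeholder, not a real release.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./some-7b-model.Q4_K_M.gguf",  # placeholder filename
        n_ctx=4096,        # context window
        n_gpu_layers=-1,   # offload as many layers as your VRAM allows
    )

    out = llm(
        "Q: Why are LLMs expensive to serve at scale? A:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])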