It’s literally the same chip designers, production facilities and software. Every product built on <5nm silicon competes for the same manufacturing capacity (fab time at TSMC in Taiwan), and all Nvidia GPUs share a lot of commonality in their software stack.
The silicon fab producing the latest Blackwell AI chips is the same fab producing the latest consumer silicon for AMD, Apple, Intel and Nvidia alike. (Let’s ignore the fabs making memory for now.) Internally at Nvidia, I assume they have shuffled lots and lots of resources over from the consumer-oriented parts of the company to the B2B-oriented parts, severely reducing consumer focus.
And then there’s the intentional price inflation and market segmentation. Cheap consumer GPUs that are a bit too efficient at LLM inference would compete with Nvidia’s data-center offerings. The amount of consumer-grade silicon used for AI inference is already staggering, and Nvidia is actively holding back that market segment.
I want cheap GPUs at home please!
I’d love this, but not Nvidia.
Me too, but the GPUs used for AI are not the same as what we would use at home.
Maybe the factories could produce both kinds and they would be cheaper, but that is speculation at this point.