• Jhex@lemmy.world · 1 day ago

    And even though NVIDIA is better placed, since they do produce something, the something in play has little value out of the AI bubble.

    NVIDIA could be left holding the bag on massively expanded capacity to produce something that nobody wants anymore (or at least nowhere near current levels), so they are still very much exposed.

      • Jhex@lemmy.world · 23 hours ago

        Me too, but the GPUs used for AI are not the same as what we would use at home.

        Maybe the factories can produce both kinds and they would become cheaper, but that is speculation at this point.

        • enumerator4829@sh.itjust.works · 3 hours ago

          It’s literally the same chip designers, production facilities, and software. Every product using <5nm silicon competes for the same manufacturing capacity (fab time at TSMC in Taiwan), and all Nvidia GPUs share lots of commonalities in their software stack.

          The silicon fab producing the latest Blackwell AI chips is the same fab producing the latest consumer silicon for AMD, Apple, Intel, and Nvidia. (Let’s ignore the fabs making memory for now.) Internally at Nvidia, I assume they have shuffled lots of resources from the consumer-oriented parts of the company to the B2B-oriented parts, severely reducing consumer focus.

          And then we have the intentional price inflation and market segmentation. Cheap consumer GPUs that are a bit too efficient at LLM inference would compete with Nvidia’s DC offerings. The amount of consumer-grade silicon used for AI inference is already staggering, and Nvidia is actively holding back that market segment.
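          That segmentation can be illustrated with a quick back-of-envelope calculation of how much GPU memory LLM inference eats. All numbers below (model size, fp16 weights, a flat 20% overhead for KV cache and activations) are illustrative assumptions, not vendor figures:

          ```python
          def inference_vram_gb(params_billion, bytes_per_param=2, overhead=1.2):
              """Rough VRAM needed to hold an LLM's weights for inference.

              bytes_per_param=2 assumes fp16/bf16 weights; overhead is a
              crude 20% allowance for KV cache and activations.
              """
              return params_billion * 1e9 * bytes_per_param * overhead / 1e9

          # A hypothetical 70B-parameter model next to a 24 GB consumer card:
          needed = inference_vram_gb(70)
          print(f"{needed:.0f} GB needed vs. 24 GB on a high-end consumer GPU")
          ```

          Under those assumptions a 70B model wants roughly 168 GB, which is one reason DC parts ship with far more (and far pricier) memory while consumer cards stay capped.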

    • kadu@scribe.disroot.org · 1 day ago (edited)

      but the something in play has little value out of the AI bubble.

      You’re delusional if you think GPUs are of little value. LLMs and fancy image generation are a bubble.

      The gargantuan computational cost of running the machine learning processing that is now required for protein folding and molecular docking is not.

      • ayyy@sh.itjust.works · 1 day ago

        Sure, but the scientists doing those kinds of workflows don’t have anywhere near the money to burn on GPUs. Even before they had all of their funding cut off for being too gay or brown or whatever crap the Nazis have come up with.

        • kadu@scribe.disroot.org · 18 hours ago

          Sure, but the scientists doing those kinds of workflows don’t have anywhere near the money to burn on GPUs

          I’m working in a lab that is purchasing a cluster with a price tag you wouldn’t believe even if I could share it, which I can’t. We are publicly funded. Scientists are buying this hardware, at this price, because the speed-up we get is tremendous.

        • bookmeat@lemmynsfw.com · 22 hours ago

          This is just a small part of the perpetual cycle of growth and contraction. Growth comes from breakthroughs and innovation. Contraction comes from misallocation of resources and the need to extract efficiency from that breakthrough and innovation.

          So right now everything is booming and growing. This will slow down, and if the technology becomes efficient enough, it will remain useful and accessible. If not, it will be discarded and another breakthrough will take its place.

      • Jhex@lemmy.world · 23 hours ago

        The gargantuan computational cost of running the machine learning processing that is now required for protein folding and molecular docking is not.

        Sure, but do you need the absolutely gargantuan capacity being built right now for that? If so, for how long, and at what cost?

        The point is not that GPUs per se are of little value… the point is: what would you do with 10,000 rockets if you only have 1,000 projects that might be able to use them? What can those projects actually pay? Can they cover the cost of the 10,000 rockets you built?