• Goodeye8@piefed.social · 1 day ago

    I’m not that concerned with the hardware limitations. Nobody is going to run a full-blown LLM on their laptop, and running one on a desktop would already require building a PC with AI in mind. What you’re going to see used locally are smaller models (something like 7B at INT8 or INT4). Factor in the efficiency of an NPU and you could get by with 16GB of memory (especially if the models run at INT4) with little extra power draw and heat. The only hardware concern would be the pace of NPU development, but just don’t be an early adopter and you’ll probably be fine.
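
    A quick back-of-the-envelope on why 16GB is plenty (my own arithmetic, not a vendor spec): weight memory is roughly parameter count times bytes per parameter, before activations and KV cache.

    ```python
    # Rough weight-memory math for a 7B-parameter model (illustrative only).
    params = 7e9
    bytes_per_param = {"FP16": 2, "INT8": 1, "INT4": 0.5}

    for fmt, nbytes in bytes_per_param.items():
        gib = params * nbytes / 2**30
        print(f"{fmt}: ~{gib:.1f} GiB for weights alone")

    # FP16: ~13.0 GiB, INT8: ~6.5 GiB, INT4: ~3.3 GiB,
    # so a 7B model at INT4 leaves lots of headroom in 16GB.
    ```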

    But this is where Dell’s point comes in. Why should the consumer care? What benefit do consumers get from running a model locally? Outside of privacy and security reasons, you’re simply going to get a better result from one of the online AI services, because you’d be using a proper model instead of the cheap one that fits on limited hardware. And even the privacy- and security-minded can just build their own AI server (maybe not today, but once hardware prices return to normal) that they run from home and expose to their laptop or smartphone, as sketched below. For consumers to want a truly local model (actually on-device, not in a selfhosting kind of way) there would have to be some problem the local model solves that the over-the-internet solution can’t. No such problem exists today, and there doesn’t seem to be a suitable one on the horizon either.
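
    For the curious, the self-hosted route really is that simple to consume from a client device. A minimal sketch, assuming an Ollama server on the home network (“homeserver.local” is a made-up hostname; 11434 is Ollama’s default port):

    ```python
    # Query a self-hosted model server from a laptop or phone app.
    # Assumes an Ollama instance at homeserver.local (hypothetical hostname).
    import requests

    resp = requests.post(
        "http://homeserver.local:11434/api/generate",
        json={
            "model": "llama3",  # whatever model the server has pulled
            "prompt": "Summarize why NPUs matter for laptops.",
            "stream": False,    # one JSON reply instead of a token stream
        },
        timeout=120,
    )
    print(resp.json()["response"])
    ```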

    Dell is keeping a foot in the door by still putting NPUs in their laptops, so if by some miracle a problem is found that on-device AI solves, they’re ready. But they realize NPUs are not something they can actually use as a selling point, because as it stands there’s no benefit to running small models locally, so NPUs solve no problem for the consumer.

    • jj4211@lemmy.world · 1 day ago

      More to the point, the casual consumer isn’t going to dig into the nitty-gritty of running models locally, and not a single major player is eager to help them do it (they all want to lock users into their datacenters and subscription opportunities).

      As for Dell keeping NPUs in their laptops: they don’t really have much of a choice if they want modern processors, since Intel and AMD are still all-in on them.

      • Goodeye8@piefed.social · 1 day ago

        Setting up a local model was specifically about people who take privacy and security seriously, because that often requires sacrificing convenience, which in this case means building a suitable server and learning how to set up your own local model. Casual consumers don’t really think about privacy, so they’ll go with the most convenient option, which is whatever service the major players provide.

        As for Dell keeping the NPUs, I forgot they’re going to be bundled with the processors anyway.

        • jj4211@lemmy.world · 1 day ago

          My general point is that discussing the intricacies of potential local AI model usage is way over the head of the people who would even in theory care about the facile “AI PC” marketing message. Since no one is making it trivial for the casual user to actually do anything with those NPUs, it’s all a moot point for this sort of marketing. And even if there were an enthusiast market that would use those embedded NPUs rather than distinct, more capable hardware, they wouldn’t be swayed or satisfied by just ‘AI PC’ or ‘Copilot+’; they’d want actual specs, not a boolean yes/no for ‘AI’.