Weight Comparison

Model                     Weight (grams)   Screen Size
LG Gram Pro 16 (2026)     1,199            16-inch
MacBook Air 15 (M4/M3)    1,510            15-inch
MacBook Pro 14 (M5/M3)    1,550-1,600      14-inch
MacBook Pro 16 (M3+)      2,140-2,200      16-inch
      • TheOakTree@lemmy.zip · 2 days ago

        I feel like the CPUs marketed as “AI capable” are a sham. For the average user, local AI is just going to feel slow compared to cloud compute, so it’s just training the average person not to bother buying AI-labelled hardware for AI.

        • Glog78@digitalcourage.social · 2 days ago

          @TheOakTree

          IMHO it’s not the speed. People are patient enough if the result is good. But let’s be honest, the context windows are damn small for handling local context.
          Try to summarize something bigger than an email or a very short article.
          Try to work with a slightly bigger codebase…

          And these “smaller” local LLMs especially have much more limited quality by default, without additional information provided.

          We also don’t wanna talk about the expected prices of DDR5 memory for modern CPUs. So even if you have an AI CPU from AMD or similar, most of those PCs won’t have 64+ GB of RAM ->

          Try a bigger context window, e.g. QWEN3:4b with 256k ctx.
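
          A back-of-the-envelope sketch makes the RAM point concrete. The Python below estimates the KV-cache footprint of a 256k context; the layer/head numbers are assumptions for a ~4B grouped-query model, not official QWEN3 specs.

          layers = 36         # assumed transformer layer count
          kv_heads = 8        # assumed grouped-query KV heads
          head_dim = 128      # assumed dimension per head
          ctx = 262_144       # 256k-token context window
          bytes_per_val = 2   # fp16

          # keys + values, per layer, per KV head, per token
          kv_bytes = 2 * layers * kv_heads * head_dim * ctx * bytes_per_val
          print(f"KV cache alone: {kv_bytes / 2**30:.0f} GiB")   # -> 36 GiB

          # fp16 weights of a ~4B-parameter model add roughly another 7.5 GiB
          print(f"weights: {4e9 * 2 / 2**30:.1f} GiB")

          On top of that come activations and the OS, which is how a “small” 4B model at 256k ctx lands in 64+ GB territory unless the runtime quantizes the cache. (For what it’s worth, runtimes such as Ollama default to a much smaller context and let you raise it yourself, e.g. via a num_ctx option.)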

          • TheOakTree@lemmy.zip · edited · 2 days ago

            Oh, certainly. The reason I focused on speed is that an idiot using a shoddy LLM may not notice its hallucinations or failures as easily as they’d notice its sluggishness.

            However, the meaningfulness of the LLM’s responses is a necessary condition, whereas speed and convenience are more of a sufficient condition (which contradicts my first statement). Either way, I don’t think the average user knows what hardware they need to leverage local AI.

            My point is that this “AI” hardware gives a bad experience and leaves a bad impression of running AI locally, because 98% of people saw “AI” in the CPU model and figured it should work. And thus, more compute is pushed to datacenters.