Weight Comparison
| Model | Weight (grams) | Screen Size |
|---|---|---|
| LG Gram Pro 16 (2026) | 1,199 | 16-inch |
| MacBook Air 15 (M4/M3) | 1,510 | 15-inch |
| MacBook Pro 14 (M5/M3) | 1,550-1,600 | 14-inch |
| MacBook Pro 16 (M3+) | 2,140-2,200 | 16-inch |
Oh, certainly. The reason I focused on speed is that an idiot using a shoddy LLM may not notice its hallucinations or failures as easily as they'd notice its sluggishness.
However, the meaningfulness of the LLM's responses is a necessary condition, whereas speed and convenience are more of a sufficient one (which contradicts my first statement). Either way, I don't think the average user knows what hardware they need to leverage local AI.
My point is that this "AI" hardware gives a bad experience and leaves a bad impression of running AI locally, because 98% of people saw "AI" in the CPU model name and figured it would just work. And thus, more compute is pushed to datacenters.
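For context, here's the rough memory math most buyers never do before assuming an "AI" laptop can run a model locally. This is a back-of-envelope sketch, not a spec: the 1.2 overhead factor (covering KV cache and runtime buffers) is my assumption, and real usage varies with context length and inference runtime.

```python
# Rough estimate of RAM/VRAM needed to run an LLM locally.
# Assumption: model weights dominate memory use; a 1.2x overhead
# factor approximates KV cache and runtime buffers.

def estimated_memory_gb(params_billion: float, bits_per_weight: int,
                        overhead: float = 1.2) -> float:
    """Approximate memory (GB) to hold the weights plus overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params, bits in [(7, 16), (7, 4), (70, 4)]:
    print(f"{params}B model @ {bits}-bit: ~{estimated_memory_gb(params, bits):.0f} GB")
# 7B @ 16-bit: ~17 GB, 7B @ 4-bit: ~4 GB, 70B @ 4-bit: ~42 GB
```

Even a quantized 7B model wants several GB of fast memory to itself, which a thin-and-light with an "AI" CPU badge and 8 GB of shared RAM handles poorly, and that's exactly the bad first impression I mean.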