Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
in other words please help us, use our AI

you never had it to begin with. Goddamn leeches.
Maybe they should look into selling AI CP since it seems to be great at generating that shit
> AI industry needs to encourage job seekers to pick up AI skills, in the same way people master Excel to make themselves more employable.
Has anyone in the last 15 years willingly learned excel? It seems like one of those things you have to learn on the job as your boomer managers insist on using it.
Excel depends on the usage. Way too many people want to use it for what it’s bad at, but technically can do, instead of using it for what it’s good at.
I’m fairly decent at using Excel, and have automated some database dependent tasks for my coworkers through it, which saves us a lot of time doing menial tasks no one actually wants to do.
I willingly learned excel in the past 15 years!
I have since moved on to open source replacements.
I did and it’s awesome. People like to shit on Excel, but there is a reason why every business on earth runs on Excel. It’s a great tool and if you really learn it, you can do great things with it.
I love excel, personally. I’m a big ol’ nerd and love putting shit in a spreadsheet.
Funny thing about “AI skills” that I’ve noticed so far is that they are actually just skills in the thing you’re trying to get AI to help with. If you’re good at that, you can often (though not always) get an effective result. Mostly because you can talk about it at a deeper level and catch mistakes the AI makes.
If you have no idea about the thing, it might look competent to you, but you just won’t be catching the mistakes.
In that context, I would call them thought amplifiers, and pretty effective at the whole “talking about something can help debug the problem, even if the other person doesn’t contribute anything of value, because you have to look at the problem differently to explain it, and that different perspective might make the solution more visible” thing, while also being able to contribute some valuable pieces.
how else are you going to perform, document, and communicate engineering calculations in a format that is simple, intuitive, flexible, and easy to iterate upon?
I did take a few courses on Excel over the last 25 years. I don’t use Excel that much, but most features will never be used by most people anyway.
Yeah, very good analogy actually…
I remember back in the day people putting stuff like ‘Microsoft Word’ under ‘skills’. Instead of thinking ‘oh good, they will be able to use Word competently’, the impression was ‘my god, they think Word is a skill worth bragging about, I’m inclined to believe they have no useful skills’.
‘Excel skills’ on a resume is just so vague. Some people put it down when they’ve just figured out they can click and put things into a table, while others can quickly roll out some complicated formula, which is at least more of a skill (I’d rather program the normal way than try to wrangle some of the abominations I’ve seen in Excel sheets).
Using an LLM is not a skill with a significant acquisition cost. To the extent that it does or does not work, it doesn’t really need learning. If anything people who overthink the ‘skill’ of writing a prompt just end up with stupid superstitions that don’t work, and when they first find out that it doesn’t work, they just grow new prompt superstitions to add to it to ‘fix’ the problem.
> ‘Microsoft Word’ under ‘skills’.
Way back in the day a bunch of people endorsed me on linkedin for a bunch of nonsense like that and I manually hid all of it lol
“Microsoft thinks it has social permission to burn the planet for profit” is all I’m hearing.
Well, they at least have investor permission… and investors are the only people they care about anyway
Probably in the Hobbes sense that they’re not actively revolting
Takeaways:
- MS is well aware AI is useless.
- Nadella admits they invested billions in something without having the slightest clue what its use case would be (“something something rEpLaCe HuMaNs”)
- Nadella is blissfully unaware of the “social” image MS already has in the eyes of the public. You don’t have our social permission to still exist as a company!
Well, you already lost that, or rather never actually had it. You all pushed a broken and incomplete product; you need to find a use for it, not us…
“We have to find a compelling use case so we can keep tragedying the commons!”
CEOs aren’t people. That’s why they lobbied to have companies recognized as people. Stop giving them a stage.
I have a use for it. Put it in the recycle bin.
How can you lose social permission that you never had in the first place?
The peasants might light their torches
“Torching” the gas turbines that sit at AI companies’ datacenters would be highly effective, especially since they are outside and only a fence protects them.
It is so dumb that they gas our environment for “AI”. It was evil in WW1 and WW2 and it is still evil today. See:
- https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis
- https://capitalbnews.org/musk-xai-memphis-black-neighborhood-pollution/
It is insane.
This guy knows how to translate billionaire dipshit speak.
Datacenters are expensive and soft targets.
Dude, buildings are pretty hard.
Yeah but it’s really easy to hurt their feelings so be mindful
not OP but I believe they’re “soft” in the sense that they don’t have moats/high electric fences/battalions of armed guards around 24/7
With a clipboard you could probably just walk in and start unplugging things
That’s… not quite true. Usually they take access quite seriously. In a multi-tenant space, every tenant’s area will be separated, and the physical cages around the machines are locked and monitored.
All the same they are designed to keep small numbers of mostly law abiding people out, not an angry mob with torches.
Challenge accepted
There’s a latency between asking for forgiveness and being demanded to stop.
It’s easier to beg for social forgiveness than it is to ask for social permission
Do something useful
What do you mean, that using ChatGPT for a recipe for eggs, sunny side up without any seasoning or toppings and burning up the electricity of a moderate household for a week with my query isn’t useful?
Allrecipes has you covered.
It’s not the query that burns through electricity like crazy, it’s training the models.
You can run a query yourself at home on a desktop computer, as long as it has enough RAM and compute to support the model you’re using (think a few high-end GPUs).
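For the curious, here’s a minimal sketch of what a local query can look like, using the llama-cpp-python library. The model file, path, and parameters below are hypothetical placeholders, not recommendations:

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes you've already downloaded a quantized GGUF model file;
# the path and model choice below are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one
)

result = llm(
    "Give me a recipe for eggs, sunny side up.",
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```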
Training a model requires a huge pile of computing power though, and the AI companies are constantly scraping the internet to ~~steal~~ find more training material.
- Denial
- Anger
- Bargaining <- They’re here
- Depression
- Acceptance
The five stages of corporate grief:
- lies
- venture capital
- marketing
- circular monetization
- private equity sale
Where do the three envelopes fit in?
Roll 2d6 on private equity sale. 7 or higher and you get to ride again with an IPO at position 6. 1-6 and you get to fill out the envelopes.
In my pocket
Correct, but needs clarification:
Depression refers to the whole economy as the bubble bursts.
Acceptance is when the government agrees to bail them out, because they’re too big and the gov is too dependent on them to let them die.

Denial: “AI will be huge and change everything!”
Anger: “noooo stop calling it slop its gonna be great!”
Bargaining: “please use AI, we spent so much money on it!”
Depression: companies losing money and dying (hopefully)
Acceptance: everyone gives up on it (hopefully)
Acceptance: It will be reduced to what it does well and priced high enough so it doesn’t compete with equivalent human output. Tons of useless hardware will flood the market; China will buy it back and make cheap video cards from the used memory.
Which seems like good progress. I feel like they were in denial not three weeks ago.
May the depression be long lasting and heartfelt in the United States of AI.
I will try to have a balanced take here:
The positives:
- there are some uses for this “AI”
- like an IDE, it can help speed up the development process, especially for menial but important tasks such as unit test coverage.
- it can be useful for rewording things into the corpo slang that will make you puke, when you need to use it.
- it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb it down, get the core concept, and go from there
The negatives:
- the positives don’t justify the environmental externalities of all these AI companies
- the positives don’t justify the PC hardware/silicon price hikes
- shoehorning this into everything is capital R retarded.
- AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
- other than a paid product like Copilot, there is simply very little commercially viable use case for all this public cloud infrastructure, aside from targeting you with more ads that you can’t block because they’re baked into the text output
Overall, I wish the AI bubble would burst already.
> menial but important tasks such as unit test coverage
This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works and not how it is supposed to work. Granted lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage, but at least human beings have ability to reflect on what the hell they are doing at some point.
> Granted lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage
I’d be interested in what you mean by this. Aren’t all unit tests just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs, so you unit test those inputs and matching outputs, and add coverage for edge cases because it’s cheap to do with unit tests. These tests “freeze the results”, or rather lock them in, so you know that piece of code always works as expected.
> LLMs will generate the tests based on how the code works and not how it is supposed to work.
You can tell it to generate based on how it’s supposed to work you know
You could have it write unit tests as black box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
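To make the distinction concrete, here’s a minimal sketch in Python (the function and its bug are hypothetical, not from anyone’s real code): a “freeze the results” test pins whatever the code currently returns, bug included, while a spec-based test encodes what the code is supposed to do.

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical buggy implementation: int() truncates the result,
    # silently dropping the cents on every sale.
    return int(price * (100 - percent) / 100)

class FrozenResultTest(unittest.TestCase):
    def test_discount(self):
        # "Freeze the results": the expected value was copied from the
        # function's current output, so the bug passes. A test generated
        # from the implementation alone tends to look like this.
        self.assertEqual(apply_discount(19.99, 10), 17)

class SpecBasedTest(unittest.TestCase):
    def test_discount(self):
        # Spec-based: the expected value comes from the requirement
        # ("10% off 19.99 is 17.99"), so this test fails and exposes
        # the bug. Writing it requires knowing the spec, not just the
        # function signature.
        self.assertAlmostEqual(apply_discount(19.99, 10), 17.99, places=2)

if __name__ == "__main__":
    unittest.main()
```

The frozen test passes against the buggy code; the spec-based one fails, which is the whole point of having it.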
I think machine learning has vast potential in this area, specifically things like running iterative tests in a laboratory, or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don’t need to speak and understand bleep bloop and can tell it what I want in plain language. But beyond that, LLMs seem useless to me outside of fancy search uses. It should be the initial processing layer that figures out what type of actual AI (ML) to utilize to accomplish the task. I just want an automator that I can direct in plain language; why is that not what’s happening? I know that I don’t know enough to have an opinion, but I do anyway!
They f’d up with electricity rates and hardware price hikes. They were getting away with it by not inconveniencing enough laymen.
So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs are deserving of a balanced take.
This is how the Philosophy of Balance works: We should strive…
- for balance within ourselves
- for balance with those around us
- and ultimately, for balance with Life and the Universe at large
But here’s the thing: LLMs and the technocratic elite funding them are a net negative to humanity and the world at large. Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.
Pick a side.
You are presupposing that your opinion about LLMs is absolutely correct and then of course you arrive at your predetermined conclusion.
What about the free LLM models available out of China and other places that democratize LLMs?
> Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.
Thanks for not being dramatic, lol.
Your comment is fair. I try to follow my own philosophy, so I picked a side and stand for it. I feel strongly about it, so that’s why I may use hyperbole at times.
Yet I understand it’s not everybody’s opinion, so I try to respect those people even when I don’t necessarily respect their positions. It’s a tough line to draw sometimes.
> it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb it down, get the core concept, and go from there
I agree with this point so much. I’m probably a real thicko, and being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.
I hate the impact of AI use, and I hope that we will see greater efficiencies in the near future so there is less resource consumption.













