Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
In other words: please help us, use our AI.
I will try to have a balanced take here:
The positives:
The negatives:
Overall, I wish the AI bubble would burst already.
I could not have written my own take any closer to yours.
Only thing I’d add is using it to screw around with personal photos. ChatGPT is cleaning up some 80s pics of my wife that were atrocious. I have rudimentary PhotoShop skills, but we’d never have these clean pics without AI. OTOH, I’d gladly drop that ability to reclaim all the negatives.
This is one of the cases where AI is worse. LLMs will generate the tests based on how the code currently works, not how it is supposed to work. Granted, lots of mediocre engineers also use the "freeze the results" method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.
I'd be interested in what you mean by this. Aren't all unit tests just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs. You unit-test these inputs and matching outputs, and add coverage for edge cases because it's cheap to do with unit tests. These "freeze the results", or rather lock them in, so you know that piece of code always works as expected; it's "frozen/locked in".
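A minimal sketch of the input/output locking described above (the `slugify` function and its behavior are hypothetical, just for illustration): for known inputs, the test pins the expected outputs, including cheap edge cases, so any change in behavior trips an assertion.

```python
def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify() -> None:
    # Lock in expected outputs for normal inputs...
    assert slugify("Hello World") == "hello-world"
    # ...and for edge cases, since they are cheap to add.
    assert slugify("") == ""
    assert slugify("  spaced   out  ") == "spaced-out"

test_slugify()
```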
You can tell it to generate tests based on how it's supposed to work, you know.
You could have it write unit tests as black box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
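The black-box approach described above might look something like this sketch (the `sort_unique` function and its contract are hypothetical): instead of replaying outputs copied from the current implementation, the test asserts properties the contract implies, using only the function's signature and documented behavior.

```python
def sort_unique(xs: list[int]) -> list[int]:
    """Contract: return the distinct elements of xs in ascending order."""
    return sorted(set(xs))  # stand-in implementation under test

def check_contract(xs: list[int]) -> None:
    out = sort_unique(xs)
    assert out == sorted(out)          # ascending order
    assert len(out) == len(set(out))   # no duplicates
    assert set(out) == set(xs)         # same elements, nothing lost or invented

for case in ([3, 1, 2, 1], [], [5, 5, 5], [-1, 0, 1]):
    check_contract(case)
```

The point of the contrast: these assertions would survive a rewrite of `sort_unique`, whereas a frozen-output test would break (or worse, silently bless a bug that was present when the outputs were recorded).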
I think machine learning has vast potential in this area, specifically things like running iterative tests in a laboratory, or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don't need to speak and understand bleep bloop and can tell it what I want in plain language. But after that, the LLM seems useless to me outside of fancy search uses. It should be the initial processing layer that figures out what type of actual AI (ML) to utilize to accomplish the task. I just want an automator that I can direct in plain language; why is that not what's happening? I know that I don't know enough to have an opinion, but I do anyway!
They f’d up with electricity rates and hardware price hikes. They were getting away with it by not inconveniencing enough laymen.
Very few laymen have noticed or give a shit about RAM prices. My young friend across the street and I are likely the only people on the block who know what RAM does, let alone are able to build a PC.
Business purchasing is where we might see some backlash soon. I’ve bought all the IT goods, hardware and software, for my last two companies, and I’d be screaming.
Boss: What the hell? Weren’t we getting these laptops for $1,200 last year?!
So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs are deserving of a balanced take.
This is how the Philosophy of Balance works: We should strive…
But here’s the thing: LLMs and the technocratic elite funding them are a net negative to humanity and the world at large. Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.
Pick a side.
You are presupposing that your opinion about LLMs is absolutely correct and then of course you arrive at your predetermined conclusion.
What about the free LLM models available out of China and other places that democratize LLMs?
Thanks for not being dramatic, lol.
Your comment is fair. I try to follow my own philosophy, so I picked a side and stand for it. I feel strongly about it, so that’s why I may use hyperbole at times.
Yet I understand it’s not everybody’s opinion, so I try to respect those people even when I don’t necessarily respect their positions. It’s a tough line to draw sometimes.
I agree with this point so much. I’m probably a real thicko, and being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.
I hate the impact of AI use, and I hope that we will see greater efficiencies in the near future so there is less resource consumption.