The nature of these discussions and the fact that MS described AGI as “an LLM service that reaches $100 B per year in revenue” is evidence that much of the marketing around “AI” is basically fraudulent.
Clearly LLMs specifically and ML models in general have many powerful use cases, but that doesn’t mean the people involved aren’t running a scheme to profit off the hype.
Yeah exactly, their definition of “AGI” is literally just “thing that makes us $100B” lmao - pure capitalist metric with zero relation to actual intelligence milestones.
I bet this rhetoric goes so hard that the moment we reach an economically useful "AGI", it will be all hands on deck to stop it from going ASI.
They specifically want AI that can follow orders without thinking for itself.
LLMs cannot think or draw conclusions; they're just guessing based on the content of old Reddit posts.
AI can be way more than just a single LLM, though.
AGI and ASI still mean human-level and beyond-human-level AI.
Whether those concepts are achievable in our lifetime is an entirely different matter.