Microsoft says its Agent Mode in Excel has an accuracy rate of 57.2 percent on SpreadsheetBench, a benchmark for evaluating an AI model’s ability to edit real-world spreadsheets.
They probably view that as a statistic worth bragging about. It’s not.
If Excel got calculations right 57.2% of the time, it would be completely worthless.
I asked Copilot to look through my spreadsheet and find how many instances of a category occurred. I was curious to see if it was any good. It gave me two different numbers. Neither was correct.
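For what it’s worth, counting occurrences of a category is exactly the kind of task that doesn’t need an LLM: a `COUNTIF` formula, or a few lines of script over a CSV export, gives the same deterministic answer every time. A minimal sketch, assuming a hypothetical export with a “Category” column (the column name and data here are invented for illustration):

```python
import csv
import io
from collections import Counter

def count_categories(csv_text, column="Category"):
    """Count how many times each value appears in one column of a CSV export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

# Hypothetical spreadsheet export, for illustration only.
data = """Category,Amount
Food,12.50
Travel,80.00
Food,7.25
Office,3.99
Food,1.00
"""

counts = count_categories(data)
print(counts["Food"])  # → 3
```

Unlike a chat prompt, running this twice on the same file cannot produce two different numbers.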
I wonder where that “human accuracy” statistic is coming from. Plenty of people don’t know how to read and interpret data, much less use Excel in the first place. There’s a difference between a quarter of the workforce not being able to complete a task and a specialized AI not being able to complete it. Additionally, this is how you run into the KPI-as-a-goal-rather-than-a-proxy problem. AI will never understand context that isn’t directly provided in the workbook. If you introduced a new drink at your restaurant in 2020, AI will tell you that the introduction of the drink caused a 100% decrease in foot traffic, since there’s no line item for “global pandemic.” I’m not saying AI will never get there, but people using this version of AI instead of actual analysis don’t care about the facts; they just want an answer, and they want that answer to be cheap.
As I’ve said many times, though not in this topic: AI is a tool to be used, and using it is a skill that needs to be learned.
For your pandemic example, that’s something you would need to provide to the AI as context. The joke about “prompt engineer” soon being a job actually has merit, in that you want people who know how to use their tools best. It’s a process of constant iteration: learning to give the AI a specific enough instruction set to get the results you want or need.
That’s not at all what this means. In this instance, 70% is basically “human level.” For AI to already be at 57% means it’s approaching the same level people reach in Excel.
It generates 42.8% bullshit.
https://defector.com/it-took-many-years-and-billions-of-dollars-but-microsoft-finally-invented-a-calculator-that-is-wrong-sometimes
Copilot: Putting the “Artificial” in Artificial Intelligence.
The tech behind LLMs could have just been Clippy and everyone would be happy.
Did you read the next sentence? Humans only get like 72% right. It’s not far off at all.
Depending on where you go to school, 70% is passing while 50% is not. While “not far off,” one is a C and the other an F.
Just keep regenerating data until it’s something the stockholders like. Doesn’t matter if it’s BS. They’re already accustomed to that.
So it achieved the actual proficiency of a middle manager…
Decades ago. The company that replaced its CEO with an LLM thrives.
Nice. Basically a coin flip.
Slightly better than Vegas. Unfortunately, plenty of people are okay with Vegas odds.
Not enough accuracy to be useful. Not enough bullshit for politics.