But 99% accuracy is better than any human alive, so while LLMs may never substitute for critical systems themselves, they might just replace all the people around those systems.
Like, we won’t want an AI as the failsafe for a nuclear plant. But we might prefer an AI as the “person” in charge of that failsafe.
Current generations aren’t even close to that rate, and it’s unclear whether it’s economical, or even possible, to fix the deep structural issues of current-gen LLMs.
My professional experience with LLMs is that they don’t even approach 20% accuracy for a field as ridiculously structured as programming.
They’re just helpful enough to not be a hindrance.
LLMs will undoubtedly improve as we build more systems around them.
The question is: will they ever be reliable enough to trust? You can’t have a critical system that’s only 99% reliable.
Topical:
https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/
Not to mention, plenty of humans are 99% accurate.