There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”

I propose a fourth: AI is now as good as it’s going to get, and that’s neither as good nor as bad as its fans and haters think, and you’re still not going to get an A on your report.

You see, now that people have been using AI for everything and anything, they’re beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.

My take is that LLMs can speed up some work, like paraphrasing, but all the time saved gets diverted into verifying the output.

  • t3rmit3@beehaw.org · 21 days ago

    Another article conflating LLMs and AI.

    AI is unfortunately supercharging lots of systems, especially in the police/intelligence space. Surveillance driven by AI is skyrocketing in both capability and prevalence.

    xAI and OpenAI, being LLM companies, aren’t seeing good ROI. Palantir and their ilk are another beast altogether.

    I almost wonder if this misstated “underperformance” of “AI” is intended to make people less fearful of it being weaponized against them.

    After all, the AI balloon is deflating, right?