The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.
LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.


Didn’t even mention the ankylosaurus… :/
Might as well just go ahead and delete this post