using a “detector” is how some (not all, but a lot of) generative AIs are trained; the setup is called a generative adversarial network (GAN):
have one network that’s a “generator” (the student) and one that’s a “discriminator” (the teacher, i.e. the detector), and pit them against one another until the discriminator can no longer tell real from generated better than a coin flip; that adversarial game is the “training”. (LLMs themselves are mostly trained differently, via next-word prediction, but the adversarial idea applies to anything with a detector.)
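a minimal sketch of that student/teacher loop, in pure Python with made-up numbers: the “generator” is a single scalar parameter and the “discriminator” is a logistic classifier, not any real framework’s API:

```python
# toy "student vs. teacher" (GAN-style) loop: a hypothetical sketch, not a
# real training setup. the "real" data is just the number 5.0; the generator
# is one parameter g; the discriminator is D(x) = sigmoid(w*x + b).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

real = 5.0       # the "real" sample the teacher should accept
g = 0.0          # the student's current fake sample
w, b = 0.1, 0.0  # the teacher's parameters
lr = 0.05        # made-up learning rate

for step in range(500):
    d_real = sigmoid(w * real + b)  # teacher's score on real data
    d_fake = sigmoid(w * g + b)     # teacher's score on the fake

    # teacher step: push D(real) toward 1 and D(fake) toward 0
    # (gradient of  log D(real) + log(1 - D(fake))  w.r.t. w and b)
    w += lr * ((1 - d_real) * real - d_fake * g)
    b += lr * ((1 - d_real) - d_fake)

    # student step: nudge g in whatever direction raises D(g),
    # i.e. toward fooling the teacher (non-saturating generator loss)
    g += lr * (1 - sigmoid(w * g + b)) * w

print(round(g, 2))  # g has drifted away from 0, toward the real data
```

each round the teacher learns to separate real from fake, then the student moves toward whatever the teacher currently accepts; run long enough, the fake ends up close to the real data and the teacher’s verdict stops being informative.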
one can do very funny things with this tech!
for anyone who wants to see this process in action, there’s a great example linked further down the thread.
afaik, there actually aren’t any reliable tools for this.
the highest accuracy I’ve seen reported for “AI detectors” is somewhere around 60%, barely better than the 50% of a random guess…
edit: that figure is for text/LLM detectors, to be fair.
kinda doubt image detectors are much better though… happy to hear otherwise if there are better ones!
The problem is that any publicly available AI detector can be used to train an AI to fool it.
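to make that concrete, here’s a toy of the attack. the “detector” below is a made-up stand-in (it flags text whose sentence lengths are too uniform, a crude burstiness-style heuristic), not any real product; the attacker never sees its internals, it only queries the verdict and greedily keeps edits that lower the score:

```python
# hypothetical illustration: black-box access to a public detector is enough
# to search for an input that passes. nothing here models a real detector.

def ai_score(sentence_lengths):
    """Made-up detector: low variance in sentence length -> high AI score."""
    n = len(sentence_lengths)
    mean = sum(sentence_lengths) / n
    var = sum((x - mean) ** 2 for x in sentence_lengths) / n
    return 1.0 / (1.0 + var)  # 1.0 = perfectly uniform = "looks AI"

def is_flagged(sentence_lengths, threshold=0.2):
    return ai_score(sentence_lengths) >= threshold

def evade(sentence_lengths, max_rounds=1000):
    """Greedy black-box search: tweak one sentence length at a time, keep
    any change that lowers the detector's score, stop once we pass."""
    lengths = list(sentence_lengths)
    for _ in range(max_rounds):
        if not is_flagged(lengths):
            return lengths
        best = ai_score(lengths)
        for i in range(len(lengths)):
            for delta in (-1, +1):
                trial = lengths[:]
                trial[i] = max(1, trial[i] + delta)
                if ai_score(trial) < best:
                    lengths, best = trial, ai_score(trial)
    return lengths

draft = [12, 12, 12, 12, 12, 12]  # very uniform sentences: flagged
edited = evade(draft)
print(is_flagged(draft), is_flagged(edited))  # True False
```

the attacker needed no knowledge of how the detector works; publishing the detector is publishing the loss function for beating it.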
exactly!
Benn Jordan: Breaking The Creepy AI in Police Cameras
https://en.wikipedia.org/wiki/Generative_adversarial_network