Your eyes are fine. It’s AI that can’t be trusted.
Is this going to kill Onlyfans?
Or does the market exist precisely because Onlyfans is about personal creators, and thus it’s more meaningful than porn?
But when short AI videos become so good you can’t tell if you’re being catfished, will it feel the same?
To be fair, if anyone was going to kill Onlyfans, it was Onlyfans. They haven’t yet managed it.
Videos need to be cryptographically signed and verifiable. All news outlets should do this.
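A minimal sketch of what that could look like, assuming the Python `cryptography` package; the keypair, the filename `report.mp4`, and all key-distribution details are hypothetical stand-ins:

```python
# Sketch: an outlet signs the SHA-256 of a published video file,
# and anyone holding the outlet's public key can verify it later.
# Key management and distribution are hand-waved here.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def file_digest(path: str) -> bytes:
    """SHA-256 of the video file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


# The outlet signs the digest once, at publication time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("report.mp4"))

# A reader checks that the file they downloaded is bit-for-bit what was signed.
try:
    public_key.verify(signature, file_digest("report.mp4"))
    print("signature valid")
except InvalidSignature:
    print("file altered or signature forged")
```

Note this only shows the file wasn’t altered after the outlet signed it; it says nothing about whether the footage was real to begin with.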
Nothing is true
Everything is permitted.

I’m just holding out minor hope that people finally get with the program and realize the value of reputable news organizations and the plain old grapevine again. Leave the internet for nerds.
Videos now basically have the same weight as words, no longer a “smoking gun”. Video basically becomes like eyewitness testimony, well… slightly better, since it protects against misremembering, or against witnesses with an inadequate lexicon who can’t clearly articulate what they saw. The process will become: get the witness to testify that they had possession of the camera, that they were recording at the time of the incident, and that they believe the video being presented in court is genuine and has not been altered; then it’s basically a video version of their eyewitness testimony. The credibility of the video is now tied to the witness/camera-person’s own credibility and should not be evaluated as independent evidence. The jury should treat the video as the witness’s own words, meaning they should factor in the possibility that the witness faked it.
A video you see on the internet is now just as good as a bunch of text: both equally unreliable.
We live in a post-truth world now.
I’m just thinking: people thought Americans were faking the moon landing; we’ve always had conspiracy theorists. AI just spins them out faster and sloppier. Let’s go back to humans lying to humans, rather than a computer, taught by humans to lie and advertise, doing the same thing.
And that’s perfect, that’s the world that made all the due process and similar things evolve.
There’s never been such a thing as independent evidence. The medium has always mattered. And when people started believing that was no longer true, we almost got ourselves a new planetary fascist empire. I hope we’re still in time to stop that.
A hacker may have replaced the authentic video on the phone. The edit would have to be unnoticeable to the eyewitness who shot it.
If there’s an edit that alters a detail that doesn’t matter to the witness, it probably isn’t important. And that kind of replacement is hard to do at scale without getting caught.
Maybe the NYT’s headline writers’ eyes weren’t that great to begin with?
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.
We already declared that with the advent of photoshop. I don’t want to downplay the possibility of serious harm resulting from misinformation carried through this medium. People can be dumb. I do want to say the sky isn’t falling. As the slop tsunami hits us, we are not required to stand still, throw our hands in the air, and take it. We will develop tools and sensibilities that help us not get duped by model mud. We will find ways and institutions to sieve for the nuggets of human content. Not all at once, but we will get there.
This is fear mongering masquerading as balanced reporting. And it doesn’t even touch on the precarious financial situation the whole so-called AI bubble economy is in.
The real danger is the failing trust in traditional news sources and the attack on the truth from the right.
People have been believing what they want regardless of whether they see it for a long time; AI will fuel that, but it is not the root of the problem.
Traditional news sources became aggregators of actual news sources and open-source intel, and have made “embellishing” the norm. Stock/reused visuals, speculating minutes into events, etc.
It is increasingly faked. The right just pretends that means the lies that feel “good” are the truth.
To no longer be able to trust video evidence is a big deal. Sure the sky isn’t falling, but this is a massive step beyond what Photoshop enabled, and a major powerup for disinformation, which was already winning.
To no longer be able to trust video evidence is a big deal.
Except that you can still trust video evidence if you examine the video carefully … for now …
All those tech CEOs meeting up with Trump makes me think this is a major reason for pouring money into this technology. Any time Trump says “fake news”, he can just say it is AI.
You couldn’t “trust” video before Sora et al. We had all these sightings of aliens and flying saucers - which conveniently stopped having an impact when everybody started carrying cameras around.
There will be a need to verify authenticity and my prediction is that need will be met.
What you end up stuck doing is deciding to trust particular sources. This makes it a lot harder to establish a shared reality
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.
We already declared that with the advent of photoshop.
I think that this is “video” as in “moving images”. Photoshop isn’t a fantastic tool for fabricating video (though, given enough time and expense, I suppose it’d be theoretically possible to do it frame by frame). In the past, the limitations of software made it much harder — not impossible, as Hollywood creates imaginary worlds, but much harder, more expensive, and requiring more expertise — to falsify a video of someone than a single still image of them.
I don’t think that this is the “end of truth”. There was a world before photography and audio recordings. We had ways of dealing with that. Like, we’d have reputable organizations whose role it was to send someone to various events to attest to them, and place their reputation at stake. We can, if need be, return to that.
And it may very well be that we can create new forms of recording that are more difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.
I can imagine an Internet-connected camera — as on a cell phone — that sends a hash of the image to a trusted server and obtains a timestamped, cryptographic signature. That doesn’t stop before-the-fact forgeries, but it does deal with things that are fabricated after the fact, stuff like this.
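A toy version of that flow, assuming the Python `cryptography` package; the in-process “server”, its key handling, and the frame bytes are all hypothetical stand-ins for illustration:

```python
# Sketch of hash-then-timestamp, in the spirit of the comment above
# (and of RFC 3161-style timestamping). The "server" is simulated
# in-process; in reality its key, availability, and honesty are the
# whole trust problem.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

server_key = Ed25519PrivateKey.generate()   # held by the timestamp service
server_pub = server_key.public_key()        # published for verifiers


def timestamp(image_bytes: bytes) -> tuple[bytes, int, bytes]:
    """Camera sends a hash; server signs (hash || time) and returns a token."""
    digest = hashlib.sha256(image_bytes).digest()
    ts = int(time.time())
    token = server_key.sign(digest + ts.to_bytes(8, "big"))
    return digest, ts, token


def verify(image_bytes: bytes, ts: int, token: bytes) -> bool:
    """Anyone can check the image existed, unmodified, at time ts."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        server_pub.verify(token, digest + ts.to_bytes(8, "big"))
        return True
    except InvalidSignature:
        return False


frame = b"...raw image bytes from the camera..."
digest, ts, token = timestamp(frame)
assert verify(frame, ts, token)                     # untouched file checks out
assert not verify(frame + b"tampered", ts, token)   # any later edit fails
```

The token only proves the bits existed unmodified at that time; whether the camera was pointed at something real is exactly the before-the-fact problem mentioned above.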
Meh we’re not there yet. But the day is coming.
“The Running Man” predicted the future!
🤓 Is this marketing from AI companies? 🦋
Absolutely.