TL;DR: The new Reimage feature on the Google Pixel 9 phones is remarkably good at AI-powered image manipulation while being very easy to use. This is bad.
This is bad
Some serious old-man-yelling-at-cloud energy
It’ll sink in for you when photographic evidence is no longer admissible in court
Photoshop has existed for a while now. It was obviously only going to get better and easier to use. Move with the times, old-timer.
Photoshop requires time and talent to make a believable image.
This requires neither.
But it has been possible, for more than a decade
You said “but” as if it invalidated what I said, when it’s a true statement but a non sequitur.
You aren’t wrong, and I don’t think that changes what I said either.
Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned that computer-forensics experts may not be able to detect these fakes, whereas Photoshop edits have always been detectable.
As the cat and mouse game continues, we ask ourselves, is water still wet?
Just wait: image manipulation will happen at image creation, and there will be no “original”. Proving an image is unmanipulated will be a landmark legal precedent and will set the standard for introducing photographic evidence. It is already a problem for audio recordings and will eventually be one for video.
We literally lived for thousands of years without photos. And we’ve lived for 30 years with Photoshop.
Except it was way harder to do.
Now call me an “ableist, technophobic Luddite” who wants to ruin other people’s chance of making GTA-like VRMMORPGs from a single line of prompt!
You know that’s not possible right?
If I, as an anti-AI person, said that, I’d be called out for posting FUD…
What are you talking about lol
I think this is a good thing.
Pictures and video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have already been possible for serious, well-resourced actors.
Putting it in everyone’s hands spreads awareness that pictures aren’t evidence, lowering their impact over time. It would be great if fakes weren’t possible for anyone, but that isn’t and hasn’t been reality for a while.
While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now. And at that point it will be too late to demand that anything be done about it.
AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, if it’s even possible.
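A minimal sketch of what a provenance tag could look like, assuming a hypothetical per-device signing key (real efforts in this space, such as C2PA/Content Credentials, sign capture metadata in much more elaborate ways). The key name and functions here are illustrative, not any real API. It also shows why the “even if only part is used” requirement is the hard part: a signature over the raw bytes breaks on *any* change, including a benign crop, and says nothing about an excerpt.

```python
import hashlib
import hmac

# Hypothetical device key -- in a real design this would live in secure
# hardware, not in source code.
DEVICE_KEY = b"example-device-key"

def sign_image(image_bytes: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the image content."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Check whether the image bytes still match their provenance tag."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

original = b"\x89PNG...raw image bytes..."  # stand-in for a real file
tag = sign_image(original)

print(verify_image(original, tag))         # True: untouched image verifies
print(verify_image(original + b"x", tag))  # False: any edit breaks the tag
```

Surviving crops, re-encodes, and partial reuse would instead require robust watermarking embedded in the pixels themselves, which is an open research problem and much easier to strip than a signature is to forge.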
You already can’t. You can’t close Pandora’s box.
Adding labels just creates a false sense of security.