Sora 2 makes convincing fake crime footage
bsky.app | 18 points by m-hodges 6 hours ago
A few years ago I gave a keynote speech at Amazon’s annual design conference called Post Truth Design. This was a year before ChatGPT launched and generative AI blew up, and I was talking about exactly this kind of situation, where hyperrealistic but fake media and content supersede and conflict with reality. Most importantly, loads of research tells us that when people are presented with objective truth versus comfortable or even more entertaining fictions, the lies will always win. Because of this, there is a deep humanistic responsibility on us both to find ways to deal with this new future and to actively work against these psychological tendencies to believe comfortable or entertaining lies. Unsurprisingly, things didn’t get simpler in the intervening years, but I strongly believe we can and will find a balance. Even if it takes a lot of bodies to pay for it.
Suddenly there's a new generation of memes in the form of realistic video clips. We are accelerating to the point at which our whole media environments can be generated at will at trivial cost. It feels like the body politic is walking along an epistemic cliff.
We live in an attention economy where beliefs are shaped by fleeting stimuli. Two decades of digital conditioning have habituated people to embrace the narratives they desire, regardless of contrary evidence. What becomes of our discourse, knowledge, and ideology in a world where evidence itself can be manufactured with zero friction and at scale?
It’s surprising how quickly this has become so realistic. It can be unsettling and even dangerous, since at a glance it’s hard to tell what’s real. The cuffs don’t look quite right, but I can see how someone could easily be fooled.
Anyone skilled with AI, or even just regular CGI, has been able to do this convincingly for years. What's changing is that it's becoming better, easier, and more widely available. This is a good thing. It's significantly increasing every individual's potential for creative expression, and it's simultaneously making the general public aware that you can't just trust random media without knowing the source. Not before, and not now.
You can try regulating, banning, and censoring models, adding silly invisible watermarks, and requiring gen-AI content to be labelled as such, and live with a completely false sense of security. You'd just be making it easier to deceive people.
Someone once told me that a person who doesn't believe anything will fall for everything. So if we don't know what to believe, do we all join our own conspiracy communities? Like on a grand scale?
> do we all join our own conspiracy communities?
No, we apply appropriate skepticism by considering context, history, motivations, and prior knowledge of both the source and the persons or entities involved. The uncomfortable reality that no news source was ever worthy of our full trust isn't new; it predates the rise of AI and even digital editing. So, to me, it's a net positive that at least now many more people are aware of it.
AI-generated media, like the slightly more labor-intensive manual digital manipulation that preceded it (e.g. Photoshop), is almost quaintly mild by comparison, because at least it leaves digital artifacts that can be fairly easily detected, disproven, or otherwise countered. Far more subtle but no less deceptive techniques, like reordering interview questions in editing or selectively excerpting answers, are essentially undetectable, and they have been widely used to skew reporting at mainstream national news outlets since at least the 1970s.
About 20 years ago I was professionally involved behind the scenes in the creation of mainstream news content at a national level. Seeing how the sausage was made was pretty shocking. Subtle systemic bias was constant and impacted almost everything in ways that would be hard for non-insiders to detect (like motivated editorial curation or pre-aligned source selection). Blatantly overt bias was slightly less common but hardly infrequent. Seeing it happen first-hand disabused me of the notion that there were ever "reliable sources of record" which could be trusted. While it's true that the better outlets tended to be mostly correct and mostly complete on many topics, even the very best were still heavily impacted by internal and external partisan influences - and, of course, bias tended to be exerted on the things that mattered.
This particular video had a deep well of training data.
And this is good because it will accelerate a lot of things like legislation.
Not convincing. He's still in the store, the "I didn't do nutting" line seems really racist, and the way it escalated from 0 to 100 in half a second...