Sora 2

openai.com

843 points by skilled a day ago


Video: https://www.youtube.com/watch?v=gzneGhpXwjU

System card: https://openai.com/index/sora-2-system-card/

the_duke - a day ago

I haven't seen comments regarding a big factor here:

It seems like OpenAI is trying to turn Sora into a social network - TikTok but AI.

The webapp is heavily geared towards consumption, with a feed as the entry point, liking and commenting for posts, and user profiles having a prominent role.

The creation aspect seems about as important as on Instagram, TikTok etc - easily available, but not the primary focus.

Generated videos are very short, with minimal controls. The only selectable option is picking between landscape and portrait mode.

There is no mention or attempt to move towards long form videos, storylines, advanced editing/controls/etc, like others in this space (eg Google Flow).

Seems like they want to turn this into AITok.

Edit: regarding accurate physics ... check out these two videos below...

To be fair, Veo fails miserably with those prompts also.

https://sora.chatgpt.com/p/s_68dc32c7ddb081919e0f38d8e006163...

https://sora.chatgpt.com/p/s_68dc3339c26881918e45f61d9312e95...

Veo:

https://veo-balldrop.wasmer.app/ballroll.mp4

https://veo-balldrop.wasmer.app/balldrop.mp4

Couldn't help but mock them a little, here is a bit of fun... the prompt adherence is pretty good, at least.

NOTE: there are plenty of quite impressive videos being posted, and a lot of horrible ones also.
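
For what it's worth, the physics those ball clips are testing is one line of kinematics: a ball dropped from rest falls d = ½gt². A toy reference (plain Python; no claim about what either model actually does internally) to eyeball a generated drop against:

```python
# Toy free-fall reference: the ground truth a generated "ball drop"
# clip should roughly match. Ignores drag.
G = 9.81  # m/s^2

def drop_height(t: float) -> float:
    """Distance fallen (m) after t seconds, starting from rest."""
    return 0.5 * G * t * t

def time_to_fall(h: float) -> float:
    """Seconds to fall h metres from rest."""
    return (2 * h / G) ** 0.5

# A ball dropped from 2 m should land in ~0.64 s -- any clip where it
# floats for a second or more has visibly wrong physics.
print(round(time_to_fall(2.0), 2))  # -> 0.64
```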

davidmurdoch - 17 hours ago

I just asked GPT 5 to generate an image of a person. I then asked it to change the color of their shirt. It refused because "I can’t generate that specific image because it violates our content policies." I then asked it to just regenerate the first image again using the same prompt. It replied "I know this has been frustrating. You’ve been really clear about what you want, and it feels like I’m blocking you for no reason. What’s happening on my side is that the image tool I was using to make the pictures you liked has been disabled, so even if I write the prompt exactly the way you want, I can’t actually send it off to generate a new image right now."

If I start a new chat it works.

I'm a Plus subscriber and didn't hit rate limits.

This video gen tool will probably be even more useless.

mscbuck - 19 hours ago

I can't help but see these technologies and think of Jeff Goldblum in Jurassic Park.

My boss sends me complete AI Workslop made with these tools and he goes "Look how wild this is! This is the future" or sends me a youtube video with less than a thousand views of a guy who created UGC with Telegram and point and click tools.

I don't think he ever takes a beat, looks at the end product, and asks himself, "Who is this for? Who even wants this?" And that's aside from the fact that I still think there are so many obvious tells with this content that let you know right away that it's AI.

simonw - a day ago

The main lesson I learned from the March ChatGPT image generation launch - which signed up 100 million new users in the first week - is that people love being able to generate images of their friends and family (and pets).

I expect the "cameo" feature is an attempt at capturing that viral magic a second time.

saguntum - a day ago

I wonder if they're going to license this to brands for heavily personalized advertisement. Imagine being able to see videos of yourself wearing clothes you're buying online before you actually place the order, instead of viewing them on a model.

If they got the generation "live" enough, imagine walking past a mirror in a department store and seeing yourself in different clothes.

Wild times.

btbuildem - 19 hours ago

They're really playing loose with copyright: you have to actively opt out for them to not use your IP in the generated videos [1]

Tangentially related: it's wild to me that people heading such consequential projects have so little life experience. It's all exuberance and shiny things, zero consideration of the impacts and consequences. First Meta with "Vibes", now this.

1: https://www.gurufocus.com/news/3124829/openai-plans-to-launc...

rushingcreek - a day ago

The most interesting thing by far is the ability to include video clips of people and products as a part of the prompt and then create a realistic video with that metadata. On the technical side, I'm guessing they've just trained the model to conditionally generate videos based on predetermined characters -- it's likely more of a data innovation than anything architectural. However, as a user, the feature is very cool and will likely make Sora 2 very useful commercially.

However, I still don't see how OpenAI beats Google in video generation. As this was likely a data innovation, Google can replicate and improve this with their ownership of YouTube. I'd be surprised if they didn't already have something like this internally.

samuelfekete - 21 hours ago

This is a step towards a constant stream of hyper-personalised AI generated content optimised for max dopamine.

kveykva - a day ago

The example prompt "intense anime battle between a boy with a sword made of blue fire and an evil demon demon" is super clearly just replicating Blue Exorcist https://en.m.wikipedia.org/wiki/Blue_Exorcist

cogman10 - 20 hours ago

I've seen a lot of "this is impressive" but I'm not really seeing it. This looks to suffer from all the same continuity problems other AI videos suffer from.

What am I looking at that's super technically impressive here? The clips look nice, but from one cut to the next there's a lot of obvious differences (usually in the background, sometimes in the foreground).

TechSquidTV - 4 hours ago

Not related to Sora but, I have been looking for / hoping for an AI powered motion tracking solver. I've used Blender and Mocha in AE and both still require quite a bit of manual intervention, even in very simple scenes.

I saw some promise with the Segment Anything model but I haven't seen anyone turn it into a motion solver yet. In fact I'm not sure it can do that at all. It may be that we need to use an AI algorithm to translate the video into a simpler rendition (colored dots representing the original motion) that can then be tracked more traditionally.
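
Purely as a sketch of that last idea (segment first, then track simple primitives): reduce each frame to a dot, then track it with ordinary argmax/centroid logic. The toy below fakes the segmentation step with a synthetic bright dot; a real pipeline would presumably get per-frame masks from SAM and track their centroids the same way.

```python
import numpy as np

def track_bright_dot(frames):
    """Return the (row, col) of the brightest pixel in each frame --
    a stand-in for tracking a segmented object's centroid."""
    return [tuple(int(i) for i in np.unravel_index(np.argmax(f), f.shape))
            for f in frames]

# Synthetic 5-frame clip: a single dot moving diagonally across 32x32 frames.
frames = []
for t in range(5):
    f = np.zeros((32, 32))
    f[4 + 3 * t, 6 + 2 * t] = 1.0
    frames.append(f)

path = track_bright_dot(frames)
print(path)  # [(4, 6), (7, 8), (10, 10), (13, 12), (16, 14)]
```

The recovered path is exactly the motion track a solver would hand to Blender or AE; the hard, unsolved part is the segmentation quality, not the tracking arithmetic.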

baalimago - 12 hours ago

They can't even be consistent within their own launch video. Consistency is by far the biggest issue with generative AI. How can a professional studio work with scenes that have continuity errors in every single shot? And if it's not targeting professionals, who is it for?

TheAceOfHearts - a day ago

Really impressive engineering work. The videos have gotten good enough that they can grab your attention and trigger a strong uncanny valley feeling.

I think OpenAI is actually doing a great job at easing people into these new technologies. It's not such a huge leap in capabilities that it's shocking, and it helps people acclimate for what's coming. This version is still limited but you can tell that in another generation or two it's going to break through some major capabilities threshold.

To give a comparison: in the LLM model space, the big capabilities threshold event for me came with the release of Gemini 2.5 Pro. The models before that were good in various ways, but that was the first model that felt truly magical.

From a creative perspective, it would be ideal if you could first generate a fixed set of assets, locations, and objects, which are then combined and used to bring multiple scenes to life while providing stronger continuity guarantees.

etrvic - 8 hours ago

In light of some comments and videos here, I’d like to morbidly announce that I can no longer distinguish between AI videos and real ones. However, I’ll take this as an opportunity to move from short-form content to long-form, since it seems that space hasn’t yet been hijacked by AI.

gorgoiler - a day ago

Impressively high level of continuity. The only errors I could really call out are:

1/ 0m23s: The moon polo players begin with the red coat rider putting on a pair of gloves, but they are not wearing gloves in the left-vs-right charge-down.

2/ 1m05s: The dragon flies up the coast with the cliffs on one side, but then the close-up has the direction of flight reversed. Also, the person speaking seemingly has their back to the direction of flight. (And a stripy instead of plain shirt and a harness that wasn’t visible before.)

3/ 1m45s: The ducks aren't taking the right hand corner into the straightaway. They are heading into the wall.

I do wonder what the workflow will be for fixing any more challenging continuity errors.

willahmad - a day ago

I wonder about the implications of this tech.

The state of things with doomscrolling was already bad; add to it layoffs and replacing people with AI (just admit it: interns are struggling to compete with Claude Code, Cursor, and Codex).

What's coming next? A bunch of people with lots of free time, watching nonsense AI-generated content?

I am genuinely curious, because I was and still am excited about AI, until I saw how doomscrolling is getting worse.

adidoit - a day ago

Impressive tech. Don't love the likely societal implications.

SeanAnderson - a day ago

Sheeeeeeeeeeesh. That was so impressive. I had to go back to the start and confirm it said "Everything you're about to see is Sora 2" when I saw Sam do that intro. I thought there was a prologue that was native film before getting to the generated content.

minimaxir - a day ago

OpenAI apparently assumes that the primary users of Sora 2/the Sora app will be Gen Z, especially given the demo examples shown in the livestream. If they are trying to pull users from TikTok with this, it won't work: there's more nuance to Gen Z interests than being quirky and random, and if they did pull users from TikTok, ByteDance could easily include their own image/video generators.

Sora 2 itself as a video model doesn't seem better than Veo 3/Kling 2.5/Wan 2.2, and the primary touted feature of having a consistent character can be sufficiently emulated in those models with an input image.

haolez - a day ago

One use that occurred to me is that fans will be able to "fix" some movies that dropped the ball.

For example, I saw a lot of people criticizing "Wish" (2023, Disney) for being a good movie in the first half, and totally dropping the ball in the last half. I haven't seen it yet, but I'm wondering if fans will be able to evolve the source material in the future to get the best possible version of it.

Maybe we will even get a good closure for Lost (2004)!

(I'm ignoring copyright aspects, of course, because those are too boring :D)

simonw - a day ago

Anyone with access able to confirm if you can start this with a still image and a prompt?

The recent Google Veo 3 paper "Video models are zero-shot learners and reasoners" made a fascinating argument for video generation models as multi-purpose computer vision tools in the same way that LLMs are multi-purpose NLP tools. https://video-zero-shot.github.io/

It includes a bunch of interesting prompting examples in the appendix, it would be interesting to see how those work against Sora 2.

I wrote some notes on that paper here: https://simonwillison.net/2025/Sep/27/video-models-are-zero-...

mdrzn - a day ago

If this is anything near the demos they have released, this seems incredibly good at physics. Wow. Can't wait to try the new app.

rd - a day ago

https://apps.apple.com/us/app/sora-by-openai/id6744034028

App link

edit: CBN80W for an invite code

stan_kirdey - a day ago

That could totally power the next generation of green-screen tech. Generative actors may not find a favorable response from audiences; but SFX, decor, extras, environments that react to actors' actions - amazing potential.

seydor - a day ago

Since AGI is cancelled, at least we have shopping and endless video.

jablongo - a day ago

Sam Altman has made (for me) encouraging statements in the past about short-form video like TikTok being the best current example of misaligned AI. While this release references policies to combat "Doomscrolling and RL-sloptimization", it's curious that OpenAI would devote resources to building a social app based on AI generated short form video, which seems to be a core problem in our world. IMO you can't tweak the TikTok/YouTube shorts format and make it a societal good all of a sudden, especially with exclusively AI content. This is a disturbing development for Altman's leadership, and sort of explains what happened in 2023 when they tried to remove him... -> says one thing, does the opposite.

qoez - a day ago

I know the comments here are gonna be negative but I just find this so sick and awesome. It feels like it's finally close to the potential we knew was possible a few years ago. It feels like a Pixar moment, when CG tech showed a new realm of what was possible with Toy Story.

minimaxir - 17 hours ago

This Sora 2 generation of Cyberpunk 2077 gameplay managed to reproduce it extremely closely, which is baffling: https://x.com/elder_plinius/status/1973124528680345871

> How the FUCK does Sora 2 have such a perfect memory of this Cyberpunk side mission that it knows the map location, biome/terrain, vehicle design, voices, and even the name of the gang you're fighting for, all without being prompted for any of those specifics??

> Sora basically got two details wrong, which is that the Basilisk tank doesn't have wheels (it hovers) and Panam is inside the tank rather than on the turret. I suppose there's a fair amount of video tutorials for this mission scattered around the internet, but still––it's a SIDE mission!

Everyone already assumed that Sora was trained on YouTube, but "generate gameplay of Cyberpunk 2077 with the Basilisk Tank and Panam" would have generated incoherent slop in most other image/video models, not verbatim gameplay footage that is consistent.

For reference, this is what you get when you give the same prompt to Veo 3 Fast (trained by the company that owns YouTube): https://x.com/minimaxir/status/1973192357559542169

mavamaarten - 7 hours ago

Ugh. While technically extremely impressive, I'm so tired of the slop. Every AI content generation tool should have a watermarking system in place, and sites like YouTube should have a way to filter out AI generated content from search results with the press of a button.

Ever since the launch of Veo, there's already so much AI slop videos on YouTube that it becomes hard to find real videos sometimes.

I'm tired, boss.

echelon - a day ago

I'm a software engineer and hobbyist actor/director. My friends are in the film industry and are in IATSE and SAG-AFTRA. I've made photons-on-glass films for decades, and I frequently film stuff with my friends for festivals.

I love this AI video technology.

Here are some of the films my friends and I have been making with AI. These are not "prompted", but instead use a lot of hand animation, rotoscoping, and human voice acting in addition to AI assistance:

https://www.youtube.com/watch?v=H4NFXGMuwpY

https://www.youtube.com/watch?v=tAAiiKteM-U

https://www.youtube.com/watch?v=7x7IZkHiGD8

https://www.youtube.com/watch?v=Tii9uF0nAx4

Here are films from other industry folks. One of them writes for a TV show you probably watch:

https://www.youtube.com/watch?v=FAQWRBCt_5E

https://www.youtube.com/watch?v=t_SgA6ymPuc

https://www.youtube.com/watch?v=OCZC6XmEmK0

I see several incredibly good things happening with this tech:

- More people being able to visually articulate themselves, including "lay" people who typically do not use editing software.

- Creative talent at the bottom rungs being able to reach high with their ambition and pitch grand ideas. With enough effort, they don't even need studio capital anymore. (Think about the tens of thousands of students that go to film school that never get to direct their dream film. That was a lot of us!)

- Smaller studios can start to compete with big studios. A ten person studio in France can now make a well-crafted animation that has more heart and soul than recent by-the-formula Pixar films. It's going to start looking like indie games. Silksong and Undertale and Stardew Valley, but for movies, shows, and shorts. Makoto Shinkai did this once by himself with "Voices of a Distant Star", but it hasn't been oft repeated. Now that is becoming possible.

You can't just "prompt" this stuff. It takes work. (Each of the shorts above took days of effort - something you probably wouldn't know unless you're in the trenches trying to use the tech!)

For people that know how to do a little VFX and editing, and that know the basic rules of storytelling, these tools are remarkable assets that complement an existing skill set. But every shot, every location, every scene is still work. And you have to weave that all into a compelling story with good hooks and visuals. It's multi-layered and complex. Not unlike code.

And another code analogy: think of these models like Claude Code for the creative. An exoskeleton, but not the core driving engineer or vision that draws it all together. You can't prompt a code base, and similarly, you can't prompt a movie. At least not anytime soon.

msp26 - a day ago

The voice quality in the generated vids is surprisingly awful.

polishdude20 - a day ago

There's something about the faces that looks completely off to me. I think it's the way the mouth and whole face moves when they talk.

jack_riminton - 9 hours ago

Let's take a step back and realise how incredible this is (I'm sure there are plenty of other `ackshually` comments).

Can it do Will Smith eating spaghetti? (I can't get access in UK)

neom - a day ago

Going to be an amazing source of training data; wait till they get it to real time and people are leaving their video cameras open for AR features. OpenAI is about to have a lot of current real-world image data, never mind the sentiment analysis.

tminima - 10 hours ago

I feel that this is a data collection activity (and thus, more advanced future models and use cases) disguised as social media. People will provide feedback in the form of clicks/views on AI-generated content (a better version of RLHF) on unverified/subjective domains.

The biggest problem OpenAI has is not having an immense data backbone like Meta/Google/MSFT. I think this is a step in that direction -- create a data moat which in turn will help them make better models.

Aeolun - 19 hours ago

Clicking a link on the OpenAI dashboard and being greeted with a full page of scantily clad women was certainly not what I expected to see when opening Sora..

neilv - a day ago

> And we're introducing Cameo, giving you the power to step into any world or scene, and letting your friends cast you in theirs.

How much are they (and providers of similar tools) going to be able to keep anyone from putting anyone else in a video, shown doing and saying whatever the tool user wants?

Will some only protect politicians and celebrities? Will the less-famous/less-powerful of us be harassed, defamed, exploited, scammed, etc.?

causal - a day ago

IDK if the site is being hugged to death but I can only load the first video. Even in just one viewing there were noticeable artifacts, so my impression is that Veo is still in the lead here.

darkwater - a day ago

Famous last words:

> A lot of problems with other apps stem from the monetization model incentivizing decisions that are at odds with user wellbeing. Transparently, our only current plan is to eventually give users the option to pay some amount to generate an extra video if there’s too much demand relative to available compute. As the app evolves, we will openly communicate any changes in our approach here, while continuing to keep user wellbeing as our main goal.

jug - 19 hours ago

I feel so bad for the climate now.

sys32768 - a day ago

I welcome a world where gullible people begin to doubt everything they see.

nycdatasci - 21 hours ago

What makes TikTok fun is seeing actual people do crazy stuff. Sora 2 could synthesize someone hitting five full-court shots in a row, but it wouldn’t be inspiring or engaging. How will this be different than music-generating AI like Suno, which doesn't have widespread adoption despite incredible capabilities?

dagaci - a day ago

Amazing. iOS only, with region restrictions in 2025.

Gnarl - 9 hours ago

Amazing that even Sora2 can't make Sam Altman not look like a w@nker.

jsnell - a day ago

Doing this as a social app somehow feels really gross, and I can't quite put to words why.

Like, it should be preferable to keep all the slop in the same trough. But it's like they can't come up with even one legitimate use case, and so the best product they can build around the technology is to try to create an addictive loop of consuming nothing but auto-generated "empty-calories" content.

clgeoio - a day ago

> Concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are top of mind—here is what we are doing about it.

> We are giving users the tools and optionality to be in control of what they see on the feed. Using OpenAI's existing large language models, we have developed a new class of recommender algorithms that can be instructed through natural language. We also have built-in mechanisms to periodically poll users on their wellbeing and proactively give them the option to adjust their feed.

So, nothing? I can see this being generated and then reposted to TikTok, Meta, etc for likes and engagement.
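
To be fair to the quoted pitch, a "recommender instructed through natural language" is at least a concrete mechanism: re-rank the feed against the user's stated preference. A deliberately naive sketch (keyword overlap standing in for the LLM scoring step; all names here are made up for illustration):

```python
def rerank(feed, instruction):
    """Boost items whose tags overlap the user's instruction words.
    Keyword matching stands in for an LLM judging relevance."""
    words = set(instruction.lower().split())

    def score(item):
        return len(words & set(item["tags"]))

    # sorted() is stable, so equally scored items keep their feed order.
    return sorted(feed, key=score, reverse=True)

feed = [
    {"title": "prank compilation", "tags": ["prank", "comedy"]},
    {"title": "woodworking joinery", "tags": ["woodworking", "craft"]},
    {"title": "cat videos", "tags": ["cats", "comedy"]},
]
for item in rerank(feed, "show me craft and woodworking content"):
    print(item["title"])
```

Whether any of that changes the incentive structure of an engagement feed is the commenter's actual question, and the sketch doesn't answer it.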

elpakal - 16 hours ago

Wish I was cool enough to have an invite code. Oh well, as an iOS build nerd next best thing I can do is inspect their ipa I guess. Interesting that they have some pretty big duplicate mp4s nobody caught in NoFaceDesignSystemBundle: cameo_onboarding_0.mp4 & create_ifu_1.mp4 | 7.3MB and cameo_onboarding_2.mp4 & create_ifu_0.mp4 | 5.2MB.

Also I find it neat that they still include an iOSMath bundle (in chatGPT too), makes me wonder how good their models really are at math.

FullMetul - 20 hours ago

Maybe by Sora 3 they will have scene consistency. Gah, it's so jarring to me that the pool the racing ducks are in just randomly changes. My brain can tell it's not consistent scene to scene and it feels so janky.

modeless - a day ago

I can see it being interesting to create wacky fake videos of your friends for a week or two, but why would people still be using this next year?

I watch videos for two reasons. To see real things, or to consume interesting stories. These videos are not real, and the storytelling is still very limited.

robotsquidward - a day ago

It's insanely impressive. At the same time, all these videos look terrible to me. I still get extreme uncanny valley, and it literally makes me sick to my stomach.

taikahessu - 5 hours ago

Entering code 123456 reveals Sora 2 is only available in US/Canada region.

joshdavham - a day ago

Will something like Sora 2 actually be used in Hollywood productions? If so, what types of scenes?

I imagine it won’t necessarily be used in long scenes with subtle body language, etc involved. But maybe it’ll be used in other types of scenes?

outlore - a day ago

in a computer graphics course i took, we looked at how popular film stories were tied to the technical achievements of their era. for example, toy story was a story born from the newfound ability to render plastics effectively. similarly, the sora video seems to showcase a particular set of slow-moving scenes (or, when fast, disappearing into fluid water and clouds) which seem characteristic of this technology at the current moment in time

anshumankmr - 17 hours ago

I think someone called it many months back (and in fact I felt it too) that the feed for Sora seemed very much like a social media app. Then the only thing left was to make it into vertical scrolling with videos and voila, you have your TikTok clone.

intended - a day ago

That dragon flew backwards at one point, didn't it?

Impressive that THAT was one of the issues to find, given where we were at the start of the year.

alberth - 21 hours ago

Why do you have to download an app to use Sora 2 (vs it being available on the web like ChatGPT)?

fariszr - a day ago

Did they make human voices sound robotic on purpose? Is that some kind of AI fingerprinting? It's way too obvious.

ascorbic - a day ago

This is super cool and fun and will almost certainly be really bad for society in loads of different ways. From the descriptions of all the guardrails they're needing to put in it seems like they know it too.

jp57 - a day ago

Prediction: we'll see at least one Sora-generated commercial at the Super Bowl this year.

ElijahLynn - a day ago

"download the Sora app"

click

takes me to the iPhone app store...

nopinsight - 19 hours ago

OpenAI launches Sora 2 in a consumer app to collect RL feedback en masse and improve their world models further.

Their ultimate goal is physical AGI, although it wouldn’t hurt them if the social network takes off as well.

ashu1461 - a day ago

This is a good comparison thread of the capabilities of Sora vs Sora 2:

https://x.com/mattshumer_/status/1973085321928515783

vahid4m - a day ago

While the quality of what I'm seeing is very nice for AI-generated content (I still can't believe it), the fact that they are mostly showing short clips and not a long, connected, consistent video makes it less impressive.

mempko - a day ago

It's obvious there is no way OpenAI can keep videos generated by this within their ecosystem. Everything will be fake, nothing real. We are going to have to change the way we interact with video. While it's obviously possible to fake videos today, it takes work by the creator and takes skill. Now it will take no skill so the obvious consequence of this is we can't believe anything we see.

The worst part is we are already seeing bad actors saying 'I didn't say that' or 'I didn't do that, it was a deep fake'. Now you will be able to say anything in real life and use AI for plausible deniability.

sumeruchat - a day ago

Shameless plug but I am creating a startup in this space called cleanvideo.cc to tackle some of the issues that will come with fake news videos. https://cleanvideo.cc

doikor - a day ago

Does this survive panning the camera away for 5 to 10 seconds and then back? Or basic conversation scene with the camera cutting between being located behind either speaker once every few seconds?

Basically proper working persistence of the scene.

gvv - a day ago

Any idea if or when it will be available in EU? https://apps.apple.com/us/app/sora-by-openai/id6744034028

edit: as per usual it's not yet...

Havoc - 20 hours ago

That sure seems to be getting close to something usable for movies...kinda.

Sam looks weirdly like Cillian Murphy in Oppenheimer in some shots. I wonder whether there was dataset bleedover from that.

squidsoup - a day ago

A little tangential to this announcement, but is anyone aware of any clean/ethical models for AI video or image generation (i.e. not trained on copyright work?) that are available publicly?

natiman1000 - 18 hours ago

The fact that no one is talking about how it compares against Veo tells me everything I need to know. This page is now filled with bots!

Lucasoato - 19 hours ago

> this app is not available in your country or region

whimsicalism - a day ago

I find this sort of innovation far less interesting or exciting than the text & speech work, but it seems to be a primary driver of adoption for the median person in a way that text capability simply is not.

tptacek - a day ago

If I was on the OpenAI marketing team I maybe wouldn't have included the phrase "and letting your friends cast you in their [videos]". It's a little chilling.

NoahZuniga - a day ago

The TTS is horrible compared to Google's Veo 3.

VagabundoP - a day ago

I hate this vacant technology tbh. Every video feels like distilled advert mindless slop.

There's still something off about the movements, faces and eyes. Gollum features.

LarsDu88 - a day ago

I really hope they have more granular APIs around this.

One use case I'm really excited about is simply making animated sprites and rotational transformations of artwork using these videogen models, but unlike with local open models, they never seem to expose things like depth estimation output heads, aspect ratio alteration, or other things that would actually make these useful tools beyond shortform content generation.

mempko - a day ago

I predict a resurgence in live performances: live music and live theater. People are going to get tired of video content when everything is fake.

- a day ago
[deleted]
alkonaut - a day ago

How far out are we from doing this in real time? What’s the processing/rendering time per frame?
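
A rough back-of-the-envelope on the real-time question, with hypothetical numbers (nothing here is a measured figure): what matters is the ratio of render time to clip length.

```python
def realtime_factor(clip_seconds: float, render_seconds: float) -> float:
    """How many times slower than real time the generation runs.
    Real-time use needs this factor to reach <= 1."""
    return render_seconds / clip_seconds

# Made-up illustration: if a 10 s clip takes 3 minutes of wall-clock
# to render, generation is running 18x slower than real time.
print(realtime_factor(10, 180))  # -> 18.0
```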

kaicianflone - a day ago

Why is the video player so laggy?

qgin - a day ago

VFX artists are definitely feeling the AGI / considering other career paths today.

d--b - a day ago

Ok that's technically really impressive, and probably totally unusable in a real creativity context beyond stupid ads and politically-motivated deepfakes.

bergheim - a day ago

We are just heading for Lovely All TM.

I kid.

Art should require effort. And by that I mean effort on the part of the artist. Not environmental damage. I am SO tired of non-tech friends trying to SWOON me with some song they made in 0.3 seconds. I tell them, sarcastically, that I am indeed very impressed with their endeavors.

I know many people will disagree with me here, but I would be heart broken if it turned out someone like Nick Cave was AI generated.

And of course this goes into a philosophical debate. What does it matter if it was generated by AI?

And that's where we are heading. But for me, effort is required, and where we are going means close to zero effort required. Someone here said that just raises the bar for good movies. I say it mostly means we will get 1 billion movies. Most are "free" to produce and displace the 0.0001% of human-made/good stuff. I dunno. Whoever has the PR machine on point gets the blockbuster. Not weird, since the studio tried 300,000,000 of them at the same time.

Who the fuck wants that?

I feel like that ship in Wall-E. Let's invest in slurpies.

Anyway; AI is here and all of that, we are all embracing it. Will be interesting to see how all this ends once the fallout lands.

Sorry for a comment that feels all over the place; on the tram :)

colonial - a day ago

Cool - now let's see how much it costs in compute to generate a single clip. (Also, notice how no individual scene is longer than a handful of seconds?)

Josh5 - 21 hours ago

Everyone has the widest eyes in these Sora videos.

beders - a day ago

Can I finally redo the Star Wars sequels with this? :)

barbarr - a day ago

Instagram reels are gonna get crazy

dvngnt_ - a day ago

After using Wan with ComfyUI, I'm uninterested in closed platforms. They lack the amount of control, even if the quality might be better.

baby - 18 hours ago

No android app right?

- a day ago
[deleted]
andybak - a day ago

I've got used to immediately checking availability. In this case - iPhone app is US + Canada only and the website is invite only.

Going back to sleep. Wake me up when it's available to me.

_ZeD_ - 14 hours ago

Sora 2: Frato

IncreasePosts - a day ago

It's fitting that they host the video on Youtube, since that is where all of their training data came from.

dcreater - 19 hours ago

Matrix here we come!

amelius - 21 hours ago

Nicely cherry-picked.

aaroninsf - a day ago

Someone who doesn't follow the moving edge would be forgiven for being confused by the dismissive criticism dominating this thread so far.

It's not that I disagree with the criticism; it's rather that when you live on the moving edge it's easy to lose track of the fact that things like this are miraculous and I know not a single person who thought we would get results "even" like this, this quickly.

This is a forum frequented by people making a living on the edge; I get it. But still, remember to enjoy a little that you are living in a time of miracles. I hope we have leave to enjoy that.

DetroitThrow - a day ago

Just from the examples, which I assume are cherry-picked, it seems like they're still behind Google when it comes to video generation; the physics and the stylized versions of these shots seem not great. Veo 3 was such a huge leap and is still ahead of many of the other large AI labs.

carrozo - a day ago

Sora 2: Sloppy Seconds

boh - a day ago

This is the kind of thing people get excited about for the first couple of months and then barely use it going forward. It's amazing how quickly the novelty of this amazing technology wears off. You realize how necessary meaning/identity/narrative is to media and how empty it gets (regardless of the output) when those elements are missing.

ezomode - 21 hours ago

full-on productisation effort -> no AGI in sight

wltr - 14 hours ago

From watching the video I get the impression that these guys just want to appear cool, and the product looks like that too. It's meant to appear very cool, for people who won't ever use it, apparently. Same impression I got from watching that promo with Jony Ive. Beautiful, and don't you dare think it through.

fersarr - 21 hours ago

Only iPhone...

FrustratedMonky - 17 hours ago

Yeah, we've "plateaued" all right.

mrcino - a day ago

So, this is the AI Slop generator for the AI SlipSlop that Altman has announced lately.

Brave new internet, where humans are not needed for any "social" media anymore: AI will generate slop for bots, without any human interaction, in an endless cycle.

umrashrf - 20 hours ago

hey @simoncion looks like they are doing this for self-promotion that's against the site's guidelines

bamboozled - 19 hours ago

Soon, you won't even have to do anything to post a video of yourself doing something "interesting" on social media. What a time to be alive.

There will surely be large swathes of people who just lie about what they're doing and use AI to make it seem like they're skateboarding, or skiing, or whatever, at a pro or semi-pro level, and have a lot of people watch it.

ambicapter - a day ago

AI Sam Altman is terrifying, holy shit. Squarely in uncanny valley for me.

egeres - a day ago

I wonder how this will affect the large cinema production companies (Disney, WB, Universal, Sony, Paramount, 20th Century...). The global film market was estimated at $100B in 2023. If the production cost of high-FX movies like Avengers: Infinity War drops from $300M to just $10K in a couple of years, will companies like Disney restrain themselves to releasing just a few epic movies per year? Or will we be flooded with tons of slop? If this kind of AI content keeps getting better, how will movies sustain our attention and feel 'special'? Will people not care whether an actor is AI or real?

bgwalter - a day ago

What is the target market for this? The videos are not good enough for YouTube. They are unrealistic, nauseating and dorky. Already now any YouTube video that contains a hint of "AI" attracts hundreds of scathing comments. People do not want this.

Let me guess, the ultimate market will be teenagers "creating" a Skibidi Toilet and cheap TikTok propaganda videos which promote Gazan ocean front properties.

rvz - a day ago

12,000+ "AI startups" have been obliterated.

LocalH - 14 hours ago

We're cooked.

apetresc - a day ago

If anyone is feeling generous with one of their four invite codes, I'd really appreciate it. I'm at adrian@apetre.sc.

carabiner - a day ago

CEO of Loopt makes a cameo at 1:28 in the youtube vid.

drcongo - a day ago

The AI generated Sam Altman doesn't look even vaguely human.

dyauspitr - a day ago

How did they generate the videos with Sam Altman? Did they just provide a picture of his face and then use him in their prompts?

GaggiX - a day ago

The model's quality is incredible, but more tools are needed to take advantage of its capabilities. That is kinda the magic of open models.

thebiglebrewski - a day ago

Can this be used to make hyper-realistic video games, or is it not real-time yet?

dolebirchwood - 21 hours ago

This makes me less excited about the future of video, not more.

It's technically impressive, but all so very soulless.

When everything fake feels real, will everything real feel fake?

taytus - a day ago

Honest question: What problem does this solve?

2OEH8eoCRo0 - a day ago

Can it generate an analog clock displaying a given time?

outside1234 - 14 hours ago

This is going to be a disaster. We are never going to be able to trust a video again and in short order propagandists are going to be using this to generate god knows what.

ionwake - a day ago

I think HN is too political. This tech is clearly amazing, and it's great they shipped it; there should be more props, even if it's a billion-dollar company.

groos - 20 hours ago

What is the point? Who wants to watch these videos?

gainda - a day ago

impressive engineering that's hard to see as a net good for humanity.

it doesn't spark optimism or joy about the future of engaging with the internet & content which was already at a low point.

old is gold, even more so

CSMastermind - a day ago

Anyone have an invite they want to share with me lol.

dragonwriter - a day ago

“With Sora 2, we are jumping straight to what we think may be the GPT‑3.5 moment for video.”

I think feeling like you need to use that in marketing copy is a pretty good clue in itself both that it's not, and that you don't believe it is so much as desperately wish it were.

basisword - a day ago

Tens of billions in funding and they've just built a modern version of JibJab[1]. Can't wait to start receiving this in reply-all family emails.

[1] https://youtu.be/z8Q-sRdV7SY?si=NjuyzL1zzq6IWPAe

bovermyer - a day ago

"Thou shalt not create a machine in the likeness of a human mind."

deng - a day ago

As usual: impressive until you look closely. Just freeze the frame and you see all the typical slop errors: pretty much any kind of writing is a garbled mess (look at the camera at the beginning). The horn of the unicorn sits on the bridle. The buttons on Sam's circus uniform hover in the air. There are candleholders with candles somehow both inside and on top. The miniature instruments often make no sense. The conductor has four fingers on one hand and five on the other. The cheers of the audience are basically brown noise. Needless to say, if you freeze the audience, hands are literally all over the place. Of course, everything conveniently has a ton of motion blur so you cannot see any detail.

I know, I know. Most people don't care. How exciting.

yahoozoo - 20 hours ago

Sam still pretending they’re close to AGI in the trailer lmao

unethical_ban - a day ago

I just had a thought: (spoilers Expanse and Hyperion and Fire Upon the Deep)

Multiple sci-fi-fantasy tales have been written about technology getting so out of control, either through its own doing or by abuse by a malevolent controller, that society must sever itself from that technology very intentionally and permanently.

I think the idea of AGI and transhumanism is that moment for society. I think it's hard to put the genie back in the bottle because multiple adversarial powers are racing to be more powerful than the rest, but maybe the best thing for society would be if every tensor chip disintegrated the moment they came into existence.

I don't see how society is better when everyone can run their own gooner simulation and share it with videos made of their high school classmates. Or how we'll benefit from being unable to trust any photo or video we see without trusting who sends it to you, and even then doubting its veracity. Not being able to hear your spouse's voice on the phone without checking the post-quantum digital signature of their transmission for authenticity.

Society is heading to a less stable, less certain moment than any point in its history, and it is happening within our lifetime.

sudohalt - a day ago

Now videos will be generated on the fly based on your preferences. You will never put your phone down; it will detect when you're sad or happy and generate videos accordingly.

dweekly - a day ago

So a social network that's 100% your friends doing silly AI things?

I feel like this is the ultimate extension of "it feels like my feed is just the artificial version of what's happening with my friends and doesn't really tell me anything about how they're actually faring."

m3kw9 - a day ago

I'm eagerly awaiting whatever unexpected social problems crop up from this.

beernet - a day ago

Overall, it appears rather underwhelming. Video generation still has a long way to go. Also, launching this as a social app seems like yet another desperate attempt to productize and monetize their tech, but this is the position big VC money forces you into.

S0und - a day ago

I find it comical that OpenAI, with all the power of ChatGPT, is unable to release an app for both iOS and Android at the same time. Wow, good marketing for Codex.

dwa3592 - a day ago

I don't know if it's just me or if other people are feeling it as well. I don't enjoy videos anymore (unless it's live sports). I don't enjoy reading on my monitor anymore; I have been going back to physical books more often. I am in my early thirties.

The point is that the Sora 2 demo videos seemed impressive, but I just didn't feel any real excitement. I am not sure who this is really helping.

MangoToupe - a day ago

Interesting that they're going with a "copyright opt-out": https://www.reuters.com/technology/openais-new-sora-video-ge...

I guess copyright is pretty much dead now that the economy relies on violating it. Too bad those of us not invested in AI still won't be able to freely trade data as we please...

marcofloriano - a day ago

Every AI video demonstration is always about funny stuff and fancy situations. We never see videos on art, history, literature, poetry, religion (imagine building a video about the moment Jesus was born)... ducks in a race!? Come on...

So much visual power, yet so little soul power. We are dying.

ChrisArchitect - a day ago

More discussion: https://news.ycombinator.com/item?id=45428122

pton_xd - a day ago

Someone remind me the benefits of mass produced fake videos again?

iLoveOncall - a day ago

Show me a coherent video that lasts more than 5 seconds and was generated with the model and maybe I'll start to care.

mclightning - a day ago

It is very underwhelming. It seems like a step backward. Scam Altman should be replaced before he runs the company into bankruptcy.

tonyabracadabra - 19 hours ago

If Sora 2 is aiming for AI‑Tok, ScaryStories Live is the jump-scare cousin: real‑time POV horror from a photo + a sentence. No film school, no GPU farm—just “upload face, pick fear level, go.” It’s less cinema, more haunted mirror, and it ships in seconds. scarystories.live