Sora Beta Testing Opens Early, Sora Video AI Technology Stuns Hollywood and Sparks Industry Transformation

Posted in AI Insights by baoshi.rao
[Introduction] Sora's second wave of videos has just been released, and the stable consistency and realistic simulation are jaw-dropping. In Hollywood, one director has already shelved his $800 million studio expansion plan.

    Just yesterday, Sora unveiled another groundbreaking "masterpiece"!

    The two Sora creators, Bill Peebles and Tim Brooks, both shared brand-new images. (And maybe showed off some bromance along the way?) "A red panda and a toucan are best friends, taking a stroll through Santorini during the blue hour." Peebles remarked that this scene was strikingly similar to the image of two people walking into the OpenAI office.

    a red panda and a toucan are best friends taking a stroll through santorini during the blue hour

    The coherence of this video is simply astonishing: in every frame, the images of the red panda and the toucan maintain perfect consistency.

Of course, upon closer inspection, there are still quite a few flaws. For example, the red panda's legs sometimes intertwine, making the scene look more like a cartoon than a realistic depiction of the real world. Overall, though, the quality of these videos is enough to let our imaginations run wild about the future of AI-generated video.

    Some netizens immediately created dubbed versions, and with the fitting voiceovers, the videos became even more immersive!

    No wonder Hollywood director Tyler Perry openly admitted that after watching videos produced by Sora, he directly shelved his $800 million studio expansion plan!

    The second batch of videos released by the two creators this time is even more realistic and mind-blowing. Netizens are continuously amazed: How does Sora manage to improve with each iteration?

    This video depicts "a scuba diver discovering a hidden futuristic shipwreck, with cybernetic marine life and advanced alien technology."

    a scuba diver discovers a hidden futuristic shipwreck, with cybernetic marine life and advanced alien technology

    The diver, underwater scenery, shipwreck hull, and internal alien technology are all rendered with remarkable realism. Of course, this scene isn't 100% perfect. Some immediately noticed flaws—

    For example, the diver could swim without flapping their fins and didn't exhale any bubbles.

    A Hollywood film producer criticized it, saying: The human movements look soulless, somewhat like a zombie.

    But for less discerning viewers, the quality of this 20-second video was sufficient. Someone remarked with satisfaction: It looks like a high-resolution version of Subnautica.

    In the past, such scenes undoubtedly required massive budgets. Now, even the most challenging and expensive scenes in Hollywood blockbusters can be replaced by Sora-generated videos. The idea of AI disrupting the film industry is no longer just talk!

This little white dragon, with its pearlescent silver scales, icy blue eyes, ivory horns, and misty breath, has also left netizens exclaiming: utterly breathtaking!

Close-up of a majestic white dragon with pearlescent, silver-edged scales, icy blue eyes, elegant ivory horns, and misty breath. Focus on detailed facial features and textured scales, set against a softly blurred background.

    Its anthropomorphic features are well-executed, with watery eyes that convey deeply moving emotions, as if it could understand human speech.

Compared to the dragons in Game of Thrones, this little white dragon isn't far behind.

Some netizens have expressed their fondness for this little white dragon, even dubbing voices for it. And so, a dubbing competition began.

    Real-world scenarios present the most difficult test for Sora.

    In the following video titled 'A man BASE jumping over tropical Hawaii waters, with his pet macaw flying alongside him,' Sora once again delivered an impressive performance. The scene is so realistic that someone directly asked: 'Is this real footage or AI-generated?'

    With the addition of sound, it's like this—how many people would realize this isn't an actual scene?

    If we had to nitpick, the lighting on the macaw isn't quite right; it looks like a separately animated element rather than natural lighting conditions.

The motion of the BASE jumper isn't accurately generated, and the parachute is too small.

Below is a glass turtle, its cracks repaired with the kintsugi technique, walking on black sand at sunset.

    The consistency of the scene is perfect, and the physics of the sand are fully rendered.

    However, there's a slight flaw: the turtle seems to have only three legs? 😂

Some also noticed a 'brilliant detail': the uneven dunes might be designed so the turtle leaves no tracks after crossing them?

In contrast, the glass turtle generated by Midjourney has its own unique flavor.

    A ship in an exquisite papercraft world, with marine animals leaping out of the sea.

    in a beautifully rendered papercraft world, a steamboat travels across a vast ocean with wispy clouds in the sky. vast grassy hills lie in the distant background, and some sealife is visible near the papercraft ocean's surface

In a dark neon rainforest, fantastical flora and fauna shine brightly.

a dark neon rainforest aglow with fantastical fauna and animals

However, some also nitpicked this video: "This is the worst Sora demo I've seen; for some reason it generated a vector-animation style."

A cat wearing glasses appears in a 1960s spy movie.

cat in glasses in a 1960s spy movie

At dusk, a giant, translucent jellyfish floats gracefully through a deserted cityscape, captured on 35mm film in a surreal scene.

A meticulously crafted diorama depicts a serene scene from Edo-period Japan, with traditional wooden architecture and a lone samurai in intricate armor walking slowly through the town.

A small chubby Pug dog wearing goggles is sitting on a stool next to an old motorcycle.

    Upon closer inspection, you can see the dog's belly rising and falling with each breath, appearing very lifelike.

    a small chubby Pug dog in goggles is sitting on a stool next to an old motorcycle

A brown Border Collie wearing sunglasses is skateboarding.

A brown and white Border Collie stands on a skateboard, wearing sunglasses.

    The scary part is, someone commented below claiming that the brown Border Collie is their dog and that they shot the video.

    For a moment, we were left wondering: Is what they're saying true or false?

    In any case, netizens can't wait for the public beta. Some have even devised surreal challenges for Sora – such as "a smartphone livestreaming its descent into a black hole."

    With these videos circulating online, a pressing question grows among netizens:

    What data was the Sora model actually trained on?

    Many observers can't shake the feeling that Sora's generated visuals seem oddly familiar—almost as if they were directly crafted using Unreal Engine. Subsequently, a growing consensus emerged that Sora's training dataset was probably generated using UE.

    In fact, when Sora first appeared, many experts predicted that it was powered by a game engine.

    PyTorch co-founder Soumith Chintala holds this view.

    Data scientist and machine learning engineer Ralph Brooks also provided his analysis: To create a large number of high-definition videos from different angles, simulation is necessary. There are many clues that lead me to believe that UE5 is, to some extent, used to create training data.

    For example, in the widely known cherry blossom walk video, the way the character moves is very similar to that in UE5. In reality, people do not walk at a constant speed.

    NVIDIA senior scientist Jim Fan also stated, 'I wouldn't be surprised if Sora was trained on a large amount of UE5 synthetic data!'

Of course, Sora's pipeline may not call UE5 directly, but it is highly likely that text-video pairs generated with UE5 were added to the training dataset as synthetic data. Some had predicted that Sora's inference pipeline would use a CLIP base, text-to-3D objects, and game-engine simulation.

    Indeed, the "exhaustion of human training data" will not hinder the development of LLMs—synthetic data is the future of AI!

    Netizens are already itching to try these increasingly impressive demos.

    The good news is: although Sora's features are not yet officially available, you can get a small taste of it in the technical report! Report address: https://openai.com/research/video-generation-models-as-world-simulators

    By selecting different keywords such as characters, clothing, weather conditions, and geographical locations, you can experience the various videos generated by Sora.
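The report's demo is essentially combinatorial prompt substitution. A minimal sketch of that idea is below; the keyword lists and function name are illustrative assumptions, not the report's actual options.

```python
# Hypothetical sketch of the keyword-substitution demo in the report.
# The keyword lists below are illustrative, not the report's actual options.
from itertools import product

characters = ["an astronaut", "a samurai"]
clothing = ["a knitted sweater", "a leather jacket"]
weather = ["light rain", "golden-hour sunshine"]
locations = ["Tokyo", "Santorini"]

def build_prompts():
    """Combine each keyword choice into one text-to-video prompt."""
    return [
        f"{who} wearing {wear}, strolling through {where} in {sky}"
        for who, wear, sky, where in product(characters, clothing, weather, locations)
    ]

prompts = build_prompts()
print(len(prompts))   # 2 * 2 * 2 * 2 = 16 prompt variations
print(prompts[0])     # "an astronaut wearing a knitted sweater, strolling through Tokyo in light rain"
```

Each choice of character, outfit, weather, and location maps to one pre-generated video in the demo.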

This lets us see, for example, a person wearing outfit b, strolling leisurely at location c under weather condition d.

No wonder Tyler Perry, the famous Hollywood director, producer, and actor, has decided to put his $800 million Atlanta studio expansion plan on hold because of Sora.

Perry had spent the past four years planning an $800 million expansion of his studio, adding 12 soundstages to the 330-acre property.

    However, after seeing the videos created by Sora on February 16, he decided to cancel his $800 million plan because his blockbuster films might no longer require location scouting or physical set construction. "Before, we were just told that AI could do these things, but actually seeing the scenes it creates is still too shocking!"

    Director Perry stated that the impact Sora had on him was so profound that the $800 million expansion plan has been indefinitely postponed.

    Now, there's no need to travel to filming locations when making movies. Whether you want a snowy scene in Colorado, a scene on the moon, or a shot of two people in a mountain living room, there's no need to build real sets anymore—everything can be done right from your office computer!

    This means massive changes will ripple through every corner of the industry, affecting the livelihoods of everyone involved—actors, lighting technicians, transportation crews, sound engineers, and editors alike.

    Director Perry has already used AI in on-set filming and post-production for two films, saving hours that would have been spent applying aging makeup to actors. Sora's emergence has completely changed the game.

    In the past, HBO's pilot episodes could cost $15 million, $20 million, or even $35 million. Now, the cost has dropped to unimaginably low levels. Companies will undoubtedly opt for the more cost-effective solution.

    In the near future, many jobs will disappear.

    What safety measures should the film and television industry establish to protect everyone's livelihoods?
