OpenAI Unveils Sora, an AI-Powered Text-to-Video Generator Capable of Creating One-Minute-Long Clips

OpenAI, the company behind ChatGPT, introduced its first artificial intelligence (AI)-powered text-to-video generation model, Sora, on Thursday. The company claims it can generate videos up to 60 seconds long, longer than any of its competitors in the segment, including Google's Lumiere, which was unveiled last month. Sora is currently available to red teamers (security experts who rigorously test systems to uncover flaws and potential misuse) and to a small group of content creators. The AI firm also plans to embed Coalition for Content Provenance and Authenticity (C2PA) metadata in generated videos once the model is deployed in an OpenAI product.

Announcing the AI video generator in a post on X (formerly known as Twitter), the company said, "Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Notably, the claimed video length is more than ten times what its rivals offer: Google's Lumiere can generate 5-second-long videos, whereas Runway AI and Pika 1.0 can generate 4-second and 3-second-long videos, respectively.

OpenAI's X account and CEO Sam Altman shared multiple videos generated by Sora, along with the prompts used to create them. The resulting videos appear highly detailed, with seamless motion, an area where other video generators on the market have struggled. According to the company, Sora can generate complex scenes with multiple characters, multiple camera angles, specific types of motion, and accurate details of the subject and background. This is possible because the model draws not only on the prompt but also on an understanding of "how those things exist in the physical world."

Sora is essentially a diffusion model that uses a transformer architecture similar to GPT models. The data it consumes and generates is represented as units called patches, which are akin to tokens in text-generating models. According to the company, patches are small bundles of visual data drawn from videos and images. Representing visual data this way enabled OpenAI to train the video generation model on different durations, resolutions, and aspect ratios. In addition to text-to-video generation, Sora can also take a still image and generate a video from it.
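
For readers curious what these "patches" might look like in practice, here is a minimal sketch in Python (NumPy) of carving a video tensor into token-like spacetime patches. The patch sizes, helper name, and toy clip shape are illustrative assumptions; OpenAI has not disclosed Sora's actual parameters.

```python
import numpy as np

# Purely illustrative sketch of "spacetime patches". The patch sizes and
# helper name are assumptions for the sake of the example, not details
# OpenAI has published.
def video_to_patches(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Split a (frames, height, width, channels) clip into flattened
    spacetime patches, one per row, much as a text model splits a
    string into tokens."""
    t, h, w, c = video.shape
    assert t % pt == 0 and h % ph == 0 and w % pw == 0, "dims must divide evenly"
    # Carve the clip into a grid of (pt x ph x pw) blocks, then flatten
    # each block into a single vector.
    return (
        video.reshape(t // pt, pt, h // ph, ph, w // pw, pw, c)
             .transpose(0, 2, 4, 1, 3, 5, 6)
             .reshape(-1, pt * ph * pw * c)
    )

# A 16-frame 64x64 RGB clip yields (16/4) * (64/16) * (64/16) = 64 patches.
clip = np.random.rand(16, 64, 64, 3).astype(np.float32)
print(video_to_patches(clip).shape)  # (64, 3072)
```

The appeal of such a representation is that clips of any duration, resolution, or aspect ratio reduce to the same kind of unit, just as text of any length reduces to a sequence of tokens.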

However, the model is not without flaws. OpenAI stated on its website, "The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterwards, the cookie may not have a bite mark."

To ensure the AI tool is not used to create deepfakes or other harmful content, the company is building tools to help detect misleading content. It also plans to embed C2PA metadata in the generated videos, a practice it recently adopted for its DALL-E 3 model. It is also working with red teamers, particularly domain experts in misinformation, hateful content, and bias, to improve the model.
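
As a rough illustration of what C2PA provenance metadata is meant to enable, the sketch below mocks up the relevant fields in Python. The dataclass and helper are hypothetical stand-ins for illustration only, not the real C2PA SDK or anything OpenAI has shipped; a genuine C2PA manifest is a cryptographically signed record bound to the media file.

```python
from dataclasses import dataclass, field

# Hypothetical mock-up of the information a C2PA manifest carries.
@dataclass
class C2PAManifest:
    generator: str                 # tool that produced the asset, e.g. an AI model
    actions: list[str] = field(default_factory=list)  # e.g. ["c2pa.created"]
    signature_valid: bool = False  # does the signature verify against the file?

def looks_ai_generated(manifest: C2PAManifest, known_ai_tools: set[str]) -> bool:
    """Trust the provenance only if the signature verifies, then check
    whether the signed generator is a known AI tool."""
    return manifest.signature_valid and manifest.generator in known_ai_tools

# Example: a clip whose signed manifest names an AI generator.
m = C2PAManifest(generator="example-ai-video-tool",
                 actions=["c2pa.created"],
                 signature_valid=True)
print(looks_ai_generated(m, {"example-ai-video-tool"}))  # True
```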

At present, Sora is available only to red teamers and to a small number of visual artists, designers, and filmmakers, who are providing feedback on the product.
