Meta takes on Google, OpenAI with new text-to-video generation model. Here’s how Movie Gen works

Facebook owner Meta on Friday unveiled a new AI model called Movie Gen that can produce realistic-looking videos using only text prompts. Meta’s announcement comes almost eight months after OpenAI took the AI industry by storm by unveiling its text-to-video generator, Sora.

Movie Gen can also generate background music and sound effects that are synchronised with the video content. The AI tool can generate up to 16 seconds of video (at 16 frames per second) in various aspect ratios and up to 45 seconds of audio.

The company also shared data from blind tests in which Movie Gen outperformed competitors in the segment such as Runway Gen-3, OpenAI’s Sora and Kling 1.5.

Movie Gen can also create custom videos, taking an image or video of a person and generating footage that features that person in ‘rich visual detail’ while preserving human identity and movement. Meta says Movie Gen can also be used to edit existing videos, adding different transitions or effects. In a video shared in Meta’s blog post, Movie Gen added clothes to animals, changed a video’s background, altered its style and added elements that weren’t there before.

Meta Chief Product Officer Chris Cox, sharing an update on Movie Gen in a post on Threads, wrote, “We’re sharing our progress today on Movie Gen, our project to develop the state of the art for AI video generation. As of today our evals show it’s industry-leading on text-to-video quality across a number of dimensions, with 16s of continuous length, plus a leap forward for the state of the art on video-matched audio, precise editing, and character consistency / personalization.”

Cox also confirmed that Movie Gen isn’t ready for a public release yet because the model is ‘still expensive and generation time is too long’.
