And once again a new video AI sees the light of day – this time from none other than Internet giant Meta. But what does the tool have to offer when other companies like OpenAI got there earlier with Sora? And when will regular users be able to use the program? We took a look at the new video AI Movie Gen and have compiled the most important facts for you.
First of all, Meta Movie Gen is a video AI. It is similar to Sora, which OpenAI introduced a few months ago. It, too, mainly works from text input, from which moving images and sound are then created using artificial intelligence. The examples on Meta’s website announcing the video AI are of high quality, showing realistic people and videos with well-matched, real-sounding music. You can also get an impression of the new AI tool in the following video:
HD quality & various image formats
According to Meta’s announcement, the AI videos are created in high definition and can be output in various formats. Initially, however, the resolution is 768×768 pixels; the videos can then be upsampled to reach HD quality.
Videos can be edited later
The video AI didn’t spit out exactly what you expected? Individual scenes don’t reflect the right mood or don’t take place in the right setting? Or you don’t want to create a new video at all, but rather edit an existing one? No problem! Meta has made provisions for situations like this: Movie Gen lets users subsequently edit existing footage. Details of a generated video – from style to transitions – can be changed via text prompt, and your own videos can be uploaded and edited as well. In addition, photos and images can be used as the basis for a video. The main purpose of this is to ensure that the AI-generated people in the videos actually look human and show appropriate facial expressions, gestures and emotions – and thus escape the “uncanny valley” so often mentioned in connection with AI.
Movie Gen Features Raise Ethical Concerns
While these features can be practical and useful tools in the right hands, they also raise important ethical concerns. After all, someone with bad intentions could create a video from existing footage of a person and show that person in an unpleasant situation they may never have actually been in – or at least one in which they don’t want to be seen publicly. We have therefore discussed in more detail in further articles how to protect your own photos and videos from AI manipulation, how to recognize edited content and what ethical questions artificial intelligence raises.
The appropriate sound is included
Back to the essentials: the features of Movie Gen. In addition to the video, users get matching sound, also generated or edited by AI. Here, too, you can use existing music or background noise (which is where copyright issues arise), edit it or, for example, extend it. Videos already published by Meta show, for instance, nature scenes with authentic-sounding audio tailored to the animals’ movements and fitting background music.
Movie Gen’s public access is still unclear
Similar to Sora, which made big waves after its announcement and is still not accessible to regular users, the release of Meta’s Movie Gen will take quite a while. Meta product manager Chris Cox explained as much on Threads, citing the currently very expensive and lengthy video generation process. The presentation of the current status is more likely a signal to the outside world that the company is also taking part in the video AI race – and a way to gather feedback on results that previously only developers and beta testers had access to. We’re excited to see which video AI comes out on top once both Sora and Movie Gen become available to the general public.