
Runway releases “Act-One”, markerless facial motion capture within the “Gen-3 Alpha” AI video generation! Available with a paid license

American tech company Runway AI released “Act-One” on Tuesday, October 22, a markerless facial motion capture feature within its AI video generation “Gen-3 Alpha” that transfers the performance in a facial motion reference video onto a character image. Target users are those on paid plans (Standard or higher, with monthly or annual billing).

Act-One allows you to faithfully capture the essence of an actor’s performance and translate it to your generation. Where traditional pipelines for facial animation involve complex, multi-step workflows, Act-One works with a single driving video that can be shot on something as simple as a cell phone.

Without the need for motion capture or character rigging, Act-One is able to translate the performance from a single video input across many different character designs and in many different styles.


One of the model’s strengths is producing cinematic and realistic results across a wide range of camera angles and focal lengths, allowing you to generate emotional performances with a depth of character that was previously impossible and opening new avenues for creative expression.

With Act-One, eye lines, micro expressions, pace and delivery are all faithfully represented in the final product created.


To use it, simply upload the face performance video you shot, select a reference character image from the presets (or upload your own), and run “Generate”; a scripted sketch of this workflow follows the steps below.


  • 1: Upload a face performance video

  • 2: Specify the reference character image

  • 3: Generated video (right)
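
If you wanted to wrap these three steps in a script, it could look roughly like the sketch below. This is purely illustrative: Act-One is offered through the Gen-3 Alpha web UI, and the host, endpoint, and field names here (api.example.invalid, /character_performance, driving_video, character_image) are hypothetical placeholders, not Runway’s actual API.

```python
import requests

# Hypothetical sketch of the three-step Act-One workflow described above.
# Host, endpoint, and field names are placeholders, not Runway's real API.
API_HOST = "https://api.example.invalid/v1"
API_KEY = "YOUR_API_KEY"

def act_one_generate(driving_video_path: str, character_image_path: str) -> dict:
    """Step 1: upload a face performance (driving) video.
    Step 2: specify a reference character image.
    Step 3: request generation and return the job descriptor."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    with open(driving_video_path, "rb") as video, open(character_image_path, "rb") as image:
        files = {
            "driving_video": video,     # front-facing performance clip (e.g. phone footage)
            "character_image": image,   # preset or custom reference character
        }
        response = requests.post(f"{API_HOST}/character_performance",
                                 headers=headers, files=files)
    response.raise_for_status()
    return response.json()              # e.g. {"id": "...", "status": "queued"}

if __name__ == "__main__":
    job = act_one_generate("performance.mp4", "character.png")
    print(job)
```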

Act-One consumes 10 credits per second in Gen-3 Alpha. Generated videos are capped at 30 seconds, 1,280 x 768, 24 fps. Certain conditions are required or recommended for use, such as shooting the performance video from the front and not using profile or full-body shots to show the character’s performance. These conditions and guidelines are listed below.
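
As a quick budgeting aid, those figures pin down the cost of any clip: a maximum-length 30-second generation tops out at 300 credits. The minimal sketch below uses only the numbers quoted in this article; how Runway rounds fractional seconds is not stated here, so the ceiling rounding is an assumption.

```python
import math

CREDITS_PER_SECOND = 10   # Act-One rate in Gen-3 Alpha (per this article)
MAX_DURATION_S = 30       # maximum clip length in seconds (per this article)

def act_one_credit_cost(duration_s: float) -> int:
    """Credits consumed by one Act-One clip of the given length."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    if duration_s > MAX_DURATION_S:
        raise ValueError(f"Act-One clips are limited to {MAX_DURATION_S} seconds")
    # Rounding up fractional seconds is an assumption, not documented behaviour.
    return math.ceil(duration_s) * CREDITS_PER_SECOND

print(act_one_credit_cost(30))   # 300 credits for a maximum-length clip
print(act_one_credit_cost(12))   # 120 credits
```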

■ Creating with Act-One on Gen-3 Alpha
https://help.runwayml.com/hc/en-us/articles/33927968552339-Creating-with-Act-One-on-Gen-3-Alpha

The Standard plan costs $144 per year (about 22,000 yen) when billed annually. The demo footage carries the credit “Created by Gen-3 Alpha and Act-One.”

Related CGWORLD articles

● Video to Video is now available in Runway AI’s video generation “Gen-3 Alpha”! Supports consistency control between input and output video for up to 10 seconds

Runway AI has released the Video to Video functionality of its AI video generation “Gen-3 Alpha” to the public. Target users are those on paid plans (Standard or higher, with monthly or annual billing).
https://cgworld.jp/flashnews/202409-Gen3AlphaVideo.html

● Luma AI releases video generation AI “Dream Machine” 1.6! Supports camera movement control; v1.5 also added custom text rendering

Luma AI has released the latest version 1.6 of its AI video generation “Dream Machine” to the public. Camera movement can now be directed from text prompts in both Text-to-Video and Image-to-Video.
https://cgworld.jp/flashnews/202409-DreamMachine.html

● Adobe will release the beta of its AI video generation model “Firefly Video Model” later this year! Plans to bring the feature to Premiere Pro, with a design that is safe for commercial use

Adobe announced on its blog that it will release a beta version of its AI video generation model “Firefly Video Model” in the second half of 2024, and has published samples generated by the model.
https://cgworld.jp/flashnews/202409-FireflyVideo.html
