Meta uses artificial intelligence and other technologies to give your virtual avatar legs and feet

For the avatars that represent individuals in virtual reality, Meta says the next step is to use artificial intelligence to make them look more realistic and interact more naturally.


Part of why interaction in virtual reality feels unconvincing is that the avatars representing users are too rigid to faithfully convey their real emotions, and most avatars in these environments have no feet or lower body at all.

At the Meta Connect 2022 event, Meta CEO Mark Zuckerberg explained that future avatar designs will incorporate more nuanced expressions. For example, through the Quest Pro's facial expression tracking, an avatar in virtual reality can fully reproduce the expressions the user makes in the real world, conveying emotion through the avatar's face.


Just as artificial intelligence and deep learning have made avatars' arm movements more natural, Meta is applying the same design logic to give avatars feet and smooth, natural lower-body movement, so that virtual characters can further express emotion through body language and generate different kinds of interaction.

This design will first be applied to the Horizon Worlds platform in 2023, and Meta plans to later expand the scope of avatar features, including "clothing" drawn from the real world.


To make interaction in virtual reality more realistic, Meta has also begun testing a system in which users photograph their own face with a smartphone from different angles and with different expressions; deep learning then processes these images so the resulting avatar can interact in virtual reality with more lifelike expressions.

The same technology can also scan real-world objects and quickly build models of them in a virtual reality environment, making it far more convenient to construct realistic virtual spaces.
