Meta shows amazing full-body tracking with just the Quest headset




So far, virtual reality systems have tracked only the head and hands. That could soon change: the predictive ability of AI enables realistic full-body tracking, and thus better avatar rendering, based solely on sensor data from the headset and controllers.

With Quest hand tracking, Meta has already demonstrated that AI is a core technology for VR and AR: a neural network trained on many hours of hand movements enables robust hand tracking even with the Quest headset's lower-resolution cameras, which aren't specifically optimized for tracking hands.

This is supported by the predictive ability of artificial intelligence: thanks to the prior knowledge gained during training, a few real-world inputs are enough to accurately translate the hands into the virtual world. Fully capturing the hands in real time, including VR rendering, would require more computing power.

From hand tracking to body tracking via AI prediction

In a new project, Meta researchers are transferring this principle from hand tracking to the whole body: an AI trained on previously collected tracking data simulates the most plausible, physically correct virtual body movements from real motions. QuestSim can realistically animate a full-body avatar using only sensor data from the headset and two controllers.

The Meta team trained the QuestSim AI with artificially generated sensor data. For this, the researchers simulated the movements of a headset and controllers based on eight hours of motion capture clips from 172 people. This way, they didn't have to capture headset, controller, and body movement data from scratch.
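As a rough sketch of this idea, synthetic training pairs could be derived from mocap data roughly as follows. The field names and data layout here are illustrative assumptions, not Meta's actual pipeline:

```python
# Illustrative sketch only: deriving synthetic headset/controller
# signals from full-body motion capture frames. Field names are
# assumptions, not taken from the QuestSim paper.

def synthesize_sensor_data(mocap_frames):
    """Turn full-body mocap frames into (sensor, target) training pairs.

    The head and wrist poses stand in for the Quest headset and the
    two controllers; the full-body pose is kept as the training target.
    """
    samples = []
    for frame in mocap_frames:
        samples.append({
            "headset": frame["head"],               # 6-DoF head pose
            "left_controller": frame["left_wrist"],
            "right_controller": frame["right_wrist"],
            "target_pose": frame["all_joints"],     # what the avatar should do
        })
    return samples
```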

The training data for the QuestSim AI was artificially generated in a simulation. The green dots indicate the default position of the VR headset and controllers.

The motion capture segments included 130 minutes of walking, 110 minutes of jogging, 80 minutes of casual conversation with gestures, 90 minutes of chalkboard discussion, and 70 minutes of balancing. Avatar simulation training with reinforcement learning lasted about two days.
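The exact algorithm isn't detailed here, but a minimal reinforcement learning loop along these lines could look as follows. The simulator interface, the pose-error reward, and the simple REINFORCE update are all assumptions for illustration:

```python
import torch

def training_step(policy, sim, clip, optimizer):
    """One RL episode: the policy sees only synthetic headset/controller
    signals and is rewarded for making the physics-simulated avatar
    track the reference mocap clip. All interfaces are hypothetical."""
    sim.reset(clip.initial_pose)
    log_probs, rewards = [], []
    for t in range(len(clip)):
        # Observation: headset + controller poses only, no body joints.
        obs = torch.as_tensor(clip.sensor_data[t], dtype=torch.float32)
        action_dist = policy(obs)          # distribution over joint torques
        action = action_dist.sample()
        sim.apply_torques(action)          # the avatar obeys physics
        sim.step()
        # Reward: how closely the simulated pose matches the mocap target.
        pose_error = torch.as_tensor(sim.pose_error(clip.target_pose[t]))
        rewards.append(torch.exp(-pose_error))
        log_probs.append(action_dist.log_prob(action).sum())
    # Undiscounted returns-to-go, then a plain REINFORCE policy update.
    returns = torch.cumsum(torch.stack(rewards).flip(0), dim=0).flip(0)
    loss = -(torch.stack(log_probs) * returns.detach()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```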

After training, QuestSim can recognize a person's movements from real headset and controller data. Using AI prediction, QuestSim can even simulate the movements of body parts such as the legs, for which no real-time sensor data is available but whose motions were part of the synthetic motion capture dataset and were thus learned by the AI. To keep movements plausible, the avatar is also subject to the rules of a physics simulation.
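At inference time, the same idea can be expressed as a short loop. Again, `policy` and `sim` follow the assumed interfaces from the training sketch above, not a real QuestSim API:

```python
# Hedged inference sketch: real headset/controller streams drive the
# trained policy, and the physics simulation fills in unobserved limbs.

def animate_avatar(policy, sim, sensor_stream):
    poses = []
    for obs in sensor_stream:             # headset + two controllers only
        action = policy(obs).mean         # deterministic action at test time
        sim.apply_torques(action)
        sim.step()                        # physics keeps the motion plausible
        poses.append(sim.current_pose())  # full-body pose, legs included
    return poses
```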


A headset alone is enough for a believable full-body avatar

QuestSim works for people of all sizes. However, if the avatar's proportions differ from those of the real person, this affects the avatar's movement: a tall avatar of a short person might walk hunched over, for example. The researchers still see potential for improvement here.

The Meta research team also shows that the headset sensor data alone, combined with the AI predictions, is sufficient for a believable and physically correct full-body animated avatar.

The AI motion prediction works best for movements that are included in the training data and have a high correlation between upper body and leg movement. For very complex or dynamic movements such as sprints or jumps, the avatar can get out of step or fall. Also, since the avatar is physics-based, it doesn't support teleportation.

In future work, the Meta researchers want to incorporate more detailed information about the skeleton and body shape into training to improve the diversity of the avatars' movements.

