Tuesday, May 17, 2011

Move Over Microsoft Avatars, Time For Surrogates?

Conceivably Tech recently published an article that exposes a patent filed by Microsoft on January 28, 2011. The new software patent appears to bring object recognition and a real-time, body-scanned replica of the user to gaming, rather than the family-friendly Avatar. What's not known is whether the patent is intended for the current version of Kinect or a future one. The abstract reads:
“A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.”
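To make the flow in that abstract concrete, here is a minimal Python sketch of the same idea: a toy depth frame is segmented with a flood fill, each resulting target is checked against a crude "human" pattern, and a couple of rough skeletal points are derived from the match. Everything here (the grid, the thresholds, the function names) is invented for illustration and is not Microsoft's implementation.

```python
from collections import deque

# Toy "depth image": 0 = background, non-zero = depth (arbitrary units).
# A rough human silhouette sits in the middle; a small non-human blob is at the right edge.
DEPTH = [
    [0, 0, 0, 5, 5, 0, 0, 0],
    [0, 0, 5, 5, 5, 5, 0, 0],
    [0, 0, 5, 5, 5, 5, 0, 0],
    [0, 0, 0, 5, 5, 0, 0, 9],
    [0, 0, 0, 5, 5, 0, 0, 9],
    [0, 0, 0, 5, 5, 0, 0, 9],
    [0, 0, 0, 5, 5, 0, 0, 9],
]

def flood_fill(depth, start, visited):
    """Collect every connected non-background pixel reachable from `start`."""
    rows, cols = len(depth), len(depth[0])
    target, queue = [], deque([start])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols) or (r, c) in visited:
            continue
        if depth[r][c] == 0:  # background pixel, not part of any target
            continue
        visited.add((r, c))
        target.append((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return target

def looks_human(target):
    """Crude stand-in for comparing a flood-filled target to a body pattern:
    a human-like blob should be reasonably large and taller than it is wide."""
    height = max(r for r, _ in target) - min(r for r, _ in target) + 1
    width = max(c for _, c in target) - min(c for _, c in target) + 1
    return len(target) >= 10 and height > width

def skeletal_model(target):
    """Derive a couple of rough 'joints' from a matching target."""
    top_row = min(r for r, _ in target)
    head = next(p for p in target if p[0] == top_row)
    torso = (sum(r for r, _ in target) / len(target),
             sum(c for _, c in target) / len(target))
    return {"head": head, "torso": torso}

visited = set()
for r, row in enumerate(DEPTH):
    for c, value in enumerate(row):
        if value and (r, c) not in visited:
            target = flood_fill(DEPTH, (r, c), visited)
            if looks_human(target):
                print("human target:", skeletal_model(target))
            else:
                print("non-human target of", len(target), "pixels")
```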
Article from Conceivably Tech and more patent images after the break


From Conceivably Tech:

"The background description of the patent filing refers to the overall Microsoft claim that natural movements are easier to apply by users rather than having to learn the features of a game controller. However, there is one significant difference. This particular patent does not describe a user building an avatar to be represented on the screen. It describes a technology that actually scans a gamer’s body to automatically create an avatar – which we would then actually call a surrogate, if we take a cue from the 2009 movie Surrogates.



It is especially noteworthy that the patent discusses a virtual body that matches the actual body on certain criteria. Claim 20 of the patent states: “The [user model rendering system] wherein the first processor determines the human target associated with the user in the depth image by flood filling each target in the scene and comparing each flood filled target with a pattern of a body model of a human to determine whether each flood filled target matches the pattern of the body model of the human.”
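Claim 20 does not spell out how a flood-filled target gets compared to the "pattern of a body model," so the following sketch shows just one plausible toy interpretation: normalize the target onto a fixed-size grid and score its overlap with a stored silhouette pattern. The grid size, the threshold and the intersection-over-union scoring are all assumptions made for the example, not anything stated in the patent.

```python
def to_grid(pixels, size=8):
    """Rasterize a set of (row, col) pixels onto a size x size grid so that
    targets of different sizes can be compared against one stored pattern."""
    r0 = min(r for r, _ in pixels)
    c0 = min(c for _, c in pixels)
    rspan = max(max(r for r, _ in pixels) - r0, 1)
    cspan = max(max(c for _, c in pixels) - c0, 1)
    grid = [[0] * size for _ in range(size)]
    for r, c in pixels:
        gr = min((r - r0) * (size - 1) // rspan, size - 1)
        gc = min((c - c0) * (size - 1) // cspan, size - 1)
        grid[gr][gc] = 1
    return grid

def matches_body_pattern(target_pixels, pattern_grid, threshold=0.6):
    """Score the normalized target against the pattern by intersection-over-union;
    the threshold is a placeholder, not a value from the patent."""
    grid = to_grid(target_pixels, size=len(pattern_grid))
    inter = sum(g and p for gr, pr in zip(grid, pattern_grid) for g, p in zip(gr, pr))
    union = sum(g or p for gr, pr in zip(grid, pattern_grid) for g, p in zip(gr, pr))
    return union > 0 and inter / union >= threshold

# Example: a 2x2 square fully overlaps a pattern that is all ones.
print(matches_body_pattern([(0, 0), (0, 1), (1, 0), (1, 1)],
                           [[1, 1], [1, 1]]))  # True
```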


The system is also capable of recognizing objects the actual user may be using during the game process: “In such embodiments, the user of an electronic game may be holding the object such that the motions of the player and the object may be used to adjust and/or control parameters of the game. For example, the motion of a player holding a racket may be tracked and utilized for controlling an on-screen racket in an electronic sports game. In another example embodiment, the motion of a player holding an object may be tracked and utilized for controlling an on-screen weapon in an electronic combat game.”
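As a rough idea of how a tracked, hand-held object could drive an on-screen racket, here is a small sketch in the same hypothetical vein: it takes the hand joint and the object's tip position per frame, derives the object's orientation and swing speed, and hands those to the game. The coordinates, the 30 fps frame time and the function names are made up; the patent text does not go into this level of detail.

```python
import math

def object_pose(hand, object_tip):
    """Direction (angle) and length of a held object, measured from the hand
    joint to the object's far end, both given as (x, y) points in camera space."""
    dx, dy = object_tip[0] - hand[0], object_tip[1] - hand[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def update_onscreen_racket(prev_tip, hand, object_tip, dt):
    """Map the tracked object onto an on-screen racket: same orientation,
    with swing speed taken from the tip's frame-to-frame motion."""
    angle, length = object_pose(hand, object_tip)
    speed = math.hypot(object_tip[0] - prev_tip[0],
                       object_tip[1] - prev_tip[1]) / dt
    return {"angle_deg": math.degrees(angle), "length": length, "swing_speed": speed}

# One frame of a hypothetical tracked swing at 30 frames per second.
print(update_onscreen_racket(prev_tip=(0.42, 0.90), hand=(0.30, 0.55),
                             object_tip=(0.50, 0.95), dt=1 / 30))
```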


Recognizing a device is not that revolutionary by itself, but imagine what it could do for Kinect: The camera could finally recognize the exact location and direction of a device, similar to what we can do with Sony’s PS3 Move controllers, and the result would be much greater control of sports games, for example. Also, imagine all the branding opportunities if you could hold a very specific tennis racket instead of a generic model. In the future, your teenagers may want not just a gun to play a video game. They may want a very specific model. Imagine all the additional sales video game developers could achieve.

Much of the player rendering appears to be about flood-filling virtual bodies, but also about body shapes (which most of us would still want to modify in game environments anyway): “In another embodiment, to determine the location of the shoulders, the bitmask may be parsed downward a certain distance from the head. For example, the top of the bitmask that may be associated with the top of the head may have an X value associated therewith. A stored value associated with the typical distance from the top of the head to the top of the shoulders of a human body may then be added to the X value of the top of the head to determine the X value of the shoulders.”
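The shoulder step in that passage is simple arithmetic: start at the top of the head in the bitmask and add a stored typical head-to-shoulder distance. The quoted text calls the coordinate along the scan direction the "X value"; the sketch below just uses a row index, and the stored distance (expressed here as a fraction of the silhouette's height) is a placeholder, since the patent only refers to "a stored value."

```python
# Typical head-to-shoulder distance as a fraction of the whole silhouette's
# height (an invented placeholder value, chosen only for this toy example).
HEAD_TO_SHOULDER_FRACTION = 0.25

def shoulder_row(bitmask):
    """Walk the bitmask downward from the top of the head and add a stored
    offset, as in the quoted passage (which calls this coordinate the "X value")."""
    occupied_rows = [r for r, row in enumerate(bitmask) if any(row)]
    top_of_head = occupied_rows[0]
    height = occupied_rows[-1] - occupied_rows[0] + 1
    return top_of_head + round(HEAD_TO_SHOULDER_FRACTION * height)

# Toy 1-bit silhouette: head on rows 0-1, shoulders widening at row 2.
BITMASK = [
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
]
print("shoulder row:", shoulder_row(BITMASK))  # 0 + round(0.25 * 8) = 2
```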

The result? We are clearly on a path to project ourselves into virtual environments, beyond avatars, beyond the actual avatars we create today as Miis, predefined players or Kinect avatars that allow us to resemble the look we desire in a cartoonish way. A next-generation Kinect and much more powerful sensors, cameras, processors and graphics engines could advance the quest for ultimate reality in video games, a quest we have followed with artificial and imaginary characters over the past two decades. In the not too distant future, you may be able to see yourself on the video screen, exploring and acting in a virtual world. You could call yourself a surrogate then, living in the Matrix. Scary? Possibly. But exciting nevertheless."
