Today on CNN Newsroom

The latest news and information from around the world. Also connect with CNN through social media. We want to hear from you.
October 27th, 2010
12:38 PM ET

THE BIG I: Super Accurate Video Games

Today's Big I is all about the future of video games. New technology is under development at Georgia Tech that will change the way characters move. Activate 3D gives you total control over your character using just your body movements.

To check out Activate 3D for yourself, click here.

This technology was originally developed at Georgia Tech's GVU Center.  To see the other research going on at the institute, click here.

Filed under: Ali Velshi • Anchors • The Big I
soundoff (4 Responses)
  1. Rita Airitam

    Ali, I was watching the spot on Activate 3D and came up with an idea. I have a decorator/design business, and having something like Activate 3D provide a figure to interact with when preparing a presentation for a first-time client would be great. I'm sure a lot of businesses would appreciate the practice, so when they're in front of the would-be client, they don't say or do anything that would jeopardize the opportunity.

    Make sense? I'm not sure I expressed my thoughts as clearly as I tried to.


    October 27, 2010 at 1:17 pm |
  2. Dorene McCune

    You asked in your segment on the 3D game if there were any practical applications besides entertainment. Yes, there are! Picture a child who is not willing to speak with a social worker, medical professional, or psychologist. Give that child a non-threatening 3D animal, character, or perhaps another child to speak with, and the chances are they will open up. How many children speak with their stuffed puppy dog but won't share with their parents? This could be a huge inroad into the health and safety of many children.

    October 27, 2010 at 7:22 pm |
  3. blnkman

    I was wondering how this is any more impressive, or even different, than the Kinect camera and games. The Kinect does everything this seemed to display and more. Not to mention the fact that there was obvious lag in the motions that "virtual Dan" was doing.

    October 29, 2010 at 1:56 pm |
  4. Dan Amerson


    Kinect does produce character position data from a depth camera. In that sense, it is similar to the SDK and camera that we’re using. What we’ve seen from Kinect thus far is a great step, and we hope to help developers take things even further. Our goal is always to give people total control of games and virtual worlds.

    The innovation of Intelligent Character Motion (ICM) is that it places the character in the virtual world and synthesizes new animations that fit the performance, the world physics, and the performer’s intention. For example, the swing on the bars isn’t just a simple physics simulation of my lower body. ICM pulls information from a database of human poses to create something that fits into the world, and it decides whether to use that information for each limb chain in the body, allowing the behavior of each hand or foot to differ depending on context.
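    To make the per-limb-chain idea concrete, here is a minimal sketch (not ICM's actual implementation; the pose database, confidence weighting, and all names here are illustrative assumptions): each limb chain can blend the performer's tracked joint angles with the nearest known-good pose from a database, so each hand or foot can be handled differently depending on how much we trust the tracking in that context.

    ```python
    import math

    # Toy pose database (hypothetical): limb chain -> known-good joint-angle poses.
    POSE_DB = {
        "left_arm":  [[0.0, 0.2], [0.5, 0.9], [1.2, 1.4]],
        "right_leg": [[0.1, 0.0], [0.7, 0.6]],
    }

    def nearest_pose(chain, tracked):
        """Return the database pose closest (Euclidean) to the tracked angles."""
        return min(POSE_DB[chain], key=lambda pose: math.dist(pose, tracked))

    def synthesize(chain, tracked, confidence):
        """Blend tracked input with the nearest database pose.

        High confidence keeps the performer's motion; low confidence
        leans on the database so the result still looks like a human pose.
        Decided per limb chain, so each hand or foot can differ.
        """
        db_pose = nearest_pose(chain, tracked)
        return [confidence * t + (1 - confidence) * d
                for t, d in zip(tracked, db_pose)]
    ```

    With full tracking confidence the performer's motion passes through unchanged; with zero confidence the limb snaps to the nearest database pose.
    
    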

    As for the lag, it’s definitely there. We noticed it during our setup the previous day. It comes from the scan converter feeding into the display. You’ll notice that there’s no lag when they show the direct video feed around 3:50. When we get a new frame of video and character position data in the system, we’re synthesizing animation and rendering within a single frame.

    Dan Amerson a.k.a. Real Dan

    November 1, 2010 at 5:56 pm |