To check out Activate 3D for yourself, click here.
This technology was originally developed at Georgia Tech's GVU Center. To see the other research going on at the institute, click here.
Ali, I was watching the spot on Activate 3D and came up with an idea. I have a decorating/design business, and having something like Activate 3D provide a figure to interact with when preparing a presentation for a first-time client would be great. I'm sure a lot of businesses would appreciate the practice, so when they're in front of the would-be client, they don't say or do anything that would jeopardize the opportunity.
Make sense? I don't know if I've expressed my thoughts as clearly as I've tried to.
You asked in your segment on the 3D game if there were any practical applications besides entertainment. Yes, there are! Picture a child who is unwilling to speak with a social worker, medical professional, or psychologist. Give that child a non-threatening 3D animal, character, or perhaps another child to speak with, and the chances are they will open up. How many children speak with their stuffed puppy dog but won't share with their parents? This could be a huge inroad into the health and safety of many children.
I was wondering how this is any more impressive, or even different, than the Kinect camera and games? The Kinect does everything this seemed to display and more. Not to mention the fact that there was obvious lag in the motions that "virtual Dan" was doing.
Kinect does produce character position data from a depth camera. In that sense, it is similar to the SDK and camera that we’re using. What we’ve seen from Kinect thus far is a great step, and we hope to help developers take things even farther. Our goal is always to give people total control of games and virtual worlds.
The innovation of Intelligent Character Motion (ICM) is that it places the character in the virtual world and synthesizes new animations that fit the performance, the world's physics, and the performer's intention. For example, the swing on the bars isn't just a simple physics simulation of my lower body. ICM pulls information from a database of human poses to create motion that fits into the world, and it decides whether to use that information separately for each limb chain in the body, allowing the behavior of each hand or foot to differ depending on context.
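To make the per-limb-chain idea concrete, here is a minimal sketch of how such a decision might look in code. This is a hypothetical illustration, not the actual ICM implementation: the pose database, chain names, and contact flags are all invented for the example. The idea is that a limb chain in contact with the world (say, a hand gripping a bar) snaps to the nearest database pose so it fits the environment, while free limbs keep the performer's tracked motion.

```python
import math

# Toy "pose database": each entry maps a limb chain to joint angles.
# (Invented data; a real system would store full-body mocap poses.)
POSE_DB = [
    {"left_arm": [0.1, 0.4], "right_arm": [0.1, 0.4]},   # hanging pose
    {"left_arm": [1.2, 0.9], "right_arm": [1.2, 0.9]},   # swinging pose
]

def distance(a, b):
    """Euclidean distance between two joint-angle vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_db_pose(chain, tracked_angles):
    """Find the database pose whose angles for this chain best match."""
    best = min(POSE_DB, key=lambda pose: distance(pose[chain], tracked_angles))
    return best[chain]

def synthesize(tracked_pose, in_contact):
    """Decide per limb chain: contact limbs use the database so they
    fit the world; free limbs follow the performer directly."""
    result = {}
    for chain, angles in tracked_pose.items():
        if in_contact.get(chain, False):
            result[chain] = nearest_db_pose(chain, angles)
        else:
            result[chain] = angles
    return result

# One frame of tracked input: left hand is gripping the bar.
frame = {"left_arm": [1.1, 1.0], "right_arm": [0.2, 0.3]}
contact = {"left_arm": True, "right_arm": False}
print(synthesize(frame, contact))
# left_arm snaps to the closest database pose; right_arm stays tracked
```

A real system would of course blend rather than snap, and would match whole-body context instead of one chain at a time, but the per-chain branching is the point: each hand or foot can independently follow either the performance or the database.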
As for the lag, it’s definitely there. We noticed it during our setup the previous day. It’s from the scan converter feeding into the display. You’ll notice that there’s no lag when they show direct video feed around 3:50. When we get a new frame of video and character position data in the system, we’re synthesizing animation and rendering in a single frame.
Dan Amerson a.k.a. Real Dan