Since the earliest days of computer-generated graphics (think Tron and Max Headroom), artists have tried their hand at 3D animation. Applications such as LightWave and 3ds Max have been available for over two decades now. Possibly the most difficult aspect of 3D animation is building the 3D models to use in your animations, especially when photo-realistic results are the aim. This is about to change with KinectFusion, a combination of hardware and software that allows any artist with a reasonably powerful video card and a Kinect for Windows to generate true-to-life 3D models of objects in the immediate environment, all in real time.
Fusion actually captures far more than a single object: it builds a model of the entire surrounding scene. As you move the Kinect around, capturing the scene from many angles, the Fusion system combines the data from every viewpoint into a single model. The more angles you move the Kinect through, the more detail it collects. Currently the system is limited to room-sized environments.
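Conceptually, each viewpoint contributes 3D points in its own camera coordinates; once the camera's pose for that frame is known, a rigid transform (rotation plus translation) maps them into a shared world frame so all views accumulate in one model. A toy sketch of that step, with a made-up pose for illustration:

```python
import numpy as np

def to_world(points_cam, R, t):
    """Map Nx3 camera-space points into the shared world frame
    using the frame's estimated rigid pose (rotation R, translation t)."""
    return points_cam @ R.T + t

# Hypothetical pose: camera rotated 90 degrees about the vertical axis
# and moved 1.5 m along world x while walking around the scene.
theta = np.pi / 2
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.5, 0.0, 0.0])

p = np.array([[0.0, 0.0, 2.0]])   # a point 2 m in front of the camera
print(to_world(p, R, t))          # approx [[3.5, 0.0, 0.0]] in world space
```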
Normally the Kinect is a stationary device, watching the world move around it; it is used to being the centre of its universe. KinectFusion turns that around, using the processing power of today's graphics processing units (GPUs) to scan and analyse its environment in real time. This in itself is an incredible achievement, especially considering the team used hardware that can be found in any reasonably fast gaming PC (NVIDIA cards only at the moment). Scanning real-world scenes in real time has until now been the territory of large movie studios and animation houses like Weta Studios or Pixar. Now anyone with a fast graphics card and Microsoft's Kinect for Windows can join in on the fun.
The Fusion team has created an incredibly clever system for accurate real-time mapping of indoor scenes. The process for constructing 3D models is a slick piece of engineering in itself. The Kinect's cameras capture depth data, which isn't directly compatible with the wireframe models used by 3D animators. That's where the software side of the Fusion system comes in: using the GPU power of today's PCs, the depth data is compiled into a 3D model.
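To give a rough idea of that first step, here is a minimal sketch (not Microsoft's actual code) of how a single depth frame can be back-projected into a 3D point cloud using the standard pinhole camera model. The intrinsics (FX, FY, CX, CY) are assumed values typical of the Kinect's 640x480 depth camera, not official figures:

```python
import numpy as np

# Assumed Kinect depth-camera intrinsics (typical values, not official).
FX, FY = 585.0, 585.0      # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point for a 640x480 frame

def depth_to_point_cloud(depth_mm):
    """Back-project a 640x480 depth frame (millimetres) into camera-space
    points in metres, dropping pixels with no depth reading (0)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0          # mm -> metres
    x = (u - CX) * z / FX                             # pinhole model
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[z.reshape(-1) > 0]                  # keep valid pixels only

# Example: a synthetic flat wall 2 m from the camera.
frame = np.full((480, 640), 2000, dtype=np.uint16)
cloud = depth_to_point_cloud(frame)
print(cloud.shape)  # (307200, 3)
```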
The key to the system is a single global surface model of the surrounding environment, maintained by the Fusion software. The model serves two purposes: keeping the Kinect's position synchronized, and accumulating each new frame of depth data. The software compares incoming depth data against the evolving model to gauge the camera's relative position, then merges that data into the model with very little drift (misalignment of accumulated data).
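The published KinectFusion research represents this global model as a voxel grid storing a truncated signed distance function (TSDF): each voxel records its distance to the nearest surface, averaged over every frame that observes it, so per-frame noise cancels out instead of accumulating. A simplified sketch of that per-voxel update, with an assumed truncation distance:

```python
import numpy as np

TRUNC = 0.03  # truncation distance in metres (assumed value)

def fuse_measurement(tsdf, weight, voxel_idx, signed_dist):
    """Fold one depth measurement into the global model as a weighted
    running average of truncated signed distances, so noise from
    individual frames averages out rather than building up as drift."""
    d = np.clip(signed_dist / TRUNC, -1.0, 1.0)       # truncate and normalise
    w_old = weight[voxel_idx]
    tsdf[voxel_idx] = (tsdf[voxel_idx] * w_old + d) / (w_old + 1)
    weight[voxel_idx] = w_old + 1

# A tiny 64^3 volume; real systems keep 512^3 or larger grids in GPU memory.
tsdf = np.zeros((64, 64, 64), dtype=np.float32)
weight = np.zeros_like(tsdf)
fuse_measurement(tsdf, weight, (32, 32, 32), 0.015)   # surface 1.5 cm away
print(tsdf[32, 32, 32])  # 0.5
```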
Currently KinectFusion captures only the geometry (volume) of objects in the surrounding environment, allowing 3D wireframe models to be constructed. Surface textures (colours, patterns and so on) must be captured and applied separately, which is fairly standard practice for 3D animators anyway. The Kinect's colour camera can already capture this information; the Fusion system would simply need to be expanded to use it.
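Turning that volume into the wireframe mesh an animator can actually use is typically done with the marching cubes algorithm, which extracts the zero-crossing surface from the grid. A minimal sketch, here using scikit-image's implementation on a synthetic sphere standing in for real Fusion data:

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a fused TSDF volume: a signed distance field
# of a sphere of radius 20 voxels centred in a 64^3 grid.
grid = np.mgrid[:64, :64, :64]
dist = np.sqrt(((grid - 32) ** 2).sum(axis=0)) - 20.0

# Extract the zero-level surface as vertices and triangular faces.
verts, faces, normals, _ = measure.marching_cubes(dist, level=0.0)
print(len(verts), "vertices,", len(faces), "triangles")
```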
Kinect for Windows will be released on February 1st and is already showing incredible potential. Far more than a game controller, the Kinect marks the start of a new generation of devices bringing animation tools to the masses: accessible tools that allow new ways of interacting with and capturing our world. Waving a Kinect around madly now makes a lot of sense, since it turns the device into a cheap and easily accessible 3D scanner and model builder.
Source: Microsoft KinectFusion