The Kinect is a powerful tool for creating 3D models, and a low-power, real-time version would have substantial impact on robotics and virtual reality. Our recent work in this space used the new OpenCL API from Altera to implement portions of the algorithm on an FPGA. This is an important first step towards a mobile version of the Kinect Fusion 3D mapping algorithm. This research was accepted to the International
Conference on Field-Programmable Technology (FPT) held in Shanghai, China in December. The work was authored by Quentin Gautier, Alexandria Shearer, Janarbek Matai, Dustin Richmond, Pingfan Meng, and Ryan Kastner.
The picture shows the 3D reconstruction of our messy shelves in the student offices at UCSD. The left image is the "normal" camera input, the center image is the depth map, and the right image is the 3D model. This demo runs on an FPGA.
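The 3D model on the right is built by fusing many depth maps over time. The core per-pixel step in that pipeline is back-projecting each depth reading into a 3D point using the camera intrinsics. Here is a minimal sketch of that step in plain Python, with hypothetical pinhole-camera parameters (`fx`, `fy`, `cx`, `cy`); this is an illustration, not the FPGA kernel from the paper:

```python
def depth_to_points(depth, width, height, fx, fy, cx, cy):
    """Back-project a row-major depth map (in meters) into a 3D point cloud.

    fx, fy, cx, cy are hypothetical pinhole-camera intrinsics, not the
    actual Kinect calibration used in the paper.
    """
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0.0:  # skip invalid depth readings
                continue
            # Standard pinhole back-projection
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Each Kinect frame yields a vertex map like this, which Kinect Fusion then aligns to the growing model (via ICP) and integrates into a volumetric representation. Because the same independent arithmetic runs for every pixel, this kind of step maps naturally onto FPGA pipelines.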