Our work on high-speed cell sorting was one of three research projects discussed in a “Looking at Machine Vision” article recently published in the IEEE Signal Processing Magazine. The project is building a hardware-accelerated system that classifies cells as they pass through a microfluidic device. The image processing must handle 10,000+ frames/second while accurately identifying each cell's type. We originally developed the algorithms based upon work in Prof. Dino DiCarlo’s lab at UCLA. We are now working with the startup Cytovale to develop a commercial system based upon our initial technology.
Janarbek was selected to attend and present a poster at Amazon’s Graduate Research Symposium. The symposium brings together graduate students from around the country at Amazon headquarters in Seattle, WA. Janarbek will present his research on easing the design process for building hardware-accelerated applications.
The Kinect is a powerful tool for creating 3D models, and a low-power, real-time version would have a substantial impact on robotics and virtual reality. Our recent work in this space used the new OpenCL API from Altera to implement portions of the algorithm on an FPGA. This is an important first step towards a mobile version of the Kinect Fusion 3D mapping algorithm. This research was accepted to the International Conference on Field-Programmable Technology (FPT), held in Shanghai, China in December. The work was authored by Quentin Gautier, Alexandria Shearer, Janarbek Matai, Dustin Richmond, Pingfan Meng, and Ryan Kastner.
The picture shows the 3D reconstruction of our messy shelves in the student offices at UCSD. The left image shows the “normal” camera input, the center image is the depth map, and the right image is the 3D model. This demo runs on an FPGA.
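The step from depth map to 3D model begins by back-projecting each depth pixel through the camera intrinsics into a 3D point. Below is a minimal pure-Python sketch of that back-projection, assuming hypothetical Kinect-like intrinsics (the FX/FY/CX/CY values are illustrative, not from our implementation, which runs in OpenCL on the FPGA):

```python
# Hypothetical pinhole-camera intrinsics (illustrative values only).
FX, FY = 525.0, 525.0   # focal lengths, in pixels
CX, CY = 319.5, 239.5   # principal point, in pixels

def depth_to_points(depth):
    """Back-project a depth map (list of rows, depths in meters)
    into a list of (x, y, z) points in camera space."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # skip invalid depth readings
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            points.append((x, y, z))
    return points

# Synthetic 640x480 depth map: a flat wall 2 meters from the camera.
depth = [[2.0] * 640 for _ in range(480)]
cloud = depth_to_points(depth)
print(len(cloud))  # 307200 points, one per valid pixel
```

Kinect Fusion then fuses these per-frame point clouds into a single volumetric model as the camera moves, which is the computation our FPGA implementation accelerates.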