I'm close to finishing my PhD at the UCLA Computer Vision Lab. My research focus is robust sensor fusion for localization and navigation. Here you'll find some information related to my research. Other projects live elsewhere, such as here.
Well, we didn't win, but we drove really fast (47 MPH top speed; we completed 22 miles in just under an hour). Our garage space at the NQE was next door to Stanford's. Despite the fact that we really should have won ;), I must say that Sebastian and his whole team did a great job while staying really cool and humble. Congratulations to them.

I just gave a talk on sensor fusion at the IEEE/ION Position, Location, and Navigation Symposium, which went really well (paper).

Yay! I was on NPR! :) Marketplace. Also: UCLA Magazine, Daily Bruin.
I was a member of the Golem Group for the DARPA Grand Challenge, which took place in March 2004. I was sitting in the driver's seat of the vehicle when we finally got the thing working at four or five in the morning, which was an incredible rush. When we set it loose on the course, Golem 1 managed to drive 5.2 miles autonomously before stopping, blowing away all expectations and garnering lots of media attention. My picture was in Hot Truck Magazine.
My paper for ICCV '05 was accepted as a poster. Active Appearance Models combine shape and texture information from an image by warping the image to a "shape-free" representation (remembering the warps needed to reach this canonical configuration), then applying PCA to the shape-free image data. This works very well for images of faces, for example, where the warps (the differences in shape between examples) are relatively small and well-behaved, and features are rarely missing (few people are missing an eye). This paper presents a new framework for modeling classes of images like cars, where the variations are greater and not all features are always present. Images are modeled with layers (like cardboard cutouts stacked in front of each other), and PCA with missing data is applied. You can read the paper if you would like to know more.
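The "PCA with missing data" idea can be sketched as an EM-style loop: fill in the missing entries, fit a low-rank model to the filled matrix, re-fill the missing entries from the reconstruction, and repeat. This is a generic illustration of that technique under my own setup (function name, rank, and data are mine, not the paper's algorithm):

```python
import numpy as np

def pca_missing(X, rank, iters=50):
    """EM-style PCA with missing data: alternately fill NaN entries
    from a low-rank reconstruction and refit the low-rank model.
    Illustrative sketch only, not the paper's algorithm."""
    mask = np.isnan(X)
    # Initialize missing entries with the column means.
    filled = np.where(mask, np.nanmean(X, axis=0), X)
    for _ in range(iters):
        mean = filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(filled - mean, full_matrices=False)
        # Best rank-r reconstruction of the current filled matrix.
        recon = mean + (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Keep observed entries fixed; update only the missing ones.
        filled = np.where(mask, recon, X)
    return filled

# Synthetic rank-3 data with ~10% of entries hidden.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 8))
X = A.copy()
X[rng.random(A.shape) < 0.1] = np.nan
Xhat = pca_missing(X, rank=3)
err = np.abs(Xhat - A).max()
```

Because the data really are low-rank, the iteratively imputed entries end up far closer to the true values than the initial mean fill.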
Work done with Gianfranco Doretto. We extended his Dynamic Texture Model to capture both the spatial and temporal statistics of textures that exhibit stationary behavior in both domains. More information can be found at Gianfranco's page, or you can just read the paper, which appeared (as an oral presentation) at the 2004 European Conference on Computer Vision in Prague.
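The original (temporal-only) Dynamic Texture Model that the paper builds on represents a video as a linear dynamical system, x_{t+1} = A x_t + noise, y_t = C x_t + noise, whose parameters can be learned suboptimally from an SVD of the frame matrix. A minimal sketch of that baseline, with random data standing in for real frames (our paper's extension additionally imposes stationarity in space, which is not shown here):

```python
import numpy as np

def learn_dt(Y, n):
    """Suboptimal SVD-based learning of a linear dynamical system
    y_t = C x_t, x_{t+1} = A x_t, with frames stacked as the
    columns of Y and an n-dimensional state. Illustrative sketch."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                      # observation matrix (basis of frames)
    X = s[:n, None] * Vt[:n]          # estimated state sequence, one column per frame
    # Least-squares fit of the state transition matrix.
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X[:, 0]

def synthesize(A, C, x0, T):
    """Roll the learned dynamics forward to generate T frames."""
    frames, x = [], x0
    for _ in range(T):
        frames.append(C @ x)
        x = A @ x
    return np.stack(frames, axis=1)

rng = np.random.default_rng(1)
Y = rng.standard_normal((50, 30))     # 30 frames of a 50-pixel "texture"
A, C, x0 = learn_dt(Y, n=5)
Yhat = synthesize(A, C, x0, T=30)
```

In practice one also estimates the driving noise and samples it during synthesis so the generated texture doesn't decay to a fixed point; that step is omitted here for brevity.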
When I first came to UCLA for my PhD, I explored the field of bioinformatics. I worked at the Laboratory of Neuro Imaging. While there, I created an extensible medical image visualization tool. It has kind of a cool plugin architecture, which allows editing, automatic image processing, and network-synchronized tools to be added. It's now called SHIVA, though I liked my original name (BrainBuddy) better. While at LONI, I also did some work with Nancy Sicotte, developing software to semi-automatically detect MS lesions.
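A plugin architecture like that can be sketched as a simple registry: each plugin registers itself by name, and the host application looks plugins up and calls them through a common interface. This is a generic Python illustration of the pattern (the names and interface are mine, not SHIVA's actual API):

```python
# Registry mapping plugin names to plugin classes.
PLUGINS = {}

def register(name):
    """Class decorator that adds a plugin to the registry by name."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

class Plugin:
    """Common interface every image-processing plugin implements."""
    def process(self, image):
        raise NotImplementedError

@register("invert")
class Invert(Plugin):
    def process(self, image):
        # Invert 8-bit pixel values.
        return [255 - p for p in image]

@register("threshold")
class Threshold(Plugin):
    def process(self, image):
        # Binarize around the midpoint.
        return [255 if p > 127 else 0 for p in image]

def run(name, image):
    """Host-side dispatch: look up a plugin by name and apply it."""
    return PLUGINS[name]().process(image)

result = run("invert", [0, 128, 255])   # → [255, 127, 0]
```

New capabilities (an editor, a filter, a network-sync hook) are added just by defining a decorated class; the host never needs to change.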