I am excited about research that advances perception and control for mobile robotics. In particular, I am currently working on leveraging geometry for unsupervised learning, reasoning under uncertainty with Bayesian deep learning, and developing end-to-end systems that can reason from perception to control. My research has been used to power smart-city infrastructure with Vivacity, control self-driving cars with Toyota Research Institute and enable next-generation drone flight with Skydio.
Scene understanding is a fundamental task in computer vision: it requires recovering both the geometry and the semantic structure of a scene. I initially worked on a semantic segmentation algorithm called SegNet. More recently, I have been interested in learning depth, instance segmentation and semantic segmentation jointly from a unified deep learning architecture.
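One way to train such a unified architecture is to combine the per-task losses with learned uncertainty weights, so that no single task dominates training. The sketch below is illustrative only (the function name and numbers are my own, not from any released codebase): each task loss L_i is scaled by a learned log-variance s_i.

```python
import math

def multitask_loss(task_losses, log_variances):
    """Combine per-task losses L_i with learned log-variances s_i:
        total = sum_i exp(-s_i) * L_i + s_i
    Tasks the model is uncertain about are down-weighted, while the
    + s_i term penalises inflating every uncertainty to zero out losses."""
    return sum(math.exp(-s) * L + s
               for L, s in zip(task_losses, log_variances))

# e.g. semantic, instance and depth losses with neutral initial weights
total = multitask_loss([0.9, 1.4, 0.6], [0.0, 0.0, 0.0])
```

In practice the s_i values would be trainable parameters optimised jointly with the network weights; with all s_i = 0 the combination reduces to a plain sum of the losses.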
Bayesian Deep Learning
Deep learning achieves state-of-the-art results across many tasks; however, these models cannot tell us what they do not know. Bayesian deep learning (BDL) is an exciting framework for estimating a model's uncertainty. This paper is an introduction to Bayesian deep learning for computer vision. I have also found BDL useful for localisation and scene understanding.
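A common practical route to these uncertainty estimates is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and treat the spread of the outputs as a rough measure of epistemic uncertainty. The toy model below is purely illustrative (the "network" is a hand-written weighted sum, not a real architecture):

```python
import random
import statistics

def noisy_model(x, p_drop=0.5):
    """Toy 'network': a weighted sum whose units are randomly dropped,
    mimicking a stochastic forward pass with dropout left on at test time."""
    weights = [0.2, 0.5, 0.3]
    kept = [w * x for w in weights if random.random() > p_drop]
    # Rescale so the expected output matches the full sum (inverted dropout).
    return sum(kept) / (1.0 - p_drop)

def mc_dropout_predict(x, passes=500):
    """Monte Carlo dropout: average several stochastic passes and use
    their standard deviation as a rough epistemic uncertainty estimate."""
    samples = [noisy_model(x) for _ in range(passes)]
    return statistics.fmean(samples), statistics.pstdev(samples)

mean, std = mc_dropout_predict(2.0)
```

The mean approximates the deterministic prediction, while a large standard deviation flags inputs the model is unsure about; in a real network the same idea is applied by sampling the full forward pass with dropout enabled.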
PoseNet is an algorithm for relocalisation: estimating the position and orientation of a camera from an image within a previously explored area. It works across large outdoor urban environments and inside buildings, and takes only 5 ms per colour image. Here is a demo.
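At its core, this kind of relocalisation regresses a 6-DoF pose (a 3D position plus an orientation quaternion) directly from the image, trained with a loss that balances translational and rotational error. A minimal sketch of such a loss is below; the function name and the beta value are my own illustrative choices, not taken from the published system:

```python
import math

def pose_loss(t_pred, q_pred, t_true, q_true, beta=500.0):
    """PoseNet-style regression loss: Euclidean error on the predicted
    position plus a weighted error on the normalised orientation quaternion.
    beta trades off metres against quaternion units and is tuned per scene."""
    t_err = math.dist(t_pred, t_true)
    norm = math.sqrt(sum(c * c for c in q_pred))
    q_unit = [c / norm for c in q_pred]          # normalise to a unit quaternion
    q_err = math.dist(q_unit, q_true)
    return t_err + beta * q_err
```

Because position error is in metres and quaternion error is dimensionless, the balance weight matters; later work replaces the hand-tuned beta with learned uncertainty weights, mirroring the multi-task idea above.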
More details of other projects, including an autonomous drone and augmented reality, can be found here.