My research interests center on the design of computer vision systems that are practical for field robotic applications. Computer vision, although a very promising perceptual tool, has been under-utilized in field robotics because of the difficulty of creating reliable systems; the harsh nature of field environments is especially challenging for vision. Yet there are field robotic applications where computer vision is the only feasible perceptual mechanism, thanks to its small size, low weight, long working range, and the rich information it provides. It is therefore essential to keep researching ways to improve the performance of computer vision systems under challenging conditions in order to further the development of field robotics.

Below are links, videos, and summaries of some of my research.

Autonomous Vineyard and Orchard Yield Estimation
Project Website
Vineyard and orchard managers want to know the state of their vines -- both the size of the vine canopy and the predicted harvest yield. Such information can be used to manage vegetative and reproductive growth and to improve the efficiency of vineyard operations. Traditional industry practices for gathering crop and canopy estimates are labor-intensive, expensive, destructive, imprecise, spatially coarse, and do not scale to large vineyards. This research project aims to design and demonstrate new sensor technologies for autonomously gathering crop and canopy size estimates from a vineyard -- expediently, precisely, accurately, and at high resolution -- with the goal of improving vineyard efficiency by enabling producers to measure and manage the principal components of grapevine production on an individual-vine basis.
Tracking for Helicopter Landing
Project Website
This research aims to detect landing sites for helicopters in real time from onboard cameras.
River Mapping
Project Website
This project is developing technology to map riverine environments from a low-flying rotorcraft. Challenges include the varying appearance of the river and surrounding canopy, intermittent GPS, and a highly constrained payload. We are developing self-supervised algorithms that segment images from onboard cameras to determine the course of the river ahead, as well as devices and methods capable of mapping the shoreline.
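To give a rough flavor of what a self-supervised segmentation step can look like, the Python sketch below trains a simple color model on a seed patch that is assumed to be water and uses it to label the rest of the frame. The seed location, the LAB color space, the single-Gaussian model, and the Mahalanobis threshold are all illustrative assumptions, not the project's actual algorithm.

import cv2
import numpy as np

def segment_river(bgr_frame, seed_frac=0.15, threshold=3.0):
    # Work in a color space where water pixels tend to cluster (assumption).
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w, _ = lab.shape

    # Seed region: a bottom-center strip assumed to be river surface directly
    # ahead of the vehicle -- this is the self-supervision signal.
    seed = lab[int(h * (1 - seed_frac)):, int(w * 0.3):int(w * 0.7)]
    pixels = seed.reshape(-1, 3)

    # Fit a single Gaussian color model to the seed pixels.
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-3 * np.eye(3)
    cov_inv = np.linalg.inv(cov)

    # Mahalanobis distance of every pixel to the water color model.
    diff = lab.reshape(-1, 3) - mean
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
    mask = (dist < threshold).reshape(h, w).astype(np.uint8)

    # Keep only the connected component containing the seed region.
    num, labels = cv2.connectedComponents(mask)
    seed_label = labels[h - 1, w // 2]
    river = (labels == seed_label) & (mask == 1)
    return river.astype(np.uint8) * 255

In a real system the seed would come from a more reliable cue (for example, tracked water pixels from previous frames), and the model would be updated continuously as the appearance of the river changes.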
Generative 3D Surface and Lighting Models
This project developed a novel localization technique that explicitly incorporates a lighting model and demonstrated the ability to localize an autonomous submarine navigating around underwater structures. Underwater structures such as oil rigs and pipelines are curved and have few distinguishing visual features. Furthermore, because the scene is illuminated only by the vehicle's onboard lights, its appearance changes drastically with the relative pose of the vehicle. The system uses a light model to render realistic synthetic images of the environment, which are compared with the real camera images. Visual localization systems have rarely used a light model to predict the appearance of the scene. By fully accounting for these appearance changes, rather than trying to factor out the lighting as conventional methods do, the system localizes successfully in conditions that are otherwise problematic.
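As a minimal sketch of the render-and-compare idea (not the actual system), the Python below scores candidate poses by rendering synthetic views from the surface and light models and comparing them photometrically with the camera image. The render_view function and the candidate-pose sampling are hypothetical placeholders standing in for the real renderer and search strategy.

import numpy as np

def photometric_error(rendered, observed):
    # Mean squared intensity difference between the synthetic image (which
    # already predicts the onboard lighting) and the real camera image.
    diff = rendered.astype(np.float32) - observed.astype(np.float32)
    return float(np.mean(diff ** 2))

def localize(observed, surface_model, light_model, pose_candidates, render_view):
    # Evaluate each candidate pose by rendering what the camera should see
    # there, given the onboard lighting, and keep the best-matching pose.
    best_pose, best_err = None, np.inf
    for pose in pose_candidates:
        rendered = render_view(surface_model, light_model, pose)
        err = photometric_error(rendered, observed)
        if err < best_err:
            best_pose, best_err = pose, err
    return best_pose, best_err

Because the renderer predicts how the onboard lights illuminate the curved surfaces at each candidate pose, pose-dependent appearance changes become evidence for localization rather than a nuisance to be removed.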