Publication List


Recent Publications (Last two years)


Volker Grabe and Stephen T. Nuske. Long distance visual ground-based signaling for unmanned aerial vehicles. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pages 4976--4983, Daejeon, Korea, October 2016.

Abstract We present a long-range visual signal detection system that is suitable for an unmanned aerial vehicle to find an optical signal released at a desired landing site for the purposes of cargo delivery or rescue situations where radio signals or other communication systems are not available or the wind conditions at the landing site need to be signaled. The challenge here is to have a signal and detection system that works from long range (>1000 m) amongst ground clutter during various seasonal conditions on passive imagery. We use a smoke-grenade as a ground signal, which has the advantageous properties of being easy to carry by ground crews because of its light weight and small size, but when released has a long visual signaling range. We employ a camera system on the UAV with a visual texture feature extraction approach in a machine learning framework to classify image patches as 'signal' or 'background'. We study conventional approaches and develop a visual feature descriptor that can better differentiate the appearance of the visual signal under varying conditions and, when used to train a random-forest classifier, outperforms commonly used feature descriptors. The system was rigorously and quantitatively evaluated on data collected from a camera mounted on a helicopter and flown towards a plume of signal smoke over a variety of seasons, ground conditions, weather conditions, and environments. ...
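
A minimal sketch of the patch-classification idea described above: texture features feed a random-forest classifier that labels patches 'signal' or 'background'. The gradient-orientation histogram here is a generic stand-in for the paper's custom descriptor, and the training data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Coarse texture descriptor: magnitude-weighted gradient-orientation histogram."""
    gy, gx = np.gradient(patch.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=16,
                           range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))       # placeholder training patches
labels = rng.integers(0, 2, size=200)     # 1 = smoke signal, 0 = background
X = np.stack([patch_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```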


Zheng Fang, Shichao Yang, Sezal Jain, Geetesh Dubey, Stephan Roth, Silvio Maeta, Stephen Nuske, Yu Zhang, and Sebastian Scherer. Robust autonomous flight in constrained and visually degraded shipboard environments. Journal of Field Robotics, September 2016.

Abstract This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) for inspection and damage assessment inside a constrained shipboard environment, which might be perilous or inaccessible for humans, especially in emergency scenarios. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways, and small objects protruding from the wall. This causes existing two-dimensional LIDAR, vision, or mechanical bumper-based autonomous navigation solutions to fail. To realize autonomous navigation in such challenging environments, we first propose a robust state estimation method that fuses estimates from a real-time odometry estimation algorithm and a particle filtering localization algorithm with other sensor information in a two-layer fusion framework. Then, an online motion-planning algorithm that combines trajectory optimization with a receding horizon control framework is proposed for fast obstacle avoidance. All the computations are done in real time on the onboard computer. We validate the system by running experiments under different environmental conditions in both laboratory and practical shipboard environments. The field experiment results of over 10 runs show that our vehicle can robustly navigate 20-m-long and only 1-m-wide corridors and go through a very narrow doorway (66-cm width, only 4-cm clearance on each side) autonomously even when it is completely dark or ...
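
The two-layer fusion can be illustrated with a toy filter: a fast odometry stream predicts the state, and slower absolute fixes from the localization layer correct it. This 1D Kalman-style sketch only illustrates the structure; the paper's filter estimates the full vehicle state from more sensors, with invented noise values here.

```python
import numpy as np

Q, R = 0.05, 0.5   # process (odometry) and measurement (localization) variances

def predict(x, P, delta):
    """Integrate one odometry increment (layer 1: fast, drifts)."""
    return x + delta, P + Q

def correct(x, P, z):
    """Absorb one absolute localization fix (layer 2: slow, drift-free)."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0
for delta, fix in [(0.9, None), (1.1, 2.1), (1.0, None), (0.8, 3.8)]:
    x, P = predict(x, P, delta)
    if fix is not None:
        x, P = correct(x, P, fix)
print(f"fused position {x:.2f} m (variance {P:.3f})")
```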


Ryo Sugiura, Shogo Tsuda, Seiji Tamiya, Atsushi Itoh, Kentaro Nishiwaki, Noriyuki Murakami, Yukinori Shibuya, Masayuki Hirafuji, and Stephen Nuske. Field phenotyping system for the assessment of potato late blight resistance using RGB imagery from an unmanned aerial vehicle. Biosystems Engineering, 148:1--10, August 2016.

Abstract In tests for field resistance of potato (Solanum tuberosum L.) to late blight, crop scientists rate the disease severity exclusively using visual examinations of infections on the leaves. However, this visual assessment is generally time-consuming and quite subjective. The objective of this study was to develop a new estimation technique for disease severity in a field using RGB imagery from an unmanned aerial vehicle (UAV). For the assessment of disease resistance of potatoes a test field was designed that consisted of 262 experimental plots on which various cultivars and lines were planted. From mid-July to mid-August in 2012, conventional visual assessment of disease severity was conducted while 11 aerial images of the field were obtained. The disease severity was estimated using an image processing protocol developed in this study. This estimation method was established so that the error of the severity estimated by image processing was minimal when compared with the visual assessment. Comparing the area under the disease progress curves (AUDPCs) calculated from the visual assessment and time series of images, the coefficient of determination was 0.77. A further experiment was conducted to validate the developed method. Eleven images of a field planted the following year were taken, and the resulting coefficient of determination was 0.73. The breeders concluded that these correlations were acceptable and that the UAV ...
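
The AUDPC comparison rests on a simple formula, the trapezoidal integral of severity over assessment dates. A worked example with invented severity scores:

```python
import numpy as np

days = np.array([0, 7, 14, 21, 28])                  # days after first rating
severity = np.array([0.0, 2.0, 10.0, 35.0, 60.0])    # % leaf area infected (invented)

# AUDPC = sum_i (y_i + y_{i+1}) / 2 * (t_{i+1} - t_i)
audpc = np.sum((severity[:-1] + severity[1:]) / 2 * np.diff(days))
print(f"AUDPC = {audpc:.1f} %-days")
```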


Zania Pothen and Stephen Nuske. Automated assessment and mapping of grape quality through image-based color analysis. IFAC-PapersOnLine, 49(16):72--78, August 2016.

Abstract The harvest operation for table-grapes and fresh market horticultural fruits is a large and expensive logistical challenge with the choice of harvest dates and locations playing a crucial role in determining the quality of the yield and in determining the efficiency and productivity gain of the entire operation. The choice of harvest dates and locations, particularly in red varieties, is planned based upon the development of the color of the grape clusters. The traditional process to evaluate the amount of ripe, fully-colored fruit is visual assessment, which is subjective and prone to errors. The number of locations where a grower will evaluate the fruit development is statistically insufficient given the size of commercial vineyards and the variability in the color development. Therefore, an automated approach for evaluating color development is desirable. In this paper, we use a vision-based system to collect images of the fruit zone in a vineyard. We then use color image analysis to grade and predict the color development of grape clusters in the vineyard. Using our approach we are able to generate spatial maps of the vineyard showing the current and predicted distribution of color development. Our imaging measurement system achieves R² correlation values of 0.42-0.56 against human measurements. We are able to predict the color development to within 5% average absolute error of the imaging measurements. ...
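
The grading step can be pictured as a pixel-level color test. The sketch below thresholds fruit-zone pixels in HSV and reports the fraction showing red color development; the hue band and the helper name are illustrative assumptions, not the paper's calibrated procedure.

```python
import cv2
import numpy as np

def red_fraction(bgr: np.ndarray) -> float:
    """Fraction of pixels whose hue falls in a (hypothetical) ripe-red band."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lo = cv2.inRange(hsv, (0, 60, 40), (10, 255, 255))     # red wraps around hue 0
    hi = cv2.inRange(hsv, (170, 60, 40), (179, 255, 255))
    red = cv2.bitwise_or(lo, hi)
    return float(np.count_nonzero(red)) / red.size

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :40] = (40, 40, 200)                                # a red region (BGR)
print(f"colored fraction: {red_fraction(img):.2f}")        # 0.40
```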


Omeed Mirbod, Luke Yoder, and Stephen Nuske. Automated measurement of berry size in images. IFAC-PapersOnLine, 49(16):79--84, August 2016.

Abstract Knowledge of berry size in grape vineyards can be a great asset for growers to help manage their crop whether for yield assessment or grape quality control. Having the ability to size berries of an entire field would allow growers to effectively monitor their vineyards at various stages of the growing season. Manual methods for determining berry size distribution of an entire field can be time consuming and rely on small sample sets which can lead to inaccuracies. This paper introduces an automated imaging system that measures diameter of grapes for every vine in an entire vineyard and generates a comprehensive map showing berry size variability which until now has not been available to growers. Believed to be the first example of mapping berry size across commercial vineyard blocks, this system uses computer vision techniques to locate and size the berries, identifying submillimeter berry diameter differences. Maps of variability in berry size are shown to correlate with canopy size and yield. Diameter estimations are found to measure within 6% of manual measurements and a strong correlation is seen between estimated berry sizes and actual berry weights with R² = 0.96.
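
Converting an apparent berry diameter in pixels to millimeters follows the pinhole-camera relation size = pixels × range / focal_length. A back-of-envelope sketch, with the camera-to-fruit range and focal length assumed rather than taken from the paper:

```python
def berry_diameter_mm(diameter_px: float, range_m: float, focal_px: float) -> float:
    """Pinhole model: physical size = pixel size * range / focal length."""
    return diameter_px * (range_m * 1000.0) / focal_px

# e.g. a 14-pixel berry imaged from 0.8 m with a 900-pixel focal length
print(f"{berry_diameter_mm(14, 0.8, 900):.1f} mm")   # ~12.4 mm
```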


Z. Pothen and S. Nuske. Texture-based fruit detection via images using the smooth patterns on the fruit. In Proceedings of the IEEE International Conference on Robotics and Automation, May 2016.

Abstract This paper describes a keypoint detection algorithm to accurately detect round fruits in high resolution imagery. The significant challenge associated with round fruits such as grapes and apples is that the surface is smooth and lacks definition and contrasting features, the contours of the fruit may be partially occluded, and the color of the fruit often blends with background foliage. We propose a fruit detection algorithm that utilizes the gradual variation of intensity and gradient orientation on the surface of the fruit. Candidate fruit locations, or "seed points", are tested for both monotonically decreasing intensity and gradient orientation profiles. Candidate fruit locations that pass the initial filter are classified using a modified histogram of oriented gradients combined with a pairwise intensity comparison texture descriptor and a random forest classifier. We analyse the performance of the fruit detection algorithm on image datasets of grapes and apples using human labeled images as ground truth. Our method to detect candidate fruit locations is scale invariant, robust to partial occlusions and more accurate than existing methods. We achieve overall F1 accuracy scores of 0.82 for grapes and 0.80 for apples. We demonstrate our method is more accurate than existing methods.
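
The initial candidate filter can be sketched as a monotonicity test on intensity profiles radiating from a seed point, on the assumption that a smooth round fruit shades off from a bright center. The tolerances and synthetic test image below are invented for illustration:

```python
import numpy as np

def is_candidate(gray: np.ndarray, r: int, c: int,
                 radius: int = 8, slack: float = 2.0) -> bool:
    """Keep (r, c) only if intensity never rises along rays leaving it."""
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        profile = np.array([gray[r + k * dr, c + k * dc] for k in range(radius)],
                           dtype=float)
        if np.any(np.diff(profile) > slack):   # intensity rose along this ray
            return False
    return True

# Synthetic radially shaded 'berry' centered at (20, 20)
yy, xx = np.mgrid[0:41, 0:41]
gray = 255.0 - 4.0 * np.hypot(yy - 20, xx - 20)
print(is_candidate(gray, 20, 20))   # True at the fruit center
print(is_candidate(gray, 5, 5))     # False away from the center
```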


Stephen Nuske, Sanjiban Choudhury, Sezal Jain, Andrew Chambers, Luke Yoder, Sebastian Scherer, Lyle Chamberlain, Hugh Cover, and Sanjiv Singh. Autonomous exploration and motion planning for a UAV navigating rivers. Journal of Field Robotics, June 2015.

Abstract Mapping a river's geometry provides valuable information to help understand the topology and health of an environment and deduce other attributes such as which types of surface vessels could traverse the river. While many rivers can be mapped from satellite imagery, smaller rivers that pass through dense vegetation are occluded. We develop a micro air vehicle (MAV) that operates beneath the tree line, detects and maps the river, and plans paths around three-dimensional (3D) obstacles (such as overhanging tree branches) to navigate rivers purely with onboard sensing, with no GPS and no prior map. We present the two enabling algorithms for exploration and for 3D motion planning. We extract high-level goal-points using a novel exploration algorithm that uses multiple layers of information to maximize the length of the river that is explored during a mission. We also present an efficient modification to the SPARTAN (Sparse Tangential Network) algorithm called SPARTAN-lite, which exploits geodesic properties on smooth manifolds of a tangential surface around obstacles to plan rapidly through free space. Using limited onboard resources, the exploration and planning algorithms together compute trajectories through complex unstructured and unknown terrain, a capability rarely demonstrated by flying vehicles operating over rivers or over ground. We evaluate our approach against commonly employed algorithms and compare guidance ...


Zheng Fang, Shichao Yang, Sezal Jain, Geetesh Dubey, Silvio Mano Maeta, Stephan Roth, Sebastian Scherer, Yu Zhang, and Stephen T. Nuske. Robust autonomous flight in constrained and visually degraded environments. In Field and Service Robotics, June 2015.

Abstract This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) inside a constrained shipboard environment for inspection and damage assessment, which might be perilous or inaccessible for humans especially in emergency scenarios. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways and small objects protruding from the wall. This makes existing 2D LIDAR, vision or mechanical bumper-based autonomous navigation solutions fail. To realize autonomous navigation in such challenging environments, we propose a fast and robust state estimation algorithm that fuses estimates from a direct depth odometry method and a Monte Carlo localization algorithm with other sensor information in an EKF framework. Then, an online motion planning algorithm that combines trajectory optimization with receding horizon control framework is proposed for fast obstacle avoidance. All the computations are done in real-time onboard our customized MAV platform. We validate the system by running experiments in different environmental conditions. The results of over 10 runs show that our vehicle robustly navigates 20m long corridors only 1m wide and goes through a very narrow doorway (66cm width, only 4cm clearance on each side) completely autonomously even when it is completely dark or full of light smoke.

Past Publications (Two years ago and earlier)


Shichao Yang, Zheng Fang, Sezal Jain, Geetesh Dubey, Silvio Mano Maeta, Stephan Roth, Sebastian Scherer, Yu Zhang, and Stephen T. Nuske. High-precision autonomous flight in constrained shipboard environments. Technical Report CMU-RI-TR-15-06, Robotics Institute, Pittsburgh, PA, February 2015.

Abstract This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) inside of a constrained shipboard environment to aid in fire control, which might be perilous or inaccessible for humans. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways and small objects protruding from the wall, which makes existing 2D LIDAR, vision or mechanical bumper-based autonomous navigation solutions fail. To realize autonomous navigation in such challenging environments, we first propose a fast and robust state estimation algorithm that fuses estimates from a direct depth odometry method and a Monte Carlo localization algorithm with other sensor information in a two-level fusion framework. Then, an online motion planning algorithm that combines trajectory optimization with receding horizon control is proposed for fast obstacle avoidance. All the computations are done in real-time onboard our customized MAV platform. We validate the system by running experiments in different environmental conditions. The results of over 10 runs show that our vehicle robustly navigates 20m long corridors only 1m wide and goes through a very narrow doorway (only 4cm clearance on each side) completely autonomously even when it is completely dark or full of light smoke.


Stephen Nuske, Kyle Wilshusen, Supreeth Achar, Luke Yoder, Srinivasa Narasimhan, and Sanjiv Singh. Automated visual yield estimation in vineyards. Journal of Field Robotics, 31(5):837--860, September 2014.

Abstract We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that ...
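
The step relating image measurements to yield can be illustrated as a least-squares calibration: fit a line from per-vine berry counts to harvest weight on calibration vines, then predict the rest. The numbers below are invented, not the paper's data:

```python
import numpy as np

counts = np.array([120, 340, 560, 800, 1020])   # berries detected per vine
yields = np.array([1.1, 2.9, 4.8, 7.2, 9.0])    # harvest weight per vine (kg)
slope, intercept = np.polyfit(counts, yields, 1)

new_counts = np.array([450, 700])
print(slope * new_counts + intercept)           # predicted kg for unseen vines
```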


Andrew D. Chambers, Sebastian Scherer, Luke Yoder, Sezal Jain, Stephen T. Nuske, and Sanjiv Singh. Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments. In Proceedings of the American Control Conference, Portland, OR, June 2014.

Abstract State estimation for Micro Air Vehicles (MAVs) is challenging because sensing instrumentation carried on-board is severely limited by weight and power constraints. In addition, their use close to and inside structures and vegetation means that GPS signals can be degraded or altogether absent. Here we present a navigation system suited for use on MAVs that seamlessly fuses any combination of GPS, visual odometry, inertial measurements, and/or barometric pressure. We focus on robustness against real-world conditions and evaluate performance in challenging field experiments. Results demonstrate that the proposed approach is effective at providing a consistent state estimate even during multiple sensor failures and can be used for mapping, planning, and control.


Sezal Jain, Stephen Nuske, Andrew Chambers, Luke Yoder, Hugh Cover, Lyle Chamberlain, Sebastian Scherer, and Sanjiv Singh. Autonomous river exploration. In Proceedings of International Conference on Field and Service Robotics, December 2013.

Abstract Mapping a river's course and width provides valuable information to help understand the ecology, topology and health of a particular environment. Such maps can also be useful to determine whether specific surface vessels can traverse the rivers. While rivers can be mapped from satellite imagery, the presence of vegetation, sometimes so thick that the canopy completely occludes the river, complicates the process of mapping. Here we propose the use of a micro air vehicle flying under the canopy to create accurate maps of the environment. We study and present a system that can autonomously explore rivers without any prior information, and demonstrate an algorithm that can guide the vehicle based upon local sensors mounted on board the flying vehicle that can perceive the river, bank and obstacles. Our field experiments demonstrate what we believe is the first autonomous exploration of rivers by an autonomous vehicle. We show the 3D maps produced by our system over runs of 100-450 meters in length and compare guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers.


Supreeth Achar, Stephen T. Nuske, and Srinivasa G. Narasimhan. Compensating for motion during direct-global separation. In The IEEE International Conference on Computer Vision (ICCV), pages 1481--1488, December 2013.

Abstract Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register frames in a video sequence to each other in the presence of time varying, high frequency active illumination patterns. We compare our motion compensated method to alternatives such as single shot separation and frame interleaving as well as ground truth. We present results on challenging video sequences that include various types of motions and deformations in scenes that contain complex materials like fabric, skin, leaves and wax.


S. Arora, S. Jain, S. Scherer, S. Nuske, L. Chamberlain, and S. Singh. Infrastructure-free shipdeck tracking for autonomous landing. In IEEE International Conference on Robotics and Automation, May 2013.

Abstract Shipdeck landing is one of the most challenging tasks for a rotorcraft. Current autonomous rotorcraft use shipdeck-mounted transponders to measure the relative pose of the vehicle to the landing pad. This tracking system is not only expensive but renders an unequipped ship unlandable. We address the challenge of tracking the shipdeck without additional infrastructure on the deck. We present two methods based on video and lidar that are able to track the shipdeck starting at a considerable distance from the ship. This redundant sensor design enables us to have two independent tracking systems. We show the results of the tracking algorithms in three different environments: (1) field testing on actual helicopter flights, (2) in simulation with a moving shipdeck for lidar-based tracking, and (3) in the laboratory using an occluded and moving scaled model of a landing deck for camera-based tracking. The complementary modalities allow shipdeck tracking under varying conditions.


J.A. Taylor, S. Nuske, S. Singh, J.S. Hoffman, and T.R. Bates. Temporal evolution of within-season vineyard canopy response from a proximal sensing system. In John V. Stafford, editor, Precision agriculture ’13, pages 659--665. Wageningen Academic Publishers, 2013.

Abstract A weekly survey of canopy NDVI with a proximal-mounted canopy sensor was undertaken in a cool-climate juice grape vineyard. Sensing was performed at different positions in the canopy. Sensing around the top-wire led to saturation problems, however sensing in the growing region of the canopy led to consistently non-saturated results throughout the season. With this directed sensing, a spatial pattern in NDVI 2-4 weeks after flowering could be generated that approximated the spatial pattern in NDVI at the end of the season. This is earlier than has been previously reported and may allow for proactive within-season canopy management.
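
The canopy index surveyed weekly in this study is NDVI, computed from near-infrared and red reflectance as NDVI = (NIR - Red) / (NIR + Red). A tiny worked example with invented reflectance values:

```python
import numpy as np

nir = np.array([0.52, 0.60, 0.48])   # near-infrared reflectance (invented)
red = np.array([0.08, 0.06, 0.20])   # red reflectance (invented)

ndvi = (nir - red) / (nir + red)     # approaches 1 for dense, healthy canopy
print(ndvi)
```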


S. Nuske, K. Gupta, S. Narasimhan, and S. Singh. Modeling and calibrating visual yield estimates in vineyards. In Proceedings of International Conference on Field and Service Robotics, July 2012.

Abstract Accurate yield estimates are of great value to vineyard growers to make informed management decisions such as crop thinning, shoot thinning, irrigation and nutrient delivery, preparing for harvest and planning for market. Current methods are labor intensive because they involve destructive hand sampling and are practically too sparse to capture spatial variability in large vineyard blocks. Here we report on an approach to predict vineyard yield automatically and non-destructively using images collected from vehicles driving along vineyard rows. Computer vision algorithms are applied to detect grape berries in images that have been registered together to generate high-resolution estimates. We propose an underlying model relating image measurements to harvest yield and study practical approaches to calibrate the two. We report on results on datasets of several hundred vines collected both early and in the middle of the growing season. We find that it is possible to estimate yield to within 4% using calibration data from prior harvest data and 3% using calibration data from destructive hand samples at the time of imaging.


Q. Wang, S. Nuske, M. Bergerman, and S. Singh. Automated crop yield estimation for apple orchards. In Proceedings of International Symposium of Experimental Robotics, June 2012.

Abstract Crop yield estimation is an important task in apple orchard management. The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we develop and deploy a computer vision system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans both sides of each tree row in orchards. A computer vision algorithm is developed to detect and register apples from acquired sequential images, and then generate apple counts as crop yield estimation. We deployed the yield estimation system in Washington state in September 2011. The results show that the developed system works well with both red and green apples in the tall-spindle planting system. The errors of crop yield estimation are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.


S. Scherer, J. Rehder, S. Achar, H. Cover, A. Chambers, S. Nuske, and S. Singh. River mapping from a flying robot: state estimation, river detection, and obstacle mapping. Autonomous Robots, 32(5), May 2012.

Abstract Accurately mapping the course and vegetation along a river is challenging, since overhanging trees block GPS at ground level and occlude the shore line when viewed from higher altitudes. We present a multimodal perception system for the active exploration and mapping of a river from a small rotorcraft. We describe three key components that use computer vision, laser scanning, inertial sensing and intermittent GPS to estimate the motion of the rotorcraft, detect the river without a prior map, and create a 3D map of the riverine environment. Our hardware and software approach is cognizant of the need to perform multi-kilometer missions below tree level with size, weight and power constraints. We present experimental results along a 2 km loop of river using a surrogate perception payload. Overall we can build an accurate 3D obstacle map and a 2D map of the river course and width from light onboard sensing.


J. Rehder, K. Gupta, S. Nuske, and S. Singh. Global pose estimation with limited GPS and long range visual odometry. In Proceedings of the 2012 IEEE/RSJ International Conference on Robotics and Automation, May 2012.

Abstract Here we present an approach to estimate the global pose of a vehicle in the face of two distinct problems: first, when using stereo visual odometry for relative motion estimation, a lack of features at close range causes a bias in the motion estimate. The other challenge is localizing in the global coordinate frame using very infrequent GPS measurements. Solving these problems, we demonstrate a method to estimate and correct for the bias in visual odometry and a sensor fusion algorithm capable of exploiting sparse global measurements. Our graph-based state estimation framework is capable of inferring global orientation using a unified representation of local and global measurements and recovers from inaccurate initial estimates of the state, as intermittently available GPS information may delay the observability of the entire state. We also demonstrate a reduction of the complexity of the problem to achieve real-time throughput. In our experiments, we show in an outdoor dataset with distant features where our bias corrected visual odometry solution makes a five-fold improvement in the accuracy of the estimated translation compared to a standard approach. For a traverse of 2km we demonstrate the capabilities of our graph-based state estimation approach to successfully infer global orientation with as few as 6 GPS measurements and with two-fold improvement in mean position error using the corrected visual odometry.


S. Nuske, S. Achar, K. Gupta, S. Narasimhan, and S. Singh. Visual yield estimation in vineyards: Experiments with different varietals and calibration procedures. Technical Report CMU-RI-TR-11-39, Robotics Institute, Carnegie Mellon University, USA, December 2011.

Abstract A crucial practice for vineyard managers is to control the amount of fruit hanging on their vines to reach yield and quality goals. Current vine manipulation methods to adjust level of fruit are inaccurate and ineffective because they are often not performed according to quantitative yield information. Even when yield predictions are available they are inaccurate and spatially coarse because the traditional measurement practice is to use labor intensive, destructive, hand measurements that are too sparse to adequately measure spatial variation in yield. We present an approach to predict the vineyard yield automatically and non-destructively with cameras. The approach uses camera images of the vines collected from farm vehicles driving along the vineyard rows. Computer vision algorithms are applied to the images to detect and count the grape berries. Shape and texture cues are used to detect berries even when they are of similar color to the vine leaves. Images are automatically registered together and the vehicle position along the row is tracked to generate high resolution yield predictions. Results are presented from four different vineyards, including wine and table-grape varieties. The harvest yield was collected from 948 individual vines, totaling approximately 2.5km of vines, and used to validate the predictions we generate automatically from the camera images. We present different calibration approaches to convert our ...


S. Nuske, S. Achar, T. Bates, S. Narasimhan, and S. Singh. Yield estimation in vineyards by visual grape detection. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2011.

Abstract The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse – during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.


B. Grocholsky, M. Dille, and S. Nuske. Efficient target geolocation by highly uncertain small air vehicles. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2011.

Abstract Geolocation of a ground object or target of interest from live video is a common task required of small and micro unmanned aerial vehicles (SUAVs and MAVs) in surveillance and rescue applications. However, such vehicles commonly carry low-cost and light-weight sensors providing poor bandwidth and accuracy whose contribution to observations is nonlinear, resulting in poor geolocation performance by standard techniques. This paper proposes the application of an efficient over-parameterized state representation to the problem of geolocation that is able to handle large, time-varying, and non-Gaussian sensor error to produce better geolocation estimates than typical approaches and which provides computing and communication benefits in applications such as predictive control and distributed collaboration. We evaluate our filter on real flight data, demonstrating its ability to efficiently produce a solution with tight confidence bounds given highly uncertain data.
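
One way to picture handling non-Gaussian sensor error is a grid over the ground plane that accumulates the log-likelihood of each bearing observation. The sketch below is a generic histogram filter with invented geometry and noise, not the paper's over-parameterized representation:

```python
import numpy as np

cell_n, cell_e = np.mgrid[0:100, 0:100]      # 1 m ground cells, north x east

def update(grid, uav_n, uav_e, bearing, sigma=np.deg2rad(8.0)):
    """Add the log-likelihood of one bearing observation (radians from north)."""
    pred = np.arctan2(cell_e - uav_e, cell_n - uav_n)
    err = np.angle(np.exp(1j * (pred - bearing)))   # wrap to [-pi, pi]
    return grid - 0.5 * (err / sigma) ** 2

# Two bearings to a target near (60, 70) from different UAV positions
grid = np.zeros((100, 100))
grid = update(grid, 10, 10, np.arctan2(70 - 10, 60 - 10))
grid = update(grid, 10, 90, np.arctan2(70 - 90, 60 - 10))
print(np.unravel_index(np.argmax(grid), grid.shape))   # ~ (60, 70)
```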


A. Chambers, S. Achar, S. Nuske, J. Rehder, B. Kitt, L. Chamberlain, J. Haines, S. Scherer, and S. Singh. Perception for a river mapping robot. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2011.

Abstract Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2km loop of river using a surrogate system.


M. Dille, B. Grocholsky, S. Nuske, M. Moseley, and S. Singh. Air-ground collaborative surveillance with human-portable hardware. In AUVSI's Unmanned Systems North America, August 2011.

Abstract Coordination of unmanned aerial and ground vehicles (UAVs and UGVs) is immensely useful in a variety of surveillance and rescue applications, as the vehicles’ complementary strengths provide operating teams with enhanced mission capabilities. While many of today’s systems require independent control stations, necessitating arduous manual coordination between multiple operators, this paper presents a multi-robot collaboration system, jointly developed by iRobot Corporation and Carnegie Mellon University, which features a unified interface for controlling multiple unmanned vehicles. Semi-autonomous subtasks can be directly executed through this interface, including: single-click automatic visual target tracking, waypoint sequences, area search, and geo-location of tracked points of interest. Demonstrations of these capabilities on widely-deployed commercial unmanned vehicles are presented, including the use of UAVs as a communication relay for multi-kilometer, non-line-of-sight operation of UGVs.


B. Grocholsky, S. Nuske, M. Aasted, S. Achar, and T. Bates. A camera and laser system for automatic vine balance assessment. In American Society of Agricultural and Biological Engineers (ASABE) Annual International Meeting, July 2011.

Abstract Canopy performance, the balance of crop weight and canopy volume, is a key indicator of value in viticultural production. Timely and dense measurements offer the potential to inform management practices and deliver significant improvement in production efficiency. Traditional measurement practices are labor intensive and provide sparse data that may not reflect vineyard variability. We propose and demonstrate a combination of visual and laser sensing mounted on vineyard machinery that provides dense maps of canopy performance indicators. Current industry practice for measuring grape crop weight involves manually counting clusters on a vine with destructive sampling to find the average weight of a single cluster. This paper presents an alternative utilizing vision and laser sensing. We demonstrate use of machine vision to automatically estimate the weight of the crop growing on a vine. Validation of the algorithm was performed by comparing weight estimates generated by the system to ground truth measurements collected by hand. Machine mounted laser scanners provide direct measurement of canopy shape and volume. Validation of the canopy volume measurement is provided by correlation with manually collected dormant vine pruning weight. Attaching these laser and camera sensors to vineyard machinery will allow crop weight and canopy volume measurements to be collected on a large scale quickly and economically. ...


S. Achar, B. Sankaran, S. Nuske, S. Scherer, and S. Singh. Self-supervised segmentation of river scenes. In IEEE International Conference on Robotics and Automation, May 2011.

Abstract Here we consider the problem of automatically segmenting images taken from a boat or low-flying aircraft. Such a capability is important for autonomous river following and mapping. The need for accurate segmentation in a wide variety of riverine environments challenges the state of the art vision-based methods that have been used in more structured environments such as roads and highways. Apart from the lack of structure, the principal difficulty is the large spatial and temporal variations in the appearance of water in the presence of nearby vegetation and with reflections from the sky. We propose a self-supervised method to segment images into ‘sky’, ‘river’ and ‘shore’ (vegetation + structures) regions. Our approach uses assumptions about river scene structure to learn appearance models based on features like color, texture and image location which are used to segment the image. We validated our algorithm by testing on four datasets captured under varying conditions on different rivers. Our self-supervised algorithm had higher accuracy rates than a supervised alternative, often significantly more accurate, and does not need to be retrained to work under different conditions.
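
The self-supervision idea can be sketched as follows: take training pixels from regions that scene structure makes near-certain (top rows as sky, bottom-center as river), fit a Gaussian color model per class, and label the rest by likelihood. The seed regions and synthetic image below are assumptions for illustration, not the paper's feature set:

```python
import numpy as np

def fit_gaussian(pixels):                       # pixels: (N, 3) RGB
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels.T) + 1e-3 * np.eye(3)   # regularized covariance
    return mu, np.linalg.inv(cov)

def log_lik(img, mu, icov):
    d = img.reshape(-1, 3) - mu
    return -0.5 * np.einsum('ni,ij,nj->n', d, icov, d)

rng = np.random.default_rng(1)
img = rng.random((120, 160, 3))
img[:30] += (0.2, 0.3, 0.9)                     # blueish 'sky' band
img[90:, 40:120] += (0.1, 0.5, 0.4)             # greenish 'river' reflections

sky = fit_gaussian(img[:20].reshape(-1, 3))             # structural seed regions
river = fit_gaussian(img[100:, 60:100].reshape(-1, 3))
labels = (log_lik(img, *river) > log_lik(img, *sky)).reshape(img.shape[:2])
print(labels.mean())    # fraction of pixels labelled 'river'
```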


S. Nuske, M. Dille, B. Grocholsky, and S. Singh. Representing substantial heading uncertainty for accurate geolocation by small UAVs. In American Institute of Aeronautics and Astronautics (AIAA) Guidance, Navigation, and Control Conference, August 2010.

Abstract Geolocation of a ground object of interest from live video is a common task required of small and micro unmanned aerial vehicles in surveillance and rescue applications. The small low-cost sensors these vehicles carry provide low accuracy when mapping an image coordinate to a world location. Frequently, a primary source of such inaccuracy is error in vehicle heading. Filtering methods that inadequately represent the resulting nonlinear uncertainty distributions of geolocation measurements will produce inconsistent and inaccurate estimates. This paper presents a geolocation filter with a discretized solution space that correctly handles sampled nonlinear observations. The filter achieves higher accuracy when compared to alternative linearized methods. Assessment of the improved solution accuracy for stationary objects is provided through flight experiments using a commercial human-portable fixed-wing UAV system.


P. Borges, R. Zlot, M. Bosse, S. Nuske, and A. Tews. Vision-based localization using an edge map extracted from 3D laser range data. In IEEE International Conference on Robotics and Automation, pages 4902--4909, May 2010.

Abstract Reliable real-time localization is a key component of autonomous industrial vehicle systems. We consider the problem of using on-board vision to determine a vehicle's pose in a known, but non-static, environment. While feasible technologies exist for vehicle localization, many are not suited for industrial settings where the vehicle must operate dependably both indoors and outdoors and in a range of lighting conditions. We extend the capabilities of an existing vision-based localization system, in a continued effort to improve the robustness, reliability and utility of an automated industrial vehicle system. The vehicle pose is estimated by comparing an edge-filtered version of a video stream to an available 3D edge map of the site. We enhance the previous system by additionally filtering the camera input for straight lines using a Hough transform, observing that the 3D environment map contains only linear features. In addition, we present an automated approach for generating 3D edge maps from laser point clouds, removing the need for manual map surveying and also reducing the time for map generation down from days to minutes. We present extensive localization results in multiple lighting conditions comparing the system with and without the proposed enhancements.
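
The added Hough-transform stage can be sketched with OpenCV: detect edges, keep only straight segments (since the 3D site map contains only linear features), and pass that mask to matching. The thresholds and the synthetic frame below are illustrative assumptions:

```python
import cv2
import numpy as np

# Synthetic frame standing in for a camera view of building edges
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(frame, (60, 60), (260, 180), 255, 2)

edges = cv2.Canny(frame, 80, 160)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)

line_mask = np.zeros_like(edges)                 # straight edges only
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(line_mask, (x1, y1), (x2, y2), 255, 1)
print(0 if lines is None else len(lines))        # segments kept for pose matching
```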


Stephen Nuske, Jonathan Roberts, and Gordon Wyeth. Robust outdoor visual localization using a three-dimensional-edge map. Journal of Field Robotics, 26:728--756, September 2009.

Abstract Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D)-edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D-edge map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the abilities of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system ...


Stephen Nuske. Visual Localisation in Dynamic Non-uniform Lighting. PhD thesis, School of Information Technology and Electrical Engineering, University of Queensland, July 2009.

Abstract Dynamic non-uniform lighting conditions, prevalent in many field robot applications, cause drastic changes in the visual information captured by camera images, resulting in major difficulties for mobile robots attempting to localise visually. Most current solutions to the problem rely on extracting visual information from images that is decoupled from the effects of lighting. This is not possible in many situations. Chrominance information is often cited as having some invariance to lighting changes, which is confirmed by experiments in this thesis. However, in the bland application environments investigated, chrominance is not a pertinent metric, indicating that chrominance is not the complete solution to the lighting problem. Descriptions of the intensity gradient are also cited as having robustness to lighting changes. Many descriptions of image-point features are based on the intensity gradient and are commonly used as a basis for visual localisation. However, the non-uniform effects of lighting - shadows and shading - are tangled into the intensity gradient, making these descriptions sensitive to non-uniform lighting changes. Experiments are presented which reveal that image-point features recorded at one time of the day cannot be reliably matched with images captured only one or two hours later, after typical changes in sunlight. It appears that autonomously building visual maps which permit geometric localisation in ...


S. Nuske, J. Roberts, D. Prasser, and G. Wyeth. Experiments in visual localisation around underwater structures. In Proceedings of International Conference on Field and Service Robotics, July 2009.

Abstract Localisation of an AUV is challenging and a range of inspection applications require relatively accurate positioning information with respect to submerged structures. We have developed a vision based localisation method that uses a 3D model of the structure to be inspected. The system comprises a monocular vision system, a spotlight and a low-cost IMU. Previous methods that attempt to solve the problem in a similar way try to factor out the effects of lighting. Effects, such as shading on curved surfaces or specular reflections, are heavily dependent on the light direction and are difficult to deal with when using existing techniques. The novelty of our method is that we explicitly model the light source. Results are shown of an implementation on a small AUV in clear water at night.


J. Roberts, A. Tews, and S. Nuske. Redundant sensing for localisation in outdoor industrial environments. In Proceedings of the 6th IARP/IEEE-RAS/EURON Workshop on Technical Challenges for Dependable Robots in Human Environments, May 2008.

Abstract We describe our experiences with automating a large fork-lift type vehicle that operates outdoors and in all weather. In particular, we focus on the use of independent and robust localisation systems for reliable navigation around the worksite. Two localisation systems are briefly described. The first is based on laser range finders and retro-reflective beacons, and the second uses a two camera vision system to estimate the vehicle’s pose relative to a known model of the surrounding buildings. We show the results from an experiment where the 20 tonne experimental vehicle, an autonomous Hot Metal Carrier, was conducting autonomous operations and one of the localisation systems was deliberately made to fail.


S. Nuske, J. Roberts, and G. Wyeth. Visual localisation in outdoor industrial building environments. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 544--550, May 2008.

Abstract This paper presents a vision-based method of vehicle localisation that has been developed and tested on a large forklift type robotic vehicle which operates in a mainly outdoor industrial setting. The localiser uses a sparse 3D-edge-map of the environment and a particle filter to estimate the pose of the vehicle. The vehicle operates in dynamic and non-uniform outdoor lighting conditions, an issue that is addressed by using knowledge of the scene to intelligently adjust the camera exposure and hence improve the quality of the information in the image. Results from the industrial vehicle are shown and compared to another laser-based localiser which acts as a ground truth. An improved likelihood metric, using per-edge calculation, is presented and has shown to be 40% more accurate in estimating rotation. Visual localization results from the vehicle driving an arbitrary 1.5 km path during a bright sunny period show an average position error of 0.44 m and rotation error of 0.62°.


S. Nuske and M. Yguel. Detecting moving pedestrians and vehicles in fluctuating lighting conditions. In Proceedings of the Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association Inc., December 2007.

Abstract Detecting moving pedestrians and vehicles with foreground segmentation algorithms is problematic during fluctuating lighting conditions. Edge-based approaches are more robust to lighting than the conventional intensity-based ones. The issue with edge-based approaches though is segmenting the internal foreground areas. In this work a strategy is developed to detect complete foreground areas. Firstly, edge-extraction is performed at multiple scales which increases the initial area detected. To complete the detection of object areas, edge motion-history-images are introduced. The final segmentation is achieved with a region growing algorithm in the edge-motion-history-image. Examples are shown of the successful extraction of foreground objects through changing lighting conditions.
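
An edge motion-history image can be sketched as a per-pixel timestamp of the most recent edge motion, with stale entries decaying to zero; region growing over this surface then recovers complete object areas. A minimal version with an invented decay window:

```python
import numpy as np

DURATION = 1.0   # seconds of motion history to keep (illustrative)

def update_mhi(mhi, edge_motion_mask, t):
    """Stamp fresh edge motion with time t and forget stale motion."""
    mhi[edge_motion_mask] = t
    mhi[mhi < t - DURATION] = 0.0
    return mhi

mhi = np.zeros((240, 320))
for t, col in [(0.2, 100), (0.4, 110), (0.6, 120)]:   # an edge sweeping right
    mask = np.zeros_like(mhi, dtype=bool)
    mask[80:160, col:col + 2] = True
    mhi = update_mhi(mhi, mask, t)
print(np.count_nonzero(mhi))   # the recent motion trail is retained
```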


S. Nuske, J. Roberts, and G. Wyeth. Extending the dynamic range of robotic vision. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 162--167, May 2006.

Abstract Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extend the effective dynamic range of a camera, achieved by changing the exposure level of the camera in real-time to form a sequence of images which collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are improved by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in wide radiance range scenes.
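
The merging step can be approximated with off-the-shelf exposure fusion once the sequence is registered. The sketch below uses OpenCV's Mertens fusion on synthetic bracketed frames as a stand-in for the paper's color-and-contour merging:

```python
import cv2
import numpy as np

# Three synthetic 'exposures' of one scene: under-, mid-, and over-exposed
base = np.tile(np.linspace(0, 255, 320, dtype=np.uint8), (240, 1))
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
exposures = [cv2.convertScaleAbs(base, alpha=a) for a in (0.3, 1.0, 2.5)]

fused = cv2.createMergeMertens().process(exposures)   # float result, ~[0, 1]
fused8 = np.clip(fused * 255, 0, 255).astype(np.uint8)
print(fused8.shape)   # single image spanning the full radiance range
```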


D. Ball, G. Wyeth, and S. Nuske. A global vision system for a robot soccer team. In Proceedings of the Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association Inc., December 2004.

Abstract This paper describes the real time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition and perspective and lens correction. It has a fast ‘paint’ colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains for robot regions across the field to achieve colour uniformity. Camera geometry calibration is performed automatically from selection of keypoints on the field. The system achieves a position accuracy of better than 15mm over a 4m × 5.5m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60Hz on a 2.0GHz Pentium 4 microprocessor.


Stephen Nuske. Vision system for the 2004 RoboRoos. Bachelor of Engineering thesis, School of Information Technology and Electrical Engineering, University of Queensland, October 2004.

Abstract This thesis describes the modification of a real time vision system to handle non-uniform and dynamic lighting conditions. This work was carried out in the highly dynamic RoboCup small size robot soccer domain using the University of Queensland’s RoboRoos team. In 2004 there were changes to the rules, providing significant additional difficulty for the vision systems of competing teams. The specific challenges that faced the RoboRoos vision system in 2004 were natural lighting conditions and an increase in the size of the field. The removal of specific field lights creates non-uniform and dynamic lighting conditions. Non-uniform lighting such as shadows cast on the field creates the problem that the same object will appear different at various locations. The effects of non-uniform lighting conditions were reversed by locally colour-normalising the regions which hold potential objects. Natural lighting conditions also produce dynamic lighting, where image quality fluctuates over time. To reverse this effect incoming images are normalised globally to maintain regular luminance levels. The field approximately doubled in size for 2004 which effectively required the image resolution to double, making it difficult for the vision system to maintain a high frame rate. The processing pipeline of the vision system was optimised to improve the real-time reliability and speed of the system. ...