RAPID: Aerial Robots for Remote Autonomous Exploration and Mapping

Specific Objectives

We are interested in exploring the possibility of leveraging an autonomous quadrotor in earthquake-damaged environments through field experiments that focus on cooperative mapping using both ground and aerial robots. Aerial robots offer several advantages over ground robots, including the ability to maneuver through complex three-dimensional (3D) environments and to gather data from vantage points inaccessible to ground robots.

We consider an earthquake-damaged building with multiple floors that are generally accessible to ground robots. However, various locations in the environment are inaccessible to the ground robots due to debris or clutter. The goal is the generation of 3D maps that capture the layout of the environment and provide insight into the degree of damage inside the building.

Significant Results

In collaboration with Tohoku University in Japan, we designed a field experiment that highlighted the need for heterogeneity. Ground robots do not have the same payload limitations as quadrotors; they are therefore able to carry larger sensor payloads, maintain tethered communication links, and operate for longer periods of time. However, quadrotors provide mobility and observational capabilities unavailable to ground robots. Hence, to build a rich 3D representation of the environment, we leveraged the advantages of each platform, and in doing so mitigated the limitations of both.

During the experiment, we used three different research platforms (Fig. 1). The first platform is a ground robot equipped with an onboard sensing suite that enables the generation of dense 3D maps. The vehicle is teleoperated through the multi-floor environment while simultaneously collecting sensor data. After the operators identify locations in the environment that are inaccessible to this ground platform, a second ground platform, equipped with an automated helipad, is teleoperated to these locations. It carries a quadrotor robot with onboard sensing that is able to remotely open and close the helipad and to autonomously take off from and land on it.

 
Fig. 1: The three robots used in the experiments: the Kenaf (a) and Quince (b) tracked ground robots and the Pelican quadrotor.
Here the Quince transports the Pelican between discrete sites of interest via the landing pad.

The aerial robot is equipped with a laser scanner and onboard computing for online mapping. It is physically transported by the ground robot to each location of interest, where it autonomously takes off; an operator then guides it to map or observe the inaccessible regions. Upon completion of the mapping and observation phase, the aerial robot is remotely signaled to land autonomously and close the helipad. The quadrotor is then carried to the next location of interest by the teleoperated ground robot.

On site, we realized that in complex environments such as the earthquake-damaged buildings in Sendai, the appearance of the environment differs drastically from its original structure. Our earlier aerial navigation approach, which relied on assumptions specific to man-made indoor environments, would have failed there. We therefore designed a new quadrotor platform equipped with an IMU, laser scanner, stereo cameras, pressure altimeter, magnetometer, and GPS receiver (Fig. 2). The motivation was to utilize information from multiple sensors so that even if a subset of the sensors were to fail, the performance of the overall system would not be seriously compromised.

 
Fig. 2: Our 1.9 kg MAV platform equipped with an IMU, laser scanner, stereo cameras, pressure altimeter, magnetometer, and GPS receiver. 
All computation is performed onboard on an Intel NUC computer with a 3rd-generation Core i3 processor.

We proposed a novel, modular, and extensible approach to integrating noisy measurements from multiple heterogeneous sensors that yield either absolute or relative observations at different and varying rates, providing smooth and globally consistent position estimates in real time for autonomous flight. Through large-scale indoor and outdoor autonomous flight experiments, we demonstrated that fusing measurements from multiple sensors increases the robustness of the system.
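
To make the distinction between absolute and relative observations concrete, the toy sketch below fuses both kinds of measurements in a single linear Kalman filter, handling the relative (odometry-like) observation with a cloned "anchor" state. It is an illustrative simplification, not the filter used on the vehicle; the state layout, noise values, and names are hypothetical.

    # Toy sketch: one linear Kalman filter over [position, velocity, anchor],
    # fusing an absolute observation (e.g., GPS position) and a relative
    # observation (e.g., a laser/visual odometry displacement) via a cloned
    # "anchor" state. State layout, noise values, and names are hypothetical.
    import numpy as np

    class ToyFusionFilter:
        def __init__(self):
            self.x = np.zeros(3)   # [p, v, p_anchor]
            self.P = np.eye(3)

        def predict(self, accel, dt, q=0.1):
            F = np.array([[1.0, dt, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])        # the anchor stays fixed
            G = np.array([0.5 * dt**2, dt, 0.0])   # how acceleration enters
            self.x = F @ self.x + G * accel
            self.P = F @ self.P @ F.T + q * np.outer(G, G)

        def _update(self, z, H, r):
            S = H @ self.P @ H + r                 # innovation variance (scalar)
            K = self.P @ H / S                     # Kalman gain
            self.x = self.x + K * (z - H @ self.x)
            self.P = (np.eye(3) - np.outer(K, H)) @ self.P

        def update_absolute(self, z_pos, r=1.0):
            # Absolute observation: measures position directly.
            self._update(z_pos, np.array([1.0, 0.0, 0.0]), r)

        def update_relative(self, z_disp, r=0.01):
            # Relative observation: measures the displacement p - p_anchor
            # accumulated since the anchor was last set.
            self._update(z_disp, np.array([1.0, 0.0, -1.0]), r)
            self.x[2] = self.x[0]                  # re-anchor at current position
            self.P[2, :] = self.P[0, :]
            self.P[:, 2] = self.P[:, 0]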

Key Outcomes

In the collaborative multi-floor mapping experiment, we successfully deployed our ground and aerial robot platforms and generated maps from both the aerial and ground vehicles. Post-processing was performed to merge the multiple partial maps into a complete representation of the environment (Fig. 3).
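
As a rough illustration of this post-processing step, the sketch below merges partial maps, given rigid transforms that bring each into a common frame, by quantizing their points to a shared voxel grid and taking the union of occupied cells. The resolution, transforms, and map representation are assumptions for illustration; the actual merging pipeline is more involved.

    # Toy sketch of merging partial maps into one voxel-grid representation.
    # Inputs are Nx3 point clouds and 4x4 homogeneous transforms into a
    # common frame; the 0.1 m resolution is an arbitrary assumption.
    import numpy as np

    VOXEL = 0.1  # voxel edge length in meters

    def occupied_voxels(points, T):
        """Transform an Nx3 point cloud into the common frame and quantize."""
        pts = points @ T[:3, :3].T + T[:3, 3]
        return {tuple(k) for k in np.floor(pts / VOXEL).astype(int)}

    def merge_maps(clouds_and_transforms):
        """Union of occupied voxels over a list of (points, transform) pairs."""
        merged = set()
        for points, T in clouds_and_transforms:
            merged |= occupied_voxels(points, T)
        return merged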

 
Fig. 3: The 3D voxel grid maps generated during the experiment. 
The map resulting from the Kenaf sensor data is shown on the left; the merged map resulting from both the Kenaf and Pelican sensor data is shown on the right.
 
Fig. 4: Images from the onboard camera (Figs. 4(a)-4(d)) and an external camera (Figs. 4(e)-4(h)).
Note the wide variety of environments, including open spaces, trees, complex building structures, and indoor areas. The position of the MAV is highlighted with a red circle.

We conducted challenging tests of our new multi-sensor quadrotor platform in an industrial complex. The test site spans a variety of environments, including open outdoor spaces, dense trees, cluttered building areas, and indoor spaces (Fig. 4). The MAV is autonomously controlled using the onboard state estimates. The total flight time was approximately 8 minutes, and the vehicle traveled 445 meters with an average speed of 1.5 m/s. As shown in the map-aligned trajectory (Fig. 5), frequent sensor failures occurred during the experiment (Fig. 6), indicating the necessity of multi-sensor fusion. Fig. 7 shows the evolution of the covariance as the vehicle flies through an area with GPS shadowing. The global x, y, and yaw errors are bounded by the GPS measurements, without which they would grow unboundedly; this matches the observability analysis results. Note that the error on the body-frame velocity does not grow, regardless of the availability of GPS.
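
The observability behavior described above can be reproduced with a toy covariance simulation: with only a body-frame velocity measurement (as provided by laser or visual odometry) the velocity covariance stays bounded while the position covariance grows, and adding a GPS-like absolute position update bounds the position covariance as well. The model and noise values below are arbitrary and purely illustrative, not the onboard estimator.

    # Toy covariance propagation for a [position, velocity] state, comparing
    # velocity-only updates against velocity + absolute position updates.
    # All noise parameters are arbitrary assumptions.
    import numpy as np

    def final_covariance(use_gps, dt=0.1, steps=2000, q=0.1, r_vel=0.05, r_gps=2.0):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        P = np.eye(2)
        measurements = [(np.array([[0.0, 1.0]]), r_vel)]          # velocity (odometry)
        if use_gps:
            measurements.append((np.array([[1.0, 0.0]]), r_gps))  # absolute position
        for _ in range(steps):
            P = F @ P @ F.T + Q                                    # propagate
            for H, r in measurements:
                S = H @ P @ H.T + r
                K = P @ H.T @ np.linalg.inv(S)
                P = (np.eye(2) - K @ H) @ P                        # measurement update
        return P

    # Position variance grows without an absolute measurement; velocity stays bounded.
    print(np.diag(final_covariance(use_gps=False)))
    print(np.diag(final_covariance(use_gps=True)))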

 
Fig. 5: Vehicle trajectory aligned with satellite imagery. Different colors indicate different combinations of sensing modalities. G=GPS, V=Vision, and L=Laser.
 
Fig. 6: Sensor availability over time. Note that failures occurred for every sensor,
which shows that multi-sensor fusion is a must for this kind of indoor-outdoor mission.
 
Fig. 7: Covariance evolution as the vehicle flies through a dense building area (between 200 s and 300 s; top of Fig. 5, green line).
GPS reception comes and goes due to building shadowing.
The covariance of x, y, and yaw increases as GPS fails and decreases as GPS resumes.
Note that the body-frame velocity is observable regardless of GPS measurements, and thus its covariance remains small.
The spike in the velocity covariance is due to the vehicle directly facing the sun. 
The X-Y covariance is calculated from the Frobenius norm of the covariance submatrix.

Plans for next year

We plan to utilize our new multi-sensor quadrotor platform for robust navigation, mapping, and inspection in a variety of indoor and outdoor environments, including tunnel-like environments, densely cluttered industrial buildings, and other critical infrastructure. This opens up applications in a wider domain, including the inspection of both interior and exterior cracks in dams.

Project Personnel

PD / PI

  • Prof. Vijay Kumar – University of Pennsylvania, Philadelphia, USA
  • Prof. Nathan Michael (Co-PD/PI) – Carnegie Mellon University, Pittsburgh, USA

Graduate Students

  • Shaojie Shen
  • Yash Mulgaonkar
  • Kartik Mohta
  • Tolga Ozaslan

Undergraduate Students

  • Noah Frick

Collaborators

  • Prof. Satoshi Tadokoro – Tohoku University, Sendai, Japan

Publications

  1. S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, “Multi-Sensor Fusion for Robust Autonomous Flight in Indoor and Outdoor Environments with a Rotorcraft MAV,” in Proc. of the IEEE Intl. Conf. on Robot. and Autom., 2014. Submitted.
  2. N. Michael, S. Shen, K. Mohta, Y. Mulgaonkar, V. Kumar, K. Nagatani, Y. Okada, S. Kiribayashi, K. Otake, K. Yoshida, K. Ohno, E. Takeuchi, and S. Tadokoro, “Collaborative mapping of an earthquake-damaged building via ground and aerial robots,” J. Field Robotics, vol. 29, no. 5, pp. 832–841, 2012.
  3. N. Michael, S. Shen, K. Mohta, V. Kumar, K. Nagatani, Y. Okada, S. Kiribayashi, K. Otake, K. Yoshida, K. Ohno, E. Takeuchi, and S. Tadokoro, “Collaborative mapping of an earthquake-damaged building via ground and aerial robots,” in Proc. of the Intl. Conf. on Field and Service Robot., Miyagi, Japan, July 2012.