
Robots Chart Unknown Territory


Researchers at Carnegie Mellon University, USA, have optimized robots’ local and global mapping skills so that they can quickly and accurately build up a picture of tricky environments. [Image: J. Zhang]

Scientists in the United States have shown how robots can exploit data from lidar and other sensors to map out complex, unknown terrains without human help (Sci. Robot., doi: 10.1126/scirobotics.adf0970). The researchers say their new scheme, designed for robots operating individually or in groups, can generate such maps more efficiently and quickly than rival approaches such as those based on sampling.

Mapping unfamiliar territory

Robots are often better suited to navigating hostile environments than humans are. Whether conducting search and rescue at the sites of earthquakes, plane crashes and other disasters or exploring the surface of another planet, robots can provide valuable information about otherwise inaccessible or dangerous places.

In the latest work, Ji Zhang, Chao Cao and colleagues at Carnegie Mellon University investigate a new method for plotting the course of robots through an unknown environment to map it out quickly and accurately. As they point out, this process must be carried out on the fly. Rather than working out the shortest route from a preexisting set of data, the challenge involves responding iteratively to sensing data as they arrive—while accounting for sensor noise and variations in terrain.


The researchers’ solution involves two components. At the local level, a robot creates a high-resolution map of its immediate surroundings to capture as much detail as possible while safely navigating obstacles. On a larger scale, it produces a lower-resolution map that is adequate for overall orientation while reducing demand for onboard computation and communication with other robots. This dual approach involves thinning out each higher-resolution local dataset as the machine moves from one location to another.
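The thinning step can be pictured as voxel downsampling: a dense local point cloud is collapsed onto a coarse global grid, keeping one representative point per grid cell. The sketch below (in Python with NumPy) is purely illustrative, not the authors' implementation; the function name and the 0.5 m grid resolution are assumptions chosen for the example.

```python
import numpy as np

def thin_to_global_map(points, global_res=0.5):
    """Thin a high-resolution local point cloud into a coarser
    global representation by keeping one point per occupied voxel.

    points: (N, 3) array of x, y, z coordinates in meters.
    Returns the centers of the occupied coarse voxels.
    """
    # Assign each point to a coarse voxel by integer division.
    voxel_idx = np.floor(points / global_res).astype(int)
    # Collapse duplicates: one entry per occupied voxel.
    occupied = np.unique(voxel_idx, axis=0)
    # Represent each occupied voxel by its geometric center.
    return (occupied + 0.5) * global_res
```

Many points that fall inside the same coarse cell reduce to a single entry, which is what cuts the onboard computation and the bandwidth needed to share maps between robots.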

The researchers implemented their scheme using two types of robot—a ground vehicle resembling a wheelchair and an aerial vehicle (a drone). Both employed lidar, which creates detailed maps of the environment by bouncing pulses of laser light off nearby surfaces and recording the pulses’ return time to work out the distance to those surfaces. This technology was complemented by a camera and an inertial measurement unit based on microelectromechanical systems, which helped ensure that the robots could successfully navigate their surroundings while mapping them.
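The ranging behind lidar is simple time-of-flight arithmetic: a pulse travels to the surface and back at the speed of light, so the distance is half the round trip. A one-line sketch (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_time_s):
    """Distance to a surface from a lidar pulse's round-trip time.
    The pulse travels out and back, hence the division by two."""
    return C * round_trip_time_s / 2.0
```

A return after 200 nanoseconds, for instance, corresponds to a surface roughly 30 m away, which is why lidar timing must resolve fractions of a nanosecond to map at centimeter scale.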

Outperforming alternatives

Zhang and colleagues tested the robots in a number of complex environments. One of these was a four-story garage with a ramp connecting the floors, an outdoor patio on the top level and a long corridor at the ground level. The other settings included a large indoor space with numerous rooms connected by hallways, and a combination of indoor and outdoor spaces, the latter featuring changes in elevation as well as trees and bushes that can confuse a robot's sensors.

The researchers compared the performance of their own scheme with those of five other approaches—one based on topology, one on information gain and three based on sampling, which saves on processing power by scanning what are deemed to be representative portions of the surrounding area. To assess the relative performance of these rival techniques, the researchers used a pair of metrics: the average volume explored by a robot every second and the time it typically took the robot to update its course based on the latest sensory information.
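Both metrics are straightforward to compute from a run's log. The sketch below is a hedged illustration, assuming we record cumulative explored volume against elapsed time and the duration of each planning update; the function names and data layout are assumptions for the example, not taken from the paper.

```python
def exploration_rate(explored_volumes_m3, timestamps_s):
    """Average newly explored volume per second over a run.

    explored_volumes_m3: cumulative explored volume at each log entry.
    timestamps_s: matching elapsed times in seconds.
    """
    total_volume = explored_volumes_m3[-1] - explored_volumes_m3[0]
    elapsed = timestamps_s[-1] - timestamps_s[0]
    return total_volume / elapsed

def mean_replanning_time(update_durations_s):
    """Average time the planner takes to revise the robot's course
    after new sensor data arrive."""
    return sum(update_durations_s) / len(update_durations_s)
```

A scheme scores well when the first number is high (it uncovers new space quickly) and the second is low (it reacts to fresh sensor data without stalling).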


Zhang and colleagues established that, in both computer simulation and experiments, their scheme came out on top. It was typically able to fully map a given terrain, while the other methods often failed to complete the job or took longer to do so and sometimes missed harder-to-reach features.

Limitations and outlook

In a competition run by the US Defense Advanced Research Projects Agency in 2021, the new scheme enabled a group of three ground vehicles operated by researchers at Carnegie Mellon and Oregon State University, USA, to outperform other teams in searching for objects within a maze of underground tunnels—the vehicles mapping out more than 80% of the complex in half an hour (of an hour-long competition).

Despite these successes, however, Zhang and colleagues acknowledge that their system still lacks some capabilities that come naturally to humans. They explain that because the scheme relies purely on mapping a terrain’s geometry, the robots it guides will fail to recognize the significance of certain objects—that a doorway, for example, indicates the existence of additional space to be explored.


The researchers note that these limitations can be reduced by feeding the system relevant semantic information and improving spatial prediction, for which additional sensors, such as cameras with a 360° view, would be required. Beyond that, they are looking at enabling robots to target certain areas first—a useful capability when trying to locate survivors of a disaster, for example.

In addition, the researchers say they are investigating how to combine different types of autonomous vehicle—such as those with wheels, tracks, legs and wings—and are also working out how to better explore crowded areas and other fast-changing environments.

Publish Date: 26 July 2023

