Trends in Robotics
As mobile robots become more integrated into industries such as manufacturing, logistics, energy, and public safety, their ability to perceive the world around them is critical. A robot vision system uses advanced sensors to help robots safely and effectively navigate dynamic environments—to understand their surroundings, detect obstacles, and interact with both people and objects in meaningful ways.
But how do robots see? Unlike people, robots rely on a mix of robot vision technologies, including lidar, visual cameras, and other sensors, that feed data into software systems to process, analyze, and react. These perception capabilities are foundational to building robots that are agile and adaptable enough for real-world work.
Let’s explore the core sensor technologies that make robotic perception possible.
Much like people, robots frequently rely on visual data to explore their surroundings. Cameras let robots see the world around them, and let operators easily interpret what the robots are seeing. Visual cameras also come in many form factors, each enabling a robot to perceive its environment in a different way. Beyond standard visual cameras, robots can use specialized cameras such as PTZ (pan-tilt-zoom) cameras and stereo cameras, which recover depth from the offset between two viewpoints.
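To make the stereo idea concrete, here is a minimal sketch of how a disparity map from a stereo pair converts to metric depth. The focal length and baseline below are hypothetical stand-ins, not the calibration of any particular robot camera.

```python
import numpy as np

# Hypothetical camera parameters, not taken from any real robot.
FOCAL_LENGTH_PX = 700.0  # focal length, in pixels
BASELINE_M = 0.12        # distance between the two camera centers, in meters

def disparity_to_depth(disparity_px):
    """Convert stereo disparity to per-pixel depth via similar triangles: z = f * B / d."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)  # no match -> unknown depth
    valid = disparity > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
    return depth

# A feature shifted 35 px between the two views is about 2.4 m away.
print(disparity_to_depth([35.0]))  # -> [2.4]
```

Because depth varies inversely with disparity, stereo depth is most precise at close range, which is one reason platforms pair stereo cameras with longer-range sensors like lidar.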
Lidar (light detection and ranging) is a form of sensing that uses lasers to detect features of the environment, emitting pulses of near-infrared light and measuring the returning pulse. This time-of-flight data is used to generate 3D maps, providing an in-depth awareness of the robot's surroundings.
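As a rough illustration of the time-of-flight principle, the sketch below turns a single pulse's round-trip time and beam direction into a 3D point in the sensor frame. The function and numbers are illustrative, not drawn from any specific lidar's API.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_to_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Turn one lidar return into a 3D point in the sensor frame.

    The pulse travels out and back, so range is half the round-trip
    distance; azimuth and elevation give the beam's direction.
    """
    r = 0.5 * SPEED_OF_LIGHT * time_of_flight_s
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.array([x, y, z])

# A return ~66.7 ns after the pulse, straight ahead, is a surface ~10 m out.
print(pulse_to_point(66.7e-9, 0.0, 0.0))  # -> [~10.0, 0.0, 0.0]
```

Sweeping the beam across many azimuth and elevation angles and accumulating these points is what produces the 3D map.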
Lidar is commonly used for autonomous navigation, expanding robots' perception beyond what can be easily detected with visual cameras. This can be particularly useful in feature-sparse or low-light conditions, for example enabling Stretch® to recognize the walls of a dark shipping container and avoid collisions. But lidar data can also be used to help robots interact with the objects in their environment more reliably and safely.
Laser scanning is also a powerful reality capture tool. Specialized laser scanning payloads use lidar to create detailed point clouds, processing the data into complete digital twins. This 3D data supports factory design, equipment installation, and change management, bringing together the virtual and physical worlds.
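A common first step when processing raw scan data into a usable point cloud is voxel-grid downsampling, which thins millions of returns while preserving the shape of the scene. Below is a minimal numpy sketch of the idea; production reality-capture pipelines use dedicated point cloud libraries, and the function name here is just illustrative.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel of a 3D grid.

    Scanners produce millions of points; binning them into coarse voxels
    preserves the scene's shape while drastically shrinking the data.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 5.0, size=(100_000, 3))  # simulated raw scan
print(voxel_downsample(cloud, voxel_size=0.25).shape)  # far fewer points
```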
While visual cameras capture what’s visible to the human eye, other sensors allow robots to visualize otherwise invisible information.
Thermal cameras, also called IR cameras, detect infrared radiation and convert it into thermal images, visualizing temperature differences across surfaces. Thermal cameras are an increasingly important tool in advanced robot vision systems, especially in industrial inspections, maintenance, and search and rescue operations.
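As a simple illustration, a radiometric thermal frame can be treated as a 2D array of per-pixel temperatures and scanned for readings above a limit. This is a minimal sketch with made-up values, not the processing any particular thermal payload performs.

```python
import numpy as np

def find_hot_spots(frame_c, threshold_c=80.0):
    """Return (row, col) pixel coordinates hotter than the threshold.

    `frame_c` stands in for a radiometric thermal frame: a 2D array of
    per-pixel surface temperatures in degrees Celsius.
    """
    return np.argwhere(frame_c > threshold_c)

frame = np.full((4, 4), 30.0)  # simulated frame of a surface at ~30 C
frame[1, 2] = 95.0             # one overheating region
print(find_hot_spots(frame))   # -> [[1 2]]
```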
Similarly, acoustic imagers detect ultrasonic noise using a combination of microphones and cameras. These sensor arrays precisely pinpoint the source of a particular sound and overlay an image with the pressure, frequency, and location of the anomalous sound.
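One classic way such arrays localize sound is by comparing arrival times across microphones. The sketch below estimates a bearing from the time difference of arrival (TDOA) between just two microphones via cross-correlation; commercial acoustic imagers use many more channels, and all names and numbers here are illustrative.

```python
import numpy as np

SOUND_SPEED = 343.0  # m/s in air at roughly 20 C

def estimate_bearing(sig_a, sig_b, mic_spacing_m, sample_rate_hz):
    """Estimate a sound source's bearing from two microphone channels.

    Cross-correlating the channels finds the time difference of arrival
    (TDOA); simple geometry then turns that into an angle off broadside.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # peak offset, in samples
    tdoa = -lag / sample_rate_hz              # positive: mic A heard it first
    sin_theta = np.clip(tdoa * SOUND_SPEED / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulate a noise burst reaching mic B 7 samples after mic A.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delayed = np.concatenate([np.zeros(7), s[:-7]])
print(estimate_bearing(s, delayed, mic_spacing_m=0.1, sample_rate_hz=48_000))
# -> ~30 degrees
```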
Robots equipped with thermal and acoustic vision systems can identify overheating equipment, air leaks, faulty bearings, and other thermal and acoustic anomalies. The Spot CAM+IR payload exemplifies the practical application of this technology: it combines a PTZ camera, a 360° camera, and a thermal camera in a single package, giving operators comprehensive data capture and situational awareness in real time.
Today's robot vision systems aren't limited to a single type of sensor. Instead, platforms like Spot combine lidar, stereo and PTZ cameras, thermal imaging, and other sensors into a dynamic sensing solution. The captured data is processed with computer vision systems and AI models to plan robot actions and respond intelligently to the environment. This hybrid approach enhances safety, efficiency, and autonomy across industries, and empowers robots to complete useful work in complex and unpredictable environments.
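A small example of what fusing two of these sensors can look like: projecting lidar points into a camera image so that image detections gain metric depth. The intrinsic matrix below is a hypothetical pinhole calibration for illustration, not Spot's actual calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length 700 px, principal point
# at (320, 240) for a 640x480 image. A stand-in, not a real calibration.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_cam):
    """Project lidar points (N x 3, already in the camera frame) to pixels.

    Once lidar returns land on image coordinates, objects detected in the
    image can be tagged with real metric depth from the matching returns.
    """
    pts = points_cam[points_cam[:, 2] > 0]  # keep points in front of the lens
    uvw = (K @ pts.T).T                     # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]         # divide by depth to get pixels

# A point 10 m ahead and 1 m to the right lands right of image center.
print(project_lidar_to_image(np.array([[1.0, 0.0, 10.0]])))  # -> [[390. 240.]]
```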
As robotic vision technology evolves, robots will gain even deeper environmental awareness, enabling more intelligent and flexible behavior in the real world. Whether mapping an underground tunnel, inspecting a pipeline, patrolling for safety hazards, or detecting objects to manipulate, these advanced sensing systems are transforming how robots interact with the world around them.