
Why I Don’t Want Point Clouds on Our New Website

Author photo of a USS Growler sonar display taken at the Intrepid Sea, Air & Space Museum, NYC.

Imagine you are a technical person in the UK, just before the outbreak of WWII, about to demonstrate a new defensive weapon to some very important people – maybe the Prime Minister – for the first time.

Your audience assembles. You point to a blank monitor and ask your audience to watch.

Without any warning, a green blob appears at the edge of the monitor.

The blob disappears. It reappears. It disappears again. It reappears in a different place, then proceeds to move off the other edge of the monitor.

Your demonstration worked perfectly. You can barely contain your excitement. “Did you see that? Did you? That is how we are going to save our country from utter ruin from enemy aircraft!”

Is your audience impressed? If not, can you blame them? How is a green blob on a phosphor monitor going to accomplish anything, much less stop a real enemy with real planes and bombs?

Radar was arguably the most critical technical advantage for the Allies in WWII. Without radar, without the capability for a small, outmatched air force to efficiently thwart a larger and better-equipped one, the Royal Air Force would have succumbed to the Luftwaffe in the Battle of Britain, and Germany would have invaded the United Kingdom. Allied victories like D-Day and the breaking of the Enigma cipher with Turing’s bombes would have all been moot if the UK had been lost in 1941.

All that from what looks like a silly green blob on a screen.

I spent hours in an earlier career looking at screens filled with little blobs. My blobs were the digital output of LiDAR systems, not radar, but the colored-blob-filled screens looked similarly silly if you didn’t understand what you were seeing.

Cranking up the resolution on those old LiDAR systems could generate a rainbow-colored, grainy image that looked cool to the human eye but was nowhere near as clear as a photograph.

Personal photo of LiDAR testing for Helicopter Laser Radar program, circa 1993. Horizontal purple lines are power cables more than 600 meters away.

So what good were the colorful blobs? How do blobs on monitors change the world?

They don’t – but the information that the blobs represent can.

To assess the value of the blobs you have to understand the systems that are going to use the blobs.

Those systems do not perceive and interpret the blobs visually like you do when you look at a monitor. How crude they look to you is irrelevant to their value for automation.

The blob on the radar monitor in my WWII scenario showed the location and heading of an incoming bomber hundreds of miles away in a volume of tens of thousands of cubic miles. In the darkness of night, obscured by clouds.

Without radar, reliably finding that bomber would have been essentially impossible. You could fly hundreds of spotter planes or fighters all day and night, burning up limited aviation fuel, with a small chance of accomplishing anything useful. Without radar, that enemy bomber is reaching and destroying its target every time it takes off.

With radar, locating that bomber and coordinating a defense becomes a manageable logistics problem. Lives are saved.

While the value of radar might not have been obvious to everyone the first time a green blob appeared on a screen, radar was an invaluable tool for finding enemy planes, ships, and submarines in low-light conditions.

LiDAR is similar to radar in concept, except LiDAR uses lasers with much shorter wavelengths than radar’s radio waves, resulting in LiDAR “blobs” that are much smaller than radar “blobs”.

In other words, spatial resolution and range precision from LiDAR are typically better than from radar. LiDAR lets you position everyday objects in everyday scenes with centimeter-level accuracy.
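To make the wavelength argument concrete, here is a back-of-the-envelope sketch. The diffraction limit says a beam spreads at an angle of roughly wavelength divided by aperture, so at the same range a radio-wavelength radar “blob” and a near-infrared LiDAR “blob” differ by orders of magnitude in size. The wavelengths and aperture sizes below are illustrative assumptions, not the specs of any particular system.

```python
# Rough spot-size comparison for radar vs. LiDAR at the same range.
# theta ~ wavelength / aperture is the diffraction-limited divergence
# (ignoring constant factors like 1.22). All numbers are illustrative.

def spot_diameter_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    """Approximate beam spot diameter at a given range."""
    return (wavelength_m / aperture_m) * range_m

RANGE_M = 200.0  # look at the same object 200 m away

radar_spot = spot_diameter_m(0.03, 0.30, RANGE_M)      # ~3 cm X-band, 30 cm antenna
lidar_spot = spot_diameter_m(905e-9, 0.025, RANGE_M)   # 905 nm laser, 2.5 cm optic

print(f"radar blob at {RANGE_M:.0f} m: ~{radar_spot:.0f} m across")
print(f"LiDAR blob at {RANGE_M:.0f} m: ~{lidar_spot * 100:.1f} cm across")
```

Run it and the radar spot comes out tens of meters across while the LiDAR spot is under a centimeter, which is the whole “smaller blobs” argument in two lines of arithmetic.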

LiDAR output – a collection of tiny blobs, or points, in three-dimensional space – is called a “point cloud”. A typical point cloud from a commercial LiDAR consists of hundreds of thousands to a few million points collected each second. Unlike the monochromatic ’scope of WWII, which showed the presence or absence of something at a specific bearing and distance, a modern LiDAR point cloud looks a bit like a camera image, except that the colors represent something the LiDAR measures: distance or velocity, reflectivity (how “shiny” a surface is to the LiDAR’s laser illumination), or other measured qualities. A particular color in a point cloud represents a measured value at the precise location in space where that value was measured.
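To make “point cloud” a little more concrete, here is a minimal sketch of what one point might carry; the field names are illustrative, not any vendor’s actual format.

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float             # position in meters, in the sensor's frame
    y: float
    z: float
    reflectivity: float  # how "shiny" the surface is to the laser
    velocity: float      # radial velocity in m/s (if the LiDAR measures it)
    timestamp: float     # seconds since the start of the scan

# A point cloud is just a large collection of these, arriving at
# hundreds of thousands to millions of points per second.
cloud: list[LidarPoint] = [
    LidarPoint(12.4, -0.8, 1.3, reflectivity=0.72, velocity=0.0, timestamp=1e-6),
    LidarPoint(12.5, -0.7, 1.3, reflectivity=0.69, velocity=0.0, timestamp=2e-6),
]

# The "color" you see in a viewer is usually one of these fields
# mapped through a colormap: distance, velocity, reflectivity, etc.
```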

LiDAR point clouds look better than a single green blob on a ’scope, and they have some nifty display qualities not realizable in WWII without modern computers. Unlike a digital photo, which is a two-dimensional array of reflected colors, you can look at a LiDAR point cloud from any angle, from any position, or zoom in to any part of the image. Point clouds are 3D data, represented on a 2D display. You can rotate them, flip them, even fly through them looking around.
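That fly-through property falls straight out of the data being three-dimensional: changing your viewpoint is just a rigid transform applied to every point. A minimal sketch, with a toy two-point cloud:

```python
import numpy as np

# Rotate an entire point cloud 30 degrees about the vertical (z) axis.
# Same scene, new viewing angle -- this is all a point cloud viewer is
# doing when you drag the view around.

points = np.array([      # N x 3 array of (x, y, z); values are illustrative
    [12.4, -0.8, 1.3],
    [12.5, -0.7, 1.3],
])

theta = np.radians(30.0)
rot_z = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

rotated = points @ rot_z.T
print(rotated)
```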

Still, point clouds do not look as good as what we see with our eyes. Even high-resolution LiDARs offer a small fraction of the resolution of your smartphone camera.

So, what is the point of a point cloud?

As with the WWII radar ‘scope, there is a technical narrative required to understand the value of a LiDAR point cloud. You are not the audience for a LiDAR point cloud. Don’t take it personally. Point clouds are not meant for human consumption.

The data we see in a point cloud is meant for machine perception software. LiDAR data translates complex environments into simple geometry problems. Software consumes and interprets point clouds in ways that your eyes and brain cannot easily replicate.
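To give a flavor of what “simple geometry problems” means, here is a toy example of my own (not any production pipeline): deciding whether anything sits inside a safety bubble around the sensor reduces to computing distances, no image understanding required.

```python
import numpy as np

def nearest_return_m(points_xyz: np.ndarray) -> float:
    """Distance to the closest point in an N x 3 cloud (sensor at the origin)."""
    return float(np.linalg.norm(points_xyz, axis=1).min())

cloud = np.array([       # illustrative returns, in meters
    [12.4, -0.8, 1.3],
    [ 1.6,  0.9, 0.2],
    [30.1,  5.5, 2.8],
])

if nearest_return_m(cloud) < 2.0:
    print("obstacle inside the 2 m safety bubble -- slow down")
```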

If you want to coordinate machines on a crowded street where you can’t control the lighting or the cars, LiDAR is better than radar. Anywhere an application needs to monitor or deconstruct a complex environment, in complex or uncontrolled lighting, LiDAR is the best tool we have. Whenever you need to measure the exact position or speed of an object, whether it is a person, a car, or a bridge, LiDAR is your modality of choice. Better than visible cameras. Better than radar.

Despite the relative crudeness of a point cloud, LiDAR will be the enabling technology for machine perception for the next decade.

In the early ’90s I helped to build a LiDAR system to detect hazards to low-flying helicopters, hazards that were difficult to spot during the day and completely invisible at night, no matter how you cranked up the gain on your night vision goggles.

Power lines are one example. Power lines are too thin to reflect much ambient light and they don’t hold enough heat to be seen with infrared. You really don’t want to hit a power line if you are in a helicopter. This is why you see giant orange or white spheres mounted on power lines wherever helicopters might be traveling nearby.

If I showed you the point clouds I was working with back then, you would have a hard time spotting the power lines above a tree canopy. To reliably find power lines in complex scenes from low-res LiDAR scans I did what every engineer does when pressed: I cheated.

Tree canopies are gorgeous, even in a low-res LiDAR scan, but they don’t have a lot of straight lines in them. Any straight lines you see in a tree canopy are, guess what? …likely to be some kind of man-made object! Wires might be very hard to see in a point cloud with your eye, but very easy when you use a computer to find points in a line parallel to the ground. That just takes a little math. If you scan slowly from a fast-moving platform, it takes a lot of math in a short amount of time! (Thank goodness for the good old i860 chip…)
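For the curious, here is one way to code up that idea. This is a generic RANSAC-style line finder of my own sketching, not the actual algorithm we ran back then: sample pairs of points, keep candidate lines that run roughly level, and count how many points fall close to each line.

```python
import numpy as np

def find_level_line(points, trials=500, tol=0.15, max_slope=0.05, seed=0):
    """Return indices of the largest set of near-collinear, near-level points.

    points: N x 3 array of (x, y, z), z up.
    tol: max distance in meters from the candidate line to count as an inlier.
    max_slope: max |dz| per unit of horizontal run, keeping the line
    roughly parallel to the ground (wires sag, so this is approximate).
    """
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(trials):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        length = np.linalg.norm(d)
        if length < 1e-6:
            continue
        d = d / length
        horiz = np.linalg.norm(d[:2])
        if horiz < 1e-6 or abs(d[2]) / horiz > max_slope:
            continue  # candidate line is not level enough to be a wire
        # Perpendicular distance from every point to the candidate line.
        rel = points - points[i]
        dist = np.linalg.norm(np.cross(rel, d), axis=1)
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) > len(best):
            best = inliers
    return best

# Toy scene: a random "canopy" with a level wire threaded through it.
rng = np.random.default_rng(1)
canopy = rng.uniform([-10, -10, 5], [10, 10, 12], size=(300, 3))
wire = np.column_stack([np.linspace(-10, 10, 40), np.zeros(40), np.full(40, 8.0)])
scene = np.vstack([canopy, wire])
print(f"largest level line: {len(find_level_line(scene))} points")  # ~40, the wire
```

On a real scan you would first strip ground returns and treat the sag of a real wire as a chain of short, nearly level segments, but the geometric core is about this simple.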

If you get everything right, a bunch of what were originally random dots will clearly outline an invisible flight hazard. The idea is easy to explain, but I am glossing over many of the exquisite problems that my team and I solved back in the day… Solving these problems was very important to us. QA takes on a whole new meaning when you are testing the system in the actual helicopter!

Machine perception systems use relatively fast, low-res data sets. They might look meaningless to us, but to the perception software that will someday drive your car, enable robots and people to work safely together in factories, or monitor bridges and construction sites, a small cluster of a few multidimensional points – just a jumble of seemingly randomly colored pixels – clearly identifies people, cars, buildings, and more.

At the end of the day, precision, reliability, range, and cost are much more useful parameters than high-resolution output from cameras that machine perception cannot use. How silly the blobs look to us is irrelevant. Don’t forget: you are not the end-user of a point cloud. Your computer is.

Author and potential investor during a recent demonstration

Peter Stern

AUTHOR


Machine perception is within reach

Our APIs get you the most from our sensors without getting stuck in the photonic weeds. Leave the details of LiDAR to us, so you can focus on your application.
