UC Davis has an effort to better visualize and analyze LiDAR point clouds that uses some rather unorthodox approaches.

The challenge in visualizing and analyzing tripod LiDAR data is that data sets can contain hundreds of millions to billions of unstructured (scattered) 3D points, each with its (x, y, z) position and an associated intensity or color value. Although the points sample surfaces in the surveyed area, the data does not contain any relationships between points; the underlying surfaces have to be reconstructed from the point data alone. Our work focuses on developing software to visualize the “raw” LiDAR data as a cloud of 3D points with intensity or color values. We use an out-of-core multiresolution approach to visualize LiDAR data that is too big to fit into the computer’s main memory, at interactive frame rates of around 60 frames per second. Our software also contains tools to analyze LiDAR data, for example an interactive selection tool to mark subsets of points defining a single feature, and algorithms to derive equations describing the shape of such features.
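The feature-fitting idea mentioned above (deriving an equation from a selected subset of points) can be sketched with a least-squares plane fit. This is a minimal illustration of the kind of analysis described, not the software's actual algorithm; the function name and SVD approach are my assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of selected points.

    Returns (centroid, unit_normal). The plane passes through the
    centroid; the normal is the direction of least variance, i.e. the
    right singular vector for the smallest singular value.
    NOTE: illustrative sketch, not the actual tool's implementation.
    """
    centroid = points.mean(axis=0)
    # SVD of the centered points; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # least-variance direction = plane normal
    return centroid, normal
```

Given a user-selected patch of points lying on, say, a wall or floor, the returned centroid and normal define the plane equation `normal · (x - centroid) = 0`.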

Previously, lots of research and computing power went into attempting to reconstruct 3D models from these point clouds.  It’s interesting to see a complete about-face in the industry: now they simply visualize the raw point clouds, which has several advantages:

  • Level of detail is a breeze: simply bin the points.
  • Variously sized points can easily be generated via pixel/vertex shaders.
  • It’s a lot easier to add interpolated points if you zoom in too close.
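The first bullet can be sketched in a few lines: bin the cloud into progressively coarser grids, keeping one representative point per occupied cell, then pick a level based on viewing distance. The function names, power-of-two cell sizes, and distance heuristic are all illustrative assumptions, not the actual software's scheme.

```python
import math

import numpy as np

def build_lod_levels(points, num_levels=4, cell0=1.0):
    """Bin an (N, 3) point cloud into progressively coarser uniform
    grids. Level 0 uses cells of size cell0; each level doubles the
    cell size and keeps one representative point per occupied cell.
    NOTE: illustrative sketch of grid-based LOD, not the real tool.
    """
    levels = []
    for lvl in range(num_levels):
        cell = cell0 * (2.0 ** lvl)
        keys = np.floor(points / cell).astype(np.int64)
        # Keep the first point falling in each occupied cell.
        _, idx = np.unique(keys, axis=0, return_index=True)
        levels.append(points[np.sort(idx)])
    return levels

def pick_lod_level(distance, base_distance=10.0, num_levels=4):
    """Drop one level of detail per doubling of viewing distance
    (a common heuristic; assumed here, not the software's policy)."""
    if distance <= base_distance:
        return 0
    return min(num_levels - 1, int(math.log2(distance / base_distance)) + 1)
```

Because coarser cells merge finer ones, each successive level has at most as many points as the one before it, which is exactly what a distance-based renderer wants to draw.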

Add in some interesting fake-lighting effects and, at a far enough distance (like the image above), you can’t even tell it’s nothing but points.  I’ve seen this approach slowly gaining ground over the last year or two, and at SuperComputing10 this year I even took part in a demonstration at the Idaho National Labs booth of not just LiDAR data but also incredibly high-resolution CT and MRI data.

Oliver Kreylos’ Research and Development Homepage – LiDAR Visualization.