GEOG 481
Topographic Mapping with Lidar

History of Lidar Development

Modern laser-based remote sensing began in the 1970s with NASA efforts to fly airborne prototypes in preparation for eventual spaceborne sensor deployment. These efforts were aimed largely at measuring properties of the atmosphere, ocean water, forest canopy, and ice sheets, not at topographic mapping. These scientific uses of lidar continue to evolve, but they will not be discussed in this course. Scientific investigations at Stuttgart University demonstrated the high geometric accuracy of a laser profiler system, but at that time (the mid-1980s) the lack of a reliable commercial GPS/IMU solution for sensor positioning presented a significant roadblock to further development.

The demand for GPS/IMU systems for use in aerial photogrammetry spurred rapid development of these direct georeferencing technologies. Companies that provided ground GPS survey equipment and services developed new airborne kinematic GPS solutions. The GPS satellite constellation reached its full configuration, providing the coverage needed for widespread operations. High-accuracy inertial measurement units became available as certain military missile guidance systems were declassified. By the mid-1990s, laser scanner manufacturers were delivering lidar sensors capable of 2,000 to 25,000 pulses per second to commercial customers intending to use them exclusively for topographic mapping applications. Although primitive by today’s standards, these late-1990s instruments were robust enough to validate the growing belief in lidar technology as the "way of the future." Lidar systems of that time were already delivering incredibly dense data sets, the likes of which could never be achieved by ground survey or photogrammetry. The geospatial user community became extremely interested in lidar data, not only for its ability to map the bare-earth surface, but also for its promise for feature extraction (buildings and roads) and forest canopy characterization.

When lidar data was introduced to the mapping community, high-resolution terrain and feature data were generally produced using photogrammetry, while lower-resolution products were produced with radar or spaceborne stereo imagery. Photogrammetry is an inferential technology; that is, features must be “seen” to be mapped. Radar, although very efficient for large areas and unique in its ability to penetrate cloud cover, is expensive to mobilize and requires very specialized expertise for data processing and interpretation. Radar also has certain limitations for measuring ground elevations beneath forest canopy and exhibits peculiar artifacts in very steep terrain and dense urban areas. The ground coverage of an airborne lidar sensor is very similar to that of a traditional aerial camera, so photogrammetric methods of flight planning could be applied directly to lidar. Lidar is also capable of "seeing" between trees in forested areas, where photogrammetric technicians have difficulty interpreting the elevation of the ground, and the development of end products from the dense lidar mass points closely resembles photogrammetric data processing. Lidar presented a fast, accurate, and direct (not inferential) method of generating three-dimensional data, so, as the cost of instruments and services stabilized, it quickly became a very attractive mapping solution.

The demand for the characteristically dense lidar point data sets accelerated rapidly; however, CAD and GIS software in the early 2000s was not capable of efficiently processing such volumes of data. The early 2000s were marked by rapid improvements in data processing systems and supporting IT architecture required to handle the terabytes of data being produced by state-of-the-art scanners. This period can be characterized by:

  • robust, stable scanners capable of collecting more than 50,000 pulses per second;
  • reliable airborne GPS and inertial measurement systems for consistently accurate georeferencing;
  • capture and processing of lidar intensity (reflectance) measurements which could be used to generate raster images;
  • improved GIS and CAD handling of lidar point data;
  • lidar-specific software for raw data processing, derivative product generation, and quality control/quality assurance;
  • new companies that specialized in lidar collection and processing in niche market areas, such as high accuracy power line mapping;
  • robust IT infrastructure, including faster processors, distributed processing, and massive, yet affordable, storage solutions.

As the demand for lidar data grew, so did the need for guidelines, technical specifications, and accuracy standards. US government entities, specifically FEMA, the US Army Corps of Engineers (USACE), USGS, and the Federal Geographic Data Committee (FGDC), developed standards for quality assurance and accuracy reporting. Professional associations, such as the International and American Societies for Photogrammetry and Remote Sensing (ISPRS and ASPRS), provided venues for the swift exchange of science- and applications-based research on the use of lidar data in many application domains. Although there are no public standards yet available for lidar data or derived products, the ASPRS has developed the LAS format for the binary exchange of lidar data, which has been widely accepted by sensor manufacturers, software developers, and the end-user community.
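
To make the role of the LAS format more concrete, the short sketch below reads a lidar point file and prints a few of the per-point attributes (coordinates, intensity, return number, classification) that the format standardizes. This is only an illustrative sketch: it assumes the open-source laspy Python library (version 2.x API) and a hypothetical file named lidar_tile.las.

    # Illustrative sketch only: assumes the laspy library (pip install laspy)
    # and a hypothetical LAS file named "lidar_tile.las".
    import laspy

    las = laspy.read("lidar_tile.las")

    # The header carries dataset-level metadata defined by the LAS specification.
    print("LAS version:", las.header.version)
    print("Point count:", las.header.point_count)

    # Each point record stores georeferenced coordinates plus attributes such as
    # intensity (reflectance), return number, and classification code.
    print("First point:", las.x[0], las.y[0], las.z[0])
    print("Intensity:", las.intensity[0])
    print("Return:", int(las.return_number[0]), "of", int(las.number_of_returns[0]))
    print("Classification code:", int(las.classification[0]))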

Currently, there are more than 200 lidar systems operating worldwide. State-of-the-art systems are capable of 250,000 pulses per second, managing multiple pulses in the air at any given moment, capturing multiple returns from individual pulses, or even digitizing the entire return waveform. Data collection can be customized to meet specific application requirements, and end-users are supported by robust quality assurance methods and large data storage capacity. Trends in future lidar system development will be discussed later in this lesson.
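
The "multiple pulses in the air" point follows directly from the numbers: at 250,000 pulses per second, a new pulse leaves the sensor every 4 microseconds, while the round trip to the ground and back from a typical flying height takes roughly three times that long. The short calculation below is a rough sketch using an assumed flying height of 2,000 m (an illustrative value, not a figure from this lesson) to show why several pulses are in flight at once.

    # Back-of-the-envelope check of "multiple pulses in the air."
    # The 2,000 m flying height is an assumed, illustrative value.
    C = 299_792_458.0            # speed of light, m/s
    pulse_rate_hz = 250_000      # state-of-the-art pulse rate cited above
    flying_height_m = 2_000      # assumed flying height above ground

    pulse_interval_s = 1.0 / pulse_rate_hz      # time between emitted pulses
    round_trip_s = 2.0 * flying_height_m / C    # pulse travel to the ground and back

    print(f"Pulse interval:  {pulse_interval_s * 1e6:.1f} microseconds")   # ~4.0
    print(f"Round-trip time: {round_trip_s * 1e6:.1f} microseconds")       # ~13.3
    print(f"Pulses simultaneously in flight: ~{round_trip_s / pulse_interval_s:.1f}")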

Read More

You can explore these additional articles for more information about the history of lidar sensor development: