Spatial Calibration of Diffuse LiDARs

2026-03-06

Computer Vision and Pattern Recognition, Robotics
AI summary

The authors address a problem with a type of LiDAR sensor that measures distance by collecting many light returns from a wide area, which makes its output hard to match up with regular camera images. They present a straightforward way to determine which part of a scene each LiDAR pixel actually measures and how sensitive it is at different spots within that region. By scanning a small retroreflective patch and subtracting the background signal, they map each LiDAR pixel to the region of a camera image it covers, which helps combine LiDAR and camera data more accurately. They test the method on a real sensor, the ams OSRAM TMF8828.

Keywords

Diffuse direct time-of-flight LiDAR, Depth histogram, Single-ray assumption, LiDAR-RGB calibration, Spatial calibration, Footprint, Spatial sensitivity, Retroreflective patch, Cross-modal alignment, ams OSRAM TMF8828
Authors
Nikhil Behari, Ramesh Raskar
Abstract
Diffuse direct time-of-flight LiDARs report per-pixel depth histograms formed by aggregating photon returns over a wide instantaneous field of view, violating the single-ray assumption behind standard LiDAR-RGB calibration. We present a simple spatial calibration procedure that estimates, for each diffuse LiDAR pixel, its footprint (effective support region) and relative spatial sensitivity in a co-located RGB image plane. Using a scanned retroreflective patch with background subtraction, we recover per-pixel response maps that provide an explicit LiDAR-to-RGB correspondence for cross-modal alignment and fusion. We demonstrate the method on the ams OSRAM TMF8828.
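The sketch below illustrates one plausible reading of the calibration loop described in the abstract: for each scanned patch position (expressed in RGB image coordinates), subtract a background measurement from the aggregated per-pixel LiDAR signal, scatter the result into the RGB image plane, and normalize to obtain a relative sensitivity map and a thresholded footprint per LiDAR pixel. The array names, the choice of aggregated photon counts as the signal, the nearest-pixel scatter, and the footprint threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def estimate_response_maps(patch_signals, background, positions, image_shape, thresh=0.05):
    """Sketch of per-pixel spatial calibration for a diffuse dToF LiDAR.

    patch_signals : (n_positions, n_lidar_pixels) aggregated photon counts,
        one row per scanned retroreflective-patch position.
    background    : (n_lidar_pixels,) counts recorded with no patch present.
    positions     : (n_positions, 2) patch centers in RGB image coords (row, col).
    image_shape   : (H, W) of the co-located RGB image plane.
    Returns relative sensitivity maps and binary footprints per LiDAR pixel.
    """
    n_pos, n_pix = patch_signals.shape
    H, W = image_shape

    # Background subtraction isolates the patch's contribution to each LiDAR pixel.
    response = np.clip(patch_signals - background[None, :], 0.0, None)

    sensitivity = np.zeros((n_pix, H, W))
    for k in range(n_pos):
        # Scatter each scan position's response onto the nearest RGB pixel (assumption).
        r, c = np.round(positions[k]).astype(int)
        sensitivity[:, r, c] = response[k]

    # Normalize each LiDAR pixel's map to a relative sensitivity in [0, 1].
    peak = sensitivity.reshape(n_pix, -1).max(axis=1, keepdims=True)
    peak = np.where(peak > 0, peak, 1.0)
    sensitivity /= peak.reshape(n_pix, 1, 1)

    # Threshold the sensitivity to get the footprint (effective support region).
    footprint = sensitivity > thresh
    return sensitivity, footprint
```

In practice the scanned positions would be interpolated onto a dense RGB grid rather than scattered to single pixels, and the threshold would depend on the sensor's noise floor; this sketch only shows the overall structure of the procedure.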