LiDAR & Camera: a strong marriage


Light Detection and Ranging, better known as LiDAR, is an instrument that measures the distance to an object by emitting laser light (often at a wavelength of 905 or 1550 nm) and measuring the reflected light with a sensor. The time the laser pulse takes to travel to the object and back is used to calculate distances and build a digital 3-D representation of the object. Such a representation is called a (3D) point cloud.
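The time-of-flight principle described above can be sketched in a few lines. The function name and the example round-trip time are illustrative, not part of any real LiDAR API:

```python
# Minimal sketch of the LiDAR time-of-flight principle: distance is
# derived from how long a laser pulse takes to travel to the object
# and back at the speed of light.

SPEED_OF_LIGHT = 299_792_458  # m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object, given the round-trip time."""
    # The pulse covers the distance twice (out and back), hence the / 2.
    return SPEED_OF_LIGHT * t_seconds / 2

# A pulse returning after ~1.33 microseconds corresponds to roughly
# 200 m, the detection range mentioned later in this article.
print(f"{distance_from_round_trip(1.334e-6):.1f} m")
```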


A camera is an image sensor that detects incoming light through a lens using electronic equipment (usually one of two types: a charge-coupled device, CCD, or a complementary metal-oxide-semiconductor, CMOS, image sensor). It converts the variable attenuation of light waves into electrical signals that convey the visual information of an object.

Complementing mutual strengths

Each sensor that captures reality has its advantages and disadvantages. LiDAR is an accurate sensor for measuring distance. It can detect and position objects in a large field of view, from close by up to 200 meters, with a high level of distance accuracy. Its operation and accuracy are hardly affected by adverse weather conditions such as fog or heavy rainfall. However, recognizing what exactly an object is, is harder to do from a point cloud: the level of visual detail is too low, and colour information is absent.

The camera, on the other hand, captures visual reality and clearly shows details as well as colours. These characteristics make object recognition very precise. However, even with a stereo camera, the positional details - such as distance and depth - are not comparable to the accuracy of LiDAR. In addition, a visual sensor is adversely affected by strongly shaded areas or a low sun angle, which deteriorate its detection rates.

Comparison of point cloud and image

Comparison of point cloud and image

The images above show the complementary nature of LiDAR and camera working together. The LiDAR detects objects (along with their exact distance) from the car and does not depend on the light exposure of the objects, on which the camera is very dependent. As figure [1] clearly shows, the LiDAR easily identifies the pedestrian on the right, who is hard to identify in the camera image due to the strong shade, while the camera's precision in detail captures the traffic light that the LiDAR cannot effectively identify. Likewise, in figure [2], the LiDAR identifies the objects that obstruct the way, but without the camera it is almost impossible to differentiate between pedestrians and plants.

The Fast and Accurate

For the purposes of driver assistance and autonomous driving in the automotive world, the highest detail and accuracy are required, and this needs to happen in real time to enable safe operation. In terms of knowing its surroundings, it is essential that the car knows what objects it encounters and exactly where those objects are - incredibly accurately and fast. For this purpose, most engineers have chosen to use the joint forces of LiDAR and camera: the LiDAR to precisely locate objects close by and far away, the camera to identify what those objects are, e.g. a car, a pedestrian, road signs, or traffic lights. In digital terms, the higher the level of detail one wants to acquire, the larger the sensor resolution needs to be. High sensor resolution means a lot of raw data. LiDAR and camera produce rich data as point clouds and image streams, respectively. These datasets are massive, and usually too large to be handled by the embedded systems present in the car. Because the data is generated continuously while driving, processing and storing it becomes too energy-consuming and too heavy.
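A back-of-the-envelope estimate makes the data-volume problem concrete. The sensor specifications below (resolution, frame rate, point rate, bytes per point) are illustrative assumptions, not figures from this article:

```python
# Rough estimate of raw sensor data rates, to illustrate why
# unprocessed streams overwhelm in-car embedded systems.

def camera_rate_mb_s(width, height, bytes_per_pixel, fps):
    """Raw camera stream rate in MB/s (no compression)."""
    return width * height * bytes_per_pixel * fps / 1e6

def lidar_rate_mb_s(points_per_second, bytes_per_point):
    """Raw LiDAR point-cloud rate in MB/s."""
    return points_per_second * bytes_per_point / 1e6

# An assumed 1080p RGB camera at 30 fps:
cam = camera_rate_mb_s(1920, 1080, 3, 30)
# An assumed LiDAR producing 1.2 M points/s, 16 bytes per point
# (x, y, z, intensity as 4-byte floats):
lidar = lidar_rate_mb_s(1_200_000, 16)
print(f"camera: {cam:.1f} MB/s, lidar: {lidar:.1f} MB/s")
```

Even these modest assumed sensors produce well over 100 MB/s of raw data combined - hours of driving quickly add up to terabytes.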

Relay Race

To truly benefit from this technology, Teraki has devised ways to combine the inputs of LiDAR and camera, synchronizing them in real time and applying the LiDAR data as a first input for the camera to work with. This is called "sensor fusion". Effective yet accurate compression of relevant data is done in the point cloud to detect objects. Each detected object is then 'given' to the camera, where Region of Interest selection is done to keep the highest resolution on the objects that matter. The highest resolution leads to higher detection scores, while the Region of Interest technology also ensures that processing can be done at low latency and on low-powered hardware. The initial object segmentation makes it quicker and more effective to extract the most relevant content recorded, and the compressed video data is easy to transfer and store in the cloud. It's like passing the baton in a relay race: from one sensor to another.
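The hand-over step can be sketched as projecting a LiDAR-detected object into the camera image and deriving a Region of Interest there. This is a simplified pinhole-camera sketch with invented intrinsics and object dimensions - it is not Teraki's actual pipeline or API:

```python
# Sketch of the "relay race": a LiDAR-detected object's 3D position
# (in the camera frame, z pointing forward) is projected into the
# image, and the resulting Region of Interest is kept at full
# resolution while the rest of the frame can be compressed harder.

def project_to_pixel(x, y, z, fx, fy, cx, cy):
    """Pinhole projection of a 3D point onto the image plane."""
    return (fx * x / z + cx, fy * y / z + cy)

def roi_around_object(x, y, z, half_width_m, half_height_m,
                      fx=1000.0, fy=1000.0, cx=960.0, cy=540.0,
                      img_w=1920, img_h=1080):
    """Image-space bounding box (left, top, right, bottom) around
    a LiDAR-detected object of known approximate size."""
    left, top = project_to_pixel(x - half_width_m, y - half_height_m,
                                 z, fx, fy, cx, cy)
    right, bottom = project_to_pixel(x + half_width_m, y + half_height_m,
                                     z, fx, fy, cx, cy)
    # Clamp to the image so the ROI is always a valid crop.
    return (max(0, int(left)), max(0, int(top)),
            min(img_w, int(right)), min(img_h, int(bottom)))

# A pedestrian-sized object 2 m to the right and 20 m ahead:
print(roi_around_object(2.0, 0.0, 20.0, 0.4, 0.9))
```

In a real system the LiDAR-to-camera transform comes from extrinsic calibration and the intrinsics from camera calibration; here they are hard-coded for illustration.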

Good Marriage

The LiDAR and camera make a good marriage, as each brings strong points that complement the other. However, these are two very data-intensive sensors, posing high challenges in terms of CPU cost, energy consumption and real-time performance.

With Teraki's embedded technology, LiDAR data is reduced by 90% - 95% without an impact on the object detection rate. Once an object is detected, it is handed over to the camera sensor. There, Teraki 'zooms in' on the LiDAR-detected object and makes sure that the video data has the highest resolution where it matters, whereas other areas can be compressed significantly. The result: efficient use of hardware and applications running quickly without loss of detection quality.
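The arithmetic behind the stated 90% - 95% reduction is simple; the 20 MB/s input rate below is an assumed figure for illustration only:

```python
# What remains of a raw LiDAR stream after the stated reduction.

def remaining_rate_mb_s(raw_mb_s, reduction):
    """Stream rate left over after a fractional data reduction."""
    return raw_mb_s * (1 - reduction)

raw = 20.0  # assumed raw LiDAR rate in MB/s
for reduction in (0.90, 0.95):
    print(f"{reduction:.0%} reduction -> "
          f"{remaining_rate_mb_s(raw, reduction):.1f} MB/s")
```

At these ratios the stream shrinks by an order of magnitude, which is what makes real-time processing on low-powered in-car hardware plausible.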
