What are the best algorithms for processing automotive sensor data on the edge?

The amount of sensor data generated in the automotive industry is rapidly increasing, and with it the spectrum of analytical tasks that can be performed on this data. See some use case examples in our previous blog. The data generated in a car can either be transmitted to the cloud and analyzed there, or analyzed directly at the edge; we discussed the importance of Edge Analytics in an earlier blog. In both cases, however, the same severe challenge remains: how to handle large data volumes. The first solution that comes to mind is to apply existing data compression techniques.

Previously we wrote about Teraki Intelligent Edge Processing efficiently handling video data streams: Teraki's "ROI" technology for video. In this blog we present a data processing solution suited for telematics (i.e. time-series) data and compare its performance against conventional data compression techniques.

In order to talk about the performance of these techniques we should consider several key performance indicators and the inherent trade-off between them:

  • Reduction (Goal: Maximize)
  • Accuracy (Goal: Maximize)
  • Memory usage (Goal: Minimize)
  • Latency (Goal: Minimize)

Lossless compression libraries, such as zlib, can achieve data reduction ratios in the range of 2:1 to 5:1 depending on the input data. However, they require memory in the range of several hundred kB, which severely limits their use on memory-constrained devices where typically only a few kB are available. Such hard hardware constraints are found in, for example, the automotive, industrial robotics and drone industries.
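As a point of reference, here is a minimal sketch of how such a lossless reduction ratio can be measured with zlib. The synthetic 16-bit acceleration trace and its size are illustrative assumptions, not Teraki data:

```python
import zlib
import numpy as np

# Illustrative assumption: one second of a 16-bit acceleration channel
# sampled at 1 kHz (2000 bytes of raw payload).
rng = np.random.default_rng(0)
raw = (100.0 * np.sin(np.linspace(0.0, 20.0, 1000))
       + rng.normal(0.0, 10.0, 1000)).astype(np.int16)

payload = raw.tobytes()
compressed = zlib.compress(payload, level=9)

print(f"lossless reduction ratio: {len(payload) / len(compressed):.1f}:1")
# zlib.decompress(compressed) restores the payload bit-exactly, but the
# library's working buffers alone exceed the few kB of RAM available on
# many automotive microcontrollers.
```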

Conventional lossy compression techniques such as sub-sampling, Fourier filtering or PLA (Piecewise Linear Approximation) can achieve high reductions depending on their specific parameters. However, the reconstruction error is typically not controlled directly; instead, several parameters have to be tweaked until a desired value is reached on a particular dataset. This procedure does not guarantee that the reconstruction error stays bounded on a different dataset, which is undesirable for safety-critical automotive use cases such as crash detection or predictive maintenance.
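For illustration, here is a rough sketch of two of these conventional techniques, assuming NumPy and a synthetic signal. Note how the tuning parameters (sub-sampling factor, fraction of Fourier bins kept) fix the reduction, while the resulting maximum deviation depends entirely on the data:

```python
import numpy as np

def subsample(signal, factor):
    """Keep every `factor`-th sample and rebuild the full-rate signal
    by linear interpolation; the reduction ratio is fixed by `factor`."""
    x_full = np.arange(len(signal))
    return np.interp(x_full, x_full[::factor], signal[::factor])

def fourier_filter(signal, keep_fraction):
    """Zero out all but the lowest `keep_fraction` of frequency bins."""
    spectrum = np.fft.rfft(signal)
    n_keep = max(1, int(len(spectrum) * keep_fraction))
    spectrum[n_keep:] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# The same parameters give very different errors on a smooth signal
# versus one containing a short, crash-like transient.
t = np.linspace(0.0, 10.0, 1000)
smooth = np.sin(t)
spiky = np.sin(t) + 5.0 * (np.abs(t - 5.0) < 0.05)

for name, sig in (("smooth", smooth), ("spiky", spiky)):
    for label, rec in (("8:1 sub-sampling", subsample(sig, 8)),
                       ("Fourier filter, 12.5% of bins", fourier_filter(sig, 0.125))):
        print(f"{name:6s} | {label:28s} | max deviation {np.max(np.abs(sig - rec)):.3f}")
```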

The Teraki solution for time series, on the other hand, lets the user specify the maximum deviation (in sensor units) of the reconstructed signal with respect to the raw signal. The resulting data reduction then depends explicitly on the specified maximum deviation: the larger the allowed deviation, the higher the reduction. Having control over the exact allowed accuracy threshold is essential for automotive applications.
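Teraki's algorithm itself is not public, so the sketch below only illustrates this interface with a generic dead-band scheme (explicitly not Teraki's method): the maximum deviation in sensor units is the single input, and the reduction follows from it.

```python
import numpy as np

def deadband_compress(signal, max_dev):
    """Generic error-bounded sketch (not Teraki's algorithm): keep a sample
    only when holding the last kept value would deviate from the raw signal
    by more than `max_dev` sensor units."""
    kept_idx, kept_val = [0], [float(signal[0])]
    for i in range(1, len(signal)):
        if abs(signal[i] - kept_val[-1]) > max_dev:
            kept_idx.append(i)
            kept_val.append(float(signal[i]))
    return kept_idx, kept_val

def deadband_reconstruct(kept_idx, kept_val, n):
    """Hold each kept value until the next kept sample."""
    rec = np.empty(n)
    bounds = kept_idx[1:] + [n]
    for (start, value), end in zip(zip(kept_idx, kept_val), bounds):
        rec[start:end] = value
    return rec

# Random-walk "sensor" signal, arbitrary units.
signal = np.cumsum(np.random.default_rng(1).normal(0.0, 1.0, 1000))
for max_dev in (0.5, 2.0, 5.0):
    idx, val = deadband_compress(signal, max_dev)
    rec = deadband_reconstruct(idx, val, len(signal))
    print(f"max_dev={max_dev}: reduction {len(signal) / len(idx):.1f}:1, "
          f"actual max deviation {np.max(np.abs(signal - rec)):.2f}")
```

Running this shows the behaviour described above: the reconstruction error never exceeds the specified bound, and relaxing the bound directly increases the reduction.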

In addition, Teraki Intelligent Edge Processing offers a better trade-off between reduction and reconstruction error than conventional lossy techniques: higher reductions can be reached at the same reconstruction error, or smaller reconstruction errors at the same reduction.

To illustrate this, we show below the achieved reduction and reconstruction errors for the lossy techniques mentioned above. As an example we use data from two sensors typically found in automotive applications: acceleration and velocity signals (sensor units are arbitrary). The parameters of each technique were tweaked to reach reductions comparable to those achieved by Teraki Intelligent Edge Processing. The deviation metric used in this example is the maximum deviation relative to the signal amplitude.
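In other words, the metric is max |raw - reconstructed| divided by the amplitude max(raw) - min(raw). A small helper (hypothetical naming) computing this metric and the reduction ratio used in the tables below could look like this:

```python
import numpy as np

def rel_max_deviation(raw, rec):
    """Maximum deviation relative to the signal amplitude, in percent."""
    raw, rec = np.asarray(raw, dtype=float), np.asarray(rec, dtype=float)
    amplitude = np.max(raw) - np.min(raw)
    return 100.0 * np.max(np.abs(raw - rec)) / amplitude

def reduction_ratio(raw_size, reduced_size):
    """E.g. 2000 raw bytes reduced to 256 bytes gives a ~7.8:1 ratio."""
    return raw_size / reduced_size
```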

Below we share some measurements on acceleration and speed data of a car. The upper section of each graph (blue and green) shows the raw data and the reconstructed data. The lower section (yellow) shows the deviation for each technique when performing at similar reduction rates; higher peaks mean larger deviations and hence lower accuracy.

[Figure: acceleration graphs in the Teraki DevCenter user interface]

Acceleration signal

Approach             Reduction Ratio    Rel. Max Deviation
Teraki               7.8:1              2.3%
Subsampling          8.0:1              24.5%
Fourier Filtering    7.8:1              17.4%
PLA                  7.8:1              17.6%

For this signal, the Teraki approach achieves maximum deviations that are a factor of ~8-10 smaller than those of the other lossy techniques at approximately the same reduction (~8:1 reduction ratio). The relative maximum deviations achieved at this level of reduction by sub-sampling, Fourier filtering or PLA (~17-24 %, corresponding to deviations of ~50-100 sensor units as displayed in the raw-rec difference plots in yellow) may be too large for analytics use cases. For example, interesting time patterns in acceleration may be of the order of a few sensor units (<10). If the signal has to be reconstructed at this level, the parameters of the sub-sampling, Fourier-filtering or PLA techniques have to be changed, which substantially decreases the achieved reduction.

Teraki Intelligent Edge Processing, on the other hand, is able to keep the reconstruction error in this range and still achieve higher reductions.

Let’s look now at a velocity signal:

Velocity graphs

Velocity signal

Approach             Reduction Ratio    Rel. Max Deviation
Teraki               58.8:1             0.7%
Subsampling          50.0:1             6.4%
Fourier Filtering    55.6:1             5.8%
PLA                  58.8:1             12.6%
For the velocity signal, Teraki again achieves maximum deviations that are a factor of ~8-18 smaller than those of the other lossy techniques (0.7% vs. 5.8-12.6%) at comparable reduction ratios of ~50-60:1.

Beyond this better reduction-accuracy trade-off, Teraki Intelligent Edge Processing also offers:

  • An efficient utilization of memory resources (example: to process 50 signals at 100Hz, Teraki requires less than 8 kB of RAM/ROM)
  • Better performance in specific edge analytics tasks when working directly on reduced data (Example: 15% higher accuracy for a crash classification use case)

These will be the topics of future blogs.

SUMMARY

This blog concludes with an overview of the points made in the qualitative comparisons and measurements above, summing up why Teraki Intelligent Edge Processing is the best fit for pre-processing time-series sensor data in cars:

Higher accuracy: compared to conventional lossy techniques, Teraki achieves maximum deviations that are a factor of ~10 to 100 smaller at the same reduction.

Meaningful accuracy: compared to lossless techniques, the Teraki reconstruction error stays within the sensor noise. The extra accuracy delivered by lossless techniques therefore carries no practical value, while those techniques require much more local RAM and CPU.

Hardware fit: compared to lossless techniques, the solution fits in the constrained RAM commonly found in automotive hardware, where lossless compression software simply does not fit.

Automotive grade: Teraki's technology is deterministic, which means that outcomes can be traced back. As opposed to e.g. neural networks, this makes the Teraki technology certifiable and hence applicable to safety-related applications.

Quicker: fewer data points need to be processed, which results in lower processing (and transmission) times and hence lower overall application latency.

Would you like to be informed about the state of the art in edge processing and automotive applications? Sign up for Teraki's fortnightly newsletter. Sign Up now
