Approach | Reduction ratio | Rel Max Deviation |
---|---|---|
Teraki | 7.8:1 | 2.3% |
Subsampling | 8.0:1 | 24.5% |
Fourier Filtering | 7.8:1 | 17.4% |
PLA | 7.8:1 | 17.6% |

For this signal, the Teraki approach achieves a factor of ~8-10 smaller maximum deviation than the other lossy techniques at approximately the same reduction (~8:1 reduction ratio). The relative maximum deviations reached at this level of reduction by techniques such as Subsampling, Fourier Filtering or PLA (~17-24%, corresponding to deviations of ~50-100 sensor units, shown in yellow in the raw-reconstructed difference plots) may be too large for analytics use cases. For example, interesting time patterns in acceleration can be of the order of a few sensor units (<10). If the signal must be reconstructed at that accuracy, the parameters of the Subsampling, Fourier Filtering or PLA techniques have to be changed, which substantially decreases the achieved reduction.
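To make the two metrics in the tables concrete, here is a minimal sketch of how "reduction ratio" and "relative maximum deviation" can be computed for the simplest lossy technique, subsampling with linear-interpolation reconstruction. All names and the synthetic signal are illustrative assumptions, not Teraki's implementation or data.

```python
import numpy as np

def subsample_metrics(signal, factor):
    """Keep every `factor`-th sample, reconstruct by linear interpolation,
    and return (reduction_ratio, rel_max_deviation_percent)."""
    idx = np.arange(0, len(signal), factor)
    kept = signal[idx]
    # Reconstruct the full-length signal from the kept samples.
    recon = np.interp(np.arange(len(signal)), idx, kept)
    reduction = len(signal) / len(kept)
    # Maximum absolute error, relative to the signal's peak-to-peak range.
    rel_max_dev = np.max(np.abs(signal - recon)) / np.ptp(signal) * 100
    return reduction, rel_max_dev

# Synthetic test signal: a slow component plus a faster one that
# subsampling struggles to preserve.
t = np.linspace(0, 1, 800)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
ratio, dev = subsample_metrics(sig, 8)
print(f"reduction {ratio:.1f}:1, rel. max deviation {dev:.1f}%")
```

Running variations of this (changing `factor`) shows the trade-off discussed above: pushing subsampling toward an 8:1 ratio inflates the maximum deviation, and shrinking the deviation sacrifices reduction.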

Teraki Intelligent Edge Processing on the other hand is able to keep the reconstruction error in this range and still achieve higher reductions.

Let's now look at a velocity signal:

Approach | Reduction ratio | Rel Max Deviation |
---|---|---|
Teraki | 58.8:1 | 0.7% |
Subsampling | 50.0:1 | 6.4% |
Fourier Filtering | 55.6:1 | 5.8% |
PLA | 58.8:1 | 12.6% |
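For readers unfamiliar with PLA (piecewise linear approximation), the rows above can be reproduced in spirit with a short greedy sketch: each segment is extended until the straight line between its endpoints deviates from the signal by more than a tolerance. This is a generic textbook variant with illustrative names, not Teraki's or any vendor's implementation.

```python
import numpy as np

def pla_keep_points(signal, tol):
    """Return indices of samples kept by a greedy endpoint-fit PLA."""
    kept = [0]
    start = 0
    for end in range(2, len(signal)):
        # Fit a straight line between the current segment endpoints ...
        xs = np.arange(start, end + 1)
        line = np.interp(xs, [start, end], [signal[start], signal[end]])
        # ... and close the segment once the max deviation exceeds tol.
        if np.max(np.abs(signal[start:end + 1] - line)) > tol:
            kept.append(end - 1)
            start = end - 1
    kept.append(len(signal) - 1)
    return kept

t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 3 * t)
idx = pla_keep_points(sig, tol=0.02)
print(f"reduction {len(sig) / len(idx):.1f}:1")
```

The tolerance directly controls the trade-off in the table: a tighter `tol` lowers the maximum deviation but keeps more breakpoints, reducing the achievable ratio.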

Beyond accuracy at high reduction ratios, Teraki Intelligent Edge Processing offers further advantages:

- Efficient utilization of memory resources (example: to process 50 signals at 100 Hz, Teraki requires less than 8 kB of RAM/ROM)
- Better performance in specific edge-analytics tasks when working directly on the reduced data (example: 15% higher accuracy in a crash-classification use case)

These will be the topics of future blogs.

**SUMMARY**

This blog concludes with an overview of the points made in the qualitative comparisons and measurements above, summing up why Teraki Intelligent Edge Processing is the best fit for pre-processing time-series sensor data in cars:

**Higher accuracy:** Compared to lossy techniques, Teraki achieves a factor of ~10 to 100 smaller maximum deviations at the same reduction.

**Meaningful accuracy:** Compared to lossless techniques, Teraki's accuracy stays within the sensor noise. The 'higher' accuracy of lossless techniques therefore carries no value or relevance: the extra accuracy is 'useless', while those techniques demand much more local RAM and CPU.


**Hardware fit:** Compared to lossless techniques, the solution fits within the constrained RAM commonly found in automotive hardware, where compression software often simply does not fit.

**Automotive grade:** Teraki's technology is deterministic, meaning its outcomes can be traced back. Unlike, for example, neural networks, this makes the technology certifiable and hence applicable to safety-related applications.

**Quicker:** Since fewer data points need to be processed, processing (and transmission) times are lower, thereby reducing overall application latency.

Would you like to be informed about the state of the art in edge processing and automotive applications? Sign up for Teraki's fortnightly newsletter. Sign up now