Parameter continuity in time-varying Gauss–Markov models for learning from small training data sets

Result of the project: Cluster 4.0 – Methodology of System Integration
Project ID: EF16_026/0008432
Authors: Martin Ron, Pavel Burget, Václav Hlaváč
Published in: Information Sciences (Elsevier)
Volume 595, May 2022, Pages 197–216
Link: https://www.sciencedirect.com/science/article/abs/pii/S0020025522001724
DOI: https://doi.org/10.1016/j.ins.2022.02.037


Abstract:

Linear time-invariant dynamic models are widely adopted in industry. In the machine learning domain, such models are known as time-invariant continuous-state hidden Gauss–Markov models. Their superclass, linear time-varying dynamic models, is applied comparatively rarely to the prediction and classification of time series, typically because of the model complexity and the need for a significantly larger training set than time-invariant models require. Without a large training set, the better modeling performance is counteracted by a less robust model. In this paper, we propose a continuity preference on the time-varying parameters of the model, which significantly reduces the required amount of training data while maintaining the modeling performance. We also derive a simple modification of the Expectation–Maximization algorithm that incorporates this continuity of the parameters; the modified algorithm shows robust learning performance. The model's performance is demonstrated by experiments on real 6-axis robotic manipulators, both in a laboratory and in the body shop of the car manufacturer Škoda Auto, as well as on a public benchmark data set.
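For illustration, a time-varying Gauss–Markov (linear-Gaussian state-space) model takes the form x_{t+1} = A_t x_t + w_t, y_t = C_t x_t + v_t with Gaussian noises w_t and v_t, so every time step carries its own parameters, and a small training set leaves each A_t poorly constrained. The sketch below shows one plausible way a quadratic continuity penalty lam * ||A_t - A_{t-1}||_F^2 can enter the M-step update of the transition matrices. It is a minimal illustration of the idea only, not the paper's exact algorithm; the function name, the E-step statistics P and W, the penalty weight lam, and the coordinate-descent sweep are all assumptions made for this example.

import numpy as np

def continuity_regularized_m_step(P, W, lam=1.0, sweeps=20):
    # Illustrative M-step for the time-varying transition matrices A_1..A_T
    # of a linear-Gaussian state-space model, with a quadratic continuity
    # penalty coupling neighbouring time steps (assumed form, not the paper's).
    #
    # P[t]: E-step cross statistic,  E[x_{t+1} x_t^T]  (shape T x n x n)
    # W[t]: E-step second moment,    E[x_t x_t^T]      (shape T x n x n)
    # lam:  weight of the continuity penalty lam * ||A_t - A_{t±1}||_F^2
    T, n, _ = P.shape
    # Unregularized per-step estimates A_t = P_t W_t^{-1} as initialization.
    A = np.stack([p @ np.linalg.inv(w) for p, w in zip(P, W)])
    eye = np.eye(n)
    for _ in range(sweeps):  # coordinate descent over the time index t
        for t in range(T):
            nbrs = [A[s] for s in (t - 1, t + 1) if 0 <= s < T]
            # Closed-form minimizer in A of the penalized quadratic
            #   -2 tr(A^T P_t) + tr(A W_t A^T) + lam * sum_j ||A - A_j||_F^2
            A[t] = (P[t] + lam * sum(nbrs)) @ np.linalg.inv(W[t] + len(nbrs) * lam * eye)
    return A

With lam = 0 the update reduces to independent per-time-step estimates; as lam grows, neighbouring matrices are pulled together and the fit approaches a time-invariant model. In this sketch, lam therefore trades off modeling flexibility against robustness on small training sets, mirroring the trade-off described in the abstract.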

This project has received funding from the Ministry of Education, Youth and Sports under the Operational Programme Research, Development and Education, agreement No. EF16_026/0008432.