Stable Adaptive Model Learning of Neural Networks
Because batch algorithms cope poorly with model mismatch and disturbances, this contribution proposes an adaptive scheme, based on a continuous Lyapunov function, for online identification of robot dynamics. The paper presents stable update rules for training neural networks, inspired by the model reference adaptive control paradigm. The network structure consists of three parallel, self-driven neural networks, each estimating one of the robot's dynamic terms individually. A Lyapunov candidate is selected to construct an energy surface for a convex optimization framework, and the learning rules are derived directly from it so that its time derivative is rendered negative. Finally, experimental results on a 3-DOF Phantom Omni haptic device demonstrate the efficiency of the proposed method.
For reference:
@inproceedings{agand2019adaptive,
  title={Adaptive model learning of neural networks with UUB stability for robot dynamic estimation},
  author={Agand, Pedram and Shoorehdeli, Mahdi Aliyari},
  booktitle={2019 International Joint Conference on Neural Networks (IJCNN)},
  pages={1--6},
  year={2019},
  organization={IEEE}
}
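The exact update laws and stability proof are in the paper cited above; as a rough, non-authoritative illustration of the general idea (a Lyapunov-derived, MRAC-style adaptation rule for a linearly parameterized estimator), one could sketch it as below. The RBF feature map, class and variable names, and gain values are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF feature map (an assumed basis choice, not the paper's)."""
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class LyapunovAdaptiveEstimator:
    """Linearly parameterized estimator tau_hat = W^T phi(x).

    For the candidate V = (1/(2*gamma)) * tr(W_tilde^T W_tilde), the rule
    dW/dt = gamma * phi * e^T gives V_dot = -||e||^2 <= 0 under ideal
    matching, i.e. the textbook Lyapunov-derived adaptation law.
    """

    def __init__(self, n_features, n_outputs, gamma=50.0):
        self.W = np.zeros((n_features, n_outputs))
        self.gamma = gamma  # adaptation gain (illustrative value)

    def predict(self, phi):
        return self.W.T @ phi

    def update(self, phi, e, dt=1e-3):
        # Euler-discretized Lyapunov adaptation law.
        self.W += dt * self.gamma * np.outer(phi, e)

# Toy usage: learn the gravity torque of a 1-link pendulum online.
centers = np.linspace(-np.pi, np.pi, 15)[:, None]
net = LyapunovAdaptiveEstimator(n_features=15, n_outputs=1)
for _ in range(5000):
    q = np.random.uniform(-np.pi, np.pi, size=1)
    tau = np.array([9.81 * np.sin(q[0])])  # "measured" torque sample
    phi = rbf_features(q, centers)
    net.update(phi, tau - net.predict(phi))
```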
Interesting architecture. How do you ensure that each network learns only its own dynamic term and does not pick up components that belong to the other networks?
Thanks for your question. There are augmentation errors that confine the feasible set. Also, with a sufficiently rich training dataset, we can avoid getting stuck in local minima.
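To make that concrete, one plausible wiring (an assumption for illustration, not taken from the paper) is to feed each parallel network only the signals its term depends on under the standard manipulator form tau = M(q)*qdd + C(q,qd)*qd + g(q), so the shared prediction error cannot be arbitrarily absorbed by the wrong network. A minimal sketch, reusing the LyapunovAdaptiveEstimator and rbf_features from the snippet above:

```python
import numpy as np

def step(nets, feats, q, qd, qdd, tau, dt=1e-3):
    """One online update of three parallel estimators (hypothetical wiring).

    Each network sees only the inputs its dynamic term depends on; given
    sufficiently rich data, this keeps the shared error from being
    attributed to the wrong term.
    """
    phi_M = feats["inertia"](np.concatenate([q, qdd]))   # inertia torque
    phi_C = feats["coriolis"](np.concatenate([q, qd]))   # Coriolis/centrifugal
    phi_g = feats["gravity"](q)                          # gravity torque
    tau_hat = (nets["inertia"].predict(phi_M)
               + nets["coriolis"].predict(phi_C)
               + nets["gravity"].predict(phi_g))
    e = tau - tau_hat  # one shared error drives all three update laws
    for name, phi in (("inertia", phi_M), ("coriolis", phi_C), ("gravity", phi_g)):
        nets[name].update(phi, e, dt)
    return tau_hat, e
```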