Reinforcement learning is a promising method for the automated tuning of controllers, but it is still rarely applied to real systems such as longitudinal vehicle control, since it struggles with real-time requirements, noise, partially observed dynamics, and delays. We propose a model-based reinforcement learning algorithm for speed tracking control on constrained hardware. To cope with partially observed dynamics, delay, and noise, our algorithm relies on an autoregressive model with external inputs (ARX model) that is learned with a decaying step size. The controller is then updated by policy search on the learned model. Multiple experiments show that the proposed algorithm is capable of learning a controller in a real vehicle across different speed ranges and with a variety of exploration noise distributions and amplitudes. The results show that the proposed approach performs on par with a recently published model-free reinforcement learning method in most conditions, e.g. when adapting the controller to very low speeds, but succeeds in learning with a wider variety of exploration noise types.
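The core loop described above, learning an ARX model of the longitudinal dynamics with a decaying step size and then improving the controller by policy search on that learned model, can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the ARX orders, the normalized gradient update used as the decaying-step-size learner, the PI-style controller parameterization, and the random-search policy search are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
na, nb = 3, 3                      # assumed ARX orders (illustrative)
theta = np.zeros(na + nb)          # ARX parameters

def phi(y_hist, u_hist):
    """Regressor of past outputs (speeds) and inputs (acceleration commands)."""
    return np.concatenate([y_hist[-na:][::-1], u_hist[-nb:][::-1]])

def arx_update(theta, y_hist, u_hist, y_new, t, a=1.0, b=0.05, eps=1e-6):
    """Normalized gradient step on the squared prediction error,
    with decaying step size a / (1 + b * t) (an illustrative stand-in)."""
    x = phi(y_hist, u_hist)
    err = y_new - theta @ x
    return theta + (a / (1.0 + b * t)) * err * x / (eps + x @ x)

def rollout_cost(theta, gains, ref, horizon=200):
    """Tracking cost of a PI-style controller simulated on the learned ARX model."""
    kp, ki = gains
    y_hist, u_hist = np.zeros(na), np.zeros(nb)
    integ, cost = 0.0, 0.0
    for _ in range(horizon):
        e = ref - y_hist[-1]
        integ += e
        u = np.clip(kp * e + ki * integ, -1.0, 1.0)   # saturated actuation
        u_hist = np.append(u_hist[1:], u)
        y = theta @ phi(y_hist, u_hist)
        cost += e * e
        y_hist = np.append(y_hist[1:], y)
    return cost

# --- learn the ARX model from logged input/output data with exploration noise ---
T = 500
u_log = 0.3 * np.sin(0.05 * np.arange(T)) + 0.1 * rng.normal(size=T)  # excitation + noise
true = np.array([0.7, 0.2, 0.0, 0.5, 0.3, 0.0])                       # toy "plant" for the demo
y_hist, u_hist = np.zeros(na), np.zeros(nb)
for t in range(T):
    u_hist = np.append(u_hist[1:], u_log[t])
    y_new = true @ phi(y_hist, u_hist) + 0.01 * rng.normal()          # noisy measurement
    theta = arx_update(theta, y_hist, u_hist, y_new, t)
    y_hist = np.append(y_hist[1:], y_new)

# --- policy search (simple random search) on the learned model ---
gains = np.array([0.5, 0.01])
best = rollout_cost(theta, gains, ref=1.0)
for _ in range(200):
    cand = gains + 0.05 * rng.normal(size=2)
    cost = rollout_cost(theta, cand, ref=1.0)
    if cost < best:
        gains, best = cand, cost

print("learned ARX parameters:", np.round(theta, 3))
print("tuned gains (kp, ki):", np.round(gains, 3), "model cost:", round(best, 2))
```

The sketch only mirrors the structure of the approach: logged data with exploration noise feeds an incrementally learned ARX model, and the controller is tuned entirely against that model rather than on the vehicle itself.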