Speed Tracking Control Using Model-Based Reinforcement Learning in a Real Vehicle

Fast facts

  • Internal authorship

  • Further authors

    Luca Puccetti, Ahmed Yasser, Christian Rathgeber, Sören Hohmann

  • Publication

    • 2021
    • 2021 IEEE Intelligent Vehicles Symposium (IV)
  • Organizational unit

  • Subjects

    • Information science
    • Engineering informatics / computer engineering
  • Publication format

    Conference paper

Content

Reinforcement learning is a promising method for automatically tuning controllers, but it is still rarely applied to real systems such as longitudinal vehicle control, since it struggles with real-time requirements, noise, partially observed dynamics, and delays. We propose a model-based reinforcement learning algorithm for speed tracking control on constrained hardware. To cope with partially observed dynamics, delay, and noise, our algorithm relies on an autoregressive model with exogenous inputs (ARX model) that is learned using a decaying step size. The output controller is updated by policy search on the learned model. Multiple experiments show that the proposed algorithm is capable of learning a controller in a real vehicle across different speed ranges and with a variety of exploration noise distributions and amplitudes. The results show that the proposed approach yields results similar to a recently published model-free reinforcement learning method in most conditions, e.g. when adapting the controller to very low speeds, but succeeds in learning with a wider variety of exploration noise types.
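The model-learning step described in the abstract can be sketched as follows: a low-order ARX model is fitted online from input/output data using a gradient update with a decaying step size. The model orders, the step-size schedule, and the simulated plant below are illustrative assumptions for a minimal sketch, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: online identification of a second-order ARX model with a
# decaying step size (normalized-LMS-style update). The plant, model orders,
# and step-size schedule are assumptions for illustration only.

rng = np.random.default_rng(0)
N = 400
u = rng.normal(0.0, 1.0, N)          # exploration input (white noise)
y = np.zeros(N)
for k in range(2, N):                # simulated "true" plant, only to generate data
    y[k] = (0.7 * y[k - 1] + 0.2 * y[k - 2]
            + 0.5 * u[k - 1] + 0.1 * u[k - 2]
            + 0.01 * rng.normal())

theta = np.zeros(4)                  # ARX parameters [a1, a2, b1, b2]
a0, tau = 0.5, 50.0                  # assumed schedule: alpha_k = a0 / (1 + k / tau)
for k in range(2, N):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])  # regressor
    err = y[k] - phi @ theta         # one-step prediction error
    alpha = a0 / (1.0 + k / tau)     # decaying step size
    theta += alpha * err * phi / (phi @ phi + 1e-8)

# Estimates should approach the true parameters [0.7, 0.2, 0.5, 0.1].
print(np.round(theta, 2))
```

In the paper's setting, a controller would then be updated by policy search against the learned model rather than against the vehicle directly; here only the identification step is shown.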

About the publication

Notes and references
