Speed Tracking Control Using Model-Based Reinforcement Learning in a Real Vehicle

Quick facts

  • Internal authorship

  • Other contributing authors

    Luca Puccetti, Ahmed Yasser, Christian Rathgeber, Sören Hohmann

  • Publication

    • 2021
    • Volume: 2021 IEEE Intelligent Vehicles Symposium (IV)
  • Organizational unit

  • Subject areas

    • Information science
    • Engineering informatics / computer engineering
  • Format

    Conference paper

Abstract

Reinforcement learning is a promising method for the automated tuning of controllers, but it is still rarely applied to real systems such as longitudinal vehicle control, since it struggles with real-time requirements, noise, partially observed dynamics, and delays. We propose a model-based reinforcement learning algorithm for the task of speed tracking control on constrained hardware. To cope with partially observed dynamics, delay, and noise, our algorithm relies on an autoregressive model with external inputs (ARX model) that is learned using a decaying step size. The output controller is updated by policy search on the learned model. Multiple experiments show that the proposed algorithm is capable of learning a controller in a real vehicle across different speed ranges and with a variety of exploration noise distributions and amplitudes. The results show that the proposed approach yields results similar to a recently published model-free reinforcement learning method in most conditions, e.g. when adapting the controller to very low speeds, but succeeds in learning with a wider variety of exploration noise types.
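The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows the two ingredients it names: an ARX model of vehicle speed updated online with a decaying step size, and a simple policy search that evaluates candidate controller gains by rolling them out on the learned model. The model orders, the step-size schedule, the PI controller structure, and the random search are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np

class ARXModel:
    """ARX(na, nb) speed model y_t = a . [y_{t-1..t-na}] + b . [u_{t-1..t-nb}],
    updated online with a decaying step size (orders and schedule are assumed)."""
    def __init__(self, na=3, nb=3, step0=0.5, decay=0.01):
        self.na, self.nb = na, nb
        self.theta = np.zeros(na + nb)      # [a_1..a_na, b_1..b_nb]
        self.step0, self.decay = step0, decay
        self.t = 0

    def _phi(self, y_hist, u_hist):
        # y_hist: last na speeds (newest first); u_hist: last nb inputs (newest first)
        return np.concatenate([y_hist[:self.na], u_hist[:self.nb]])

    def predict(self, y_hist, u_hist):
        return float(self._phi(y_hist, u_hist) @ self.theta)

    def update(self, y_hist, u_hist, y_measured):
        # Normalized gradient step; the decaying step size averages out measurement noise.
        phi = self._phi(y_hist, u_hist)
        error = y_measured - phi @ self.theta
        step = self.step0 / (1.0 + self.decay * self.t)
        self.theta += step * error * phi / (1e-6 + phi @ phi)
        self.t += 1
        return error

def rollout_cost(model, gains, v_ref, horizon=200, dt=0.1):
    """Tracking cost of a candidate PI speed controller, simulated on the learned
    ARX model (the paper's controller parametrization may differ)."""
    kp, ki = gains
    y_hist = np.zeros(model.na)             # recent speeds, newest first
    u_hist = np.zeros(model.nb)             # recent control inputs, newest first
    integ, cost = 0.0, 0.0
    for _ in range(horizon):
        err = v_ref - y_hist[0]
        integ += err * dt
        u = kp * err + ki * integ           # PI control action
        u_hist = np.concatenate(([u], u_hist[:-1]))
        y = model.predict(y_hist, u_hist)
        y_hist = np.concatenate(([y], y_hist[:-1]))
        cost += err ** 2
    return cost

# Policy search on the learned model: here a plain random search over the gains.
rng = np.random.default_rng(0)
model = ARXModel()
candidates = [rng.uniform([0.0, 0.0], [2.0, 1.0]) for _ in range(100)]
best_gains = min(candidates, key=lambda g: rollout_cost(model, g, v_ref=10.0))
```

In the real vehicle, the model parameters would be updated from measured speeds while exploration noise is added to the control input, and the search over controller parameters would be repeated as the learned model improves; the sketch only shows the offline ingredients of that loop.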
