Enhancing Dimensionality Reduction in Driving Behavior Learning: Integrating SENet with VAE

Yuta Uehara, Susumu Matsumae

Abstract


This study addresses a common limitation of conventional Variational Autoencoder (VAE)-based methods in dimensionality reduction for state representation learning, particularly in autonomous driving, by integrating Squeeze-and-Excitation Networks (SENet) into the VAE framework. While traditional VAE approaches handle high-dimensional data effectively and at reduced computational cost, they often fail to capture complex features adequately in certain tasks. To overcome this challenge, we propose the SENet-VAE model, which incorporates SENet into the VAE architecture, and evaluate its performance in driving behavior learning with deep reinforcement learning. Our experiments compare three setups: raw image data, a conventional VAE, and SENet-VAE. Furthermore, we examine how the placement and number of SE-Blocks affect performance. The results demonstrate that SENet-VAE overcomes the limitations of the conventional VAE and achieves superior learning accuracy. This work highlights the potential of SENet-VAE as a robust dimensionality reduction solution for state representation learning.

Keywords


Squeeze-and-excitation network; Variational autoencoder; Dimensionality reduction; Autonomous driving; Deep reinforcement learning
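As a concrete illustration of the integration described in the abstract, the sketch below shows one way an SE-Block could be inserted into a VAE encoder in PyTorch. The layer sizes, latent dimension, reduction ratio, and SE-Block placement are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: an SE-Block inside a VAE encoder (PyTorch).
# All architectural choices below (channel counts, latent dim, SE placement)
# are assumptions for illustration only.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: reweights channels by learned importance."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling -> (b, c)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # scale: channel-wise recalibration


class SEVAEEncoder(nn.Module):
    """VAE encoder with an SE-Block after the second convolution (assumed placement)."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            SEBlock(64),                                 # channel recalibration
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(inplace=True),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, x: torch.Tensor):
        h = self.features(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar


if __name__ == "__main__":
    encoder = SEVAEEncoder(latent_dim=32)
    frames = torch.randn(4, 3, 64, 64)  # a batch of camera frames
    z, mu, logvar = encoder(frames)
    print(z.shape)                      # torch.Size([4, 32])
```

The latent vector z would then serve as the compressed state representation fed to the reinforcement learning policy; varying where (and how many times) SEBlock appears in the encoder corresponds to the placement/number ablation mentioned in the abstract.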
