Regular Papers

International Journal of Control, Automation, and Systems 2025; 23(1): 346-357

Published online January 1, 2025 | https://doi.org/10.1007/s12555-024-0495-y

© The International Journal of Control, Automation, and Systems

Perceptual Enhancement for Unsupervised Monocular Visual Odometry

Zhongyi Wang*, Mengjiao Shen, Chengju Liu, and Qijun Chen

Tongji University

Abstract

Visual odometry is pivotal in robotics and autonomous driving, serving as a key component of visual simultaneous localization and mapping technology. In real-world scenarios, humans perceive less information under local low-light conditions, which can impair their judgment and actions. Similarly, visual odometry can become confused under these conditions, leading to compromised performance. To address the challenges that local low-light images pose to monocular visual odometry, we propose an unsupervised monocular visual odometry framework. To the best of our knowledge, this is the first time unsupervised monocular visual odometry and local low-light image enhancement have been accomplished within a unified framework. First, we employ Retinex theory and the discrete Fourier transform to decompose, filter, and synthesize the original image; for the filtering step, we propose a novel learnable global filtering network. Next, we feed the enhanced images into the depth and pose networks, which generate the corresponding depth maps and inter-frame poses. Finally, we construct a photometric consistency loss, a depth loss, and a novel low-light smoothness loss to train the entire network. Experimental validation shows that our method achieves superior performance on the KITTI dataset and generalizes satisfactorily to unseen environments from the Oxford RobotCar dataset.
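As a rough, self-contained illustration of the enhancement step the abstract describes (a Retinex-style decomposition followed by learnable filtering in the frequency domain via the discrete Fourier transform), the PyTorch sketch below applies a single learnable complex mask to the 2D DFT of the log-image. The module name `LearnableGlobalFilter`, the log-domain formulation, the all-pass initialization, and all shapes are our own illustrative assumptions, not the authors' network design.

```python
import torch
import torch.nn as nn

class LearnableGlobalFilter(nn.Module):
    """Illustrative sketch: filter an image globally in the frequency domain.

    A Retinex-style view treats an image as illumination * reflectance, so
    working in the log domain turns that product into a sum. Here we filter
    the log-image with a learnable complex mask applied to its 2D discrete
    Fourier transform, then map back to the spatial domain. This is an
    assumption-laden sketch, not the paper's actual architecture.
    """

    def __init__(self, height: int, width: int):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency columns; one complex weight
        # per frequency bin, initialized as an identity (all-pass) filter.
        self.filter = nn.Parameter(
            torch.ones(height, width // 2 + 1, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image intensities in (0, 1]
        log_x = torch.log(x.clamp(min=1e-4))          # Retinex-style log domain
        freq = torch.fft.rfft2(log_x, norm="ortho")   # 2D DFT over H, W
        freq = freq * self.filter                     # learnable global filtering
        filtered = torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
        return torch.exp(filtered)                    # back to intensity domain


# Usage sketch: enhance a batch of frames before the depth and pose networks.
if __name__ == "__main__":
    enhancer = LearnableGlobalFilter(height=128, width=416)
    frames = torch.rand(2, 3, 128, 416).clamp(min=1e-3)
    enhanced = enhancer(frames)
    print(enhanced.shape)  # torch.Size([2, 3, 128, 416])
```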

Keywords: Local low-light image, monocular visual odometry, perceptual enhancement, unsupervised learning.
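Of the three training signals the abstract names, two can be sketched in a few lines: a minimal photometric consistency term between a target frame and a source frame warped into the target view, and an edge-aware smoothness term with extra weight in dark regions. The low-light weighting (2 − brightness), the omission of an SSIM term, and all function names are our illustrative assumptions; the depth loss is left out entirely because its form is not specified in the abstract.

```python
import torch

def photometric_loss(target: torch.Tensor, warped: torch.Tensor) -> torch.Tensor:
    """L1 photometric consistency between the target frame and a source
    frame warped into the target view using predicted depth and pose.
    (Many implementations add an SSIM term; omitted here for brevity.)"""
    return (target - warped).abs().mean()

def lowlight_smoothness_loss(disp: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
    """Edge-aware disparity smoothness, extra-weighted in dark regions.

    The weighting (2 - brightness) is our illustrative assumption: it asks
    the depth map to be smoother where the image is dark and gradients are
    unreliable. The paper's exact low-light smoothness term may differ.
    disp: (B, 1, H, W) predicted disparity; img: (B, 3, H, W) in [0, 1].
    """
    # Mean-normalize disparity to stabilize the loss scale.
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)

    # Horizontal and vertical disparity gradients.
    dx_d = (disp[:, :, :, :-1] - disp[:, :, :, 1:]).abs()
    dy_d = (disp[:, :, :-1, :] - disp[:, :, 1:, :]).abs()

    # Image gradients, averaged over channels, gate the smoothness at edges.
    dx_i = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    dy_i = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)

    # Brightness in [0, 1]; darker pixels receive a larger smoothness weight.
    brightness = img.mean(1, keepdim=True)
    w_x = 2.0 - brightness[:, :, :, :-1]
    w_y = 2.0 - brightness[:, :, :-1, :]

    loss_x = (dx_d * torch.exp(-dx_i) * w_x).mean()
    loss_y = (dy_d * torch.exp(-dy_i) * w_y).mean()
    return loss_x + loss_y
```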

