Special Issue: ICROS 2023 Conference

International Journal of Control, Automation, and Systems 2023; 21(11): 3528-3539

https://doi.org/10.1007/s12555-023-0378-7

© The International Journal of Control, Automation, and Systems

Enhancing Low-light Images for Monocular Visual Odometry in Challenging Lighting Conditions

Donggil You, Jihoon Jung, and Junghyun Oh*

Kwangwoon University

Abstract

Visual odometry (VO) estimates a robot's current position from feature matching or brightness variation between images, making it primarily suitable for well-lit environments with good image quality. Consequently, existing visual odometry methods exhibit degraded performance in low-light or highly dynamic environments, limiting their operational efficiency in outdoor settings. To overcome these challenges, research has explored enhancing low-light images to improve odometry performance. Recent advances in deep learning have spurred extensive research on image enhancement, including enhancement under low-light conditions. Using generative adversarial networks (GANs) and techniques such as CycleGAN, researchers have achieved robust improvements across various lighting conditions and better odometry performance in low-light environments. However, these methods are typically trained on single images, compromising the structural consistency between consecutive images. In this paper, we propose a learning-based low-light image enhancement method that preserves structural consistency between consecutive images for monocular visual odometry. The proposed model uses the CycleGAN approach for domain transformation between different illumination levels, avoiding the failure of visual odometry in low-light environments. To handle diverse lighting conditions within an image, a local discriminator is employed to enhance local brightness. Additionally, a structure loss defined over sequential images ensures structural consistency between the original and generated images. The method simultaneously improves low-light conditions and preserves structural consistency, leading to enhanced visual odometry performance in low-light environments.

Keywords: Deep learning, generative adversarial network, low-light image enhancement, style transfer, visual odometry.
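The structure loss sketched in the abstract compares the scene structure of an original low-light frame with that of its enhanced counterpart. A minimal illustration of this idea, assuming a gradient-based (edge-map) formulation — the function names and exact definition here are illustrative assumptions, not the paper's actual loss:

```python
# Hedged sketch of a structural-consistency loss: compare the edge maps
# (finite-difference gradients) of an original frame and an enhanced frame.
# A brightness lift that preserves scene structure incurs near-zero loss,
# which is the property the abstract's structure loss is meant to enforce.

def gradients(img):
    """Horizontal and vertical finite differences of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    gx = [[img[y][x + 1] - img[y][x] for x in range(w - 1)] for y in range(h)]
    gy = [[img[y + 1][x] - img[y][x] for x in range(w)] for y in range(h - 1)]
    return gx, gy

def structure_loss(original, enhanced):
    """Mean absolute difference between the edge maps of two frames."""
    ogx, ogy = gradients(original)
    egx, egy = gradients(enhanced)
    diffs = [abs(a - b) for ro, re in zip(ogx, egx) for a, b in zip(ro, re)]
    diffs += [abs(a - b) for ro, re in zip(ogy, egy) for a, b in zip(ro, re)]
    return sum(diffs) / len(diffs)

dark = [[0.10, 0.20, 0.15],
        [0.12, 0.30, 0.22]]
lifted = [[p + 0.4 for p in row] for row in dark]  # structure-preserving lift
print(structure_loss(dark, lifted))                # near zero
```

A uniform brightness offset leaves the gradients unchanged, so the loss stays near zero, while an enhancement that distorts edges is penalized; in a training pipeline this term would be minimized jointly with the adversarial and cycle-consistency losses.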

Published online November 1, 2023.

eISSN 2005-4092
pISSN 1598-6446