Selective Sensor Fusion Strategies for Depth Estimation in Foggy Environments
Sensor Fusion, Depth Estimation, Foggy Environments
Deep learning-based depth estimation has been studied extensively for perceiving and understanding the surrounding environment. However, each individual sensor has physical limitations, and its measurements are sensitive to scene characteristics and environmental conditions; as a result, depth estimation based on a single sensor modality is insufficient in many applications. To tackle this issue, the fusion of multiple sensor modalities has been studied as a promising solution, especially in foggy environments.
In this work, the student will investigate selective sensor fusion strategies for camera, LiDAR, and radar data under different fog concentrations using deep learning-based methods.
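As a starting point, selective fusion is often realized with a learned gating network that weights each sensor's feature map per pixel, so that unreliable modalities (e.g., the camera in dense fog) can be down-weighted. The following is a minimal, hypothetical PyTorch sketch of such soft-gated fusion; the module name, feature dimensions, and gating design are illustrative assumptions, not the required approach.

```python
import torch
import torch.nn as nn


class SelectiveFusion(nn.Module):
    """Illustrative soft-gated fusion of per-sensor feature maps (sketch)."""

    def __init__(self, channels: int, num_sensors: int = 3):
        super().__init__()
        # Hypothetical gating network: predicts one weight per sensor and
        # pixel from the concatenated features; softmax makes weights sum to 1.
        self.gate = nn.Sequential(
            nn.Conv2d(channels * num_sensors, num_sensors, kernel_size=1),
            nn.Softmax(dim=1),
        )

    def forward(self, feats):
        # feats: list of [B, C, H, W] tensors, one per sensor
        # (e.g., camera, LiDAR, radar feature maps).
        stacked = torch.stack(feats, dim=1)         # [B, S, C, H, W]
        gates = self.gate(torch.cat(feats, dim=1))  # [B, S, H, W]
        # Weight each sensor's features by its gate and sum over sensors.
        fused = (stacked * gates.unsqueeze(2)).sum(dim=1)  # [B, C, H, W]
        return fused, gates


# Example: fuse dummy camera / LiDAR / radar features.
cam, lidar, radar = (torch.randn(2, 16, 32, 32) for _ in range(3))
fusion = SelectiveFusion(channels=16)
fused, gates = fusion([cam, lidar, radar])
print(fused.shape)  # torch.Size([2, 16, 32, 32])
```

In a fog-robust variant, the gating input could additionally condition on an estimated fog density so the network learns which modality to trust at each concentration level.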
- High motivation to learn and conduct research
- Good programming skills in Python and PyTorch; familiarity with Linux
- Basic experience with deep learning and neural networks
(Please attach your CV and transcript to your application)