A Comprehensive Review of Multi-Modal Fusion Perception in Autonomous Driving
Introduction

Multi-modal fusion is a crucial task in the perception systems of autonomous driving. This article details multi-modal perception methods for autonomous driving, covering object detection and semantic segmentation tasks that involve LiDAR and cameras. From the perspective of the fusion stage, existing solutions are categorized into data-level, feature-level, object-level, and asymmetric fusion. Furthermore, …
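To make the fusion-stage taxonomy concrete, below is a minimal sketch of feature-level fusion, in which a camera feature map and a LiDAR feature map (assumed to be already projected onto a shared bird's-eye-view grid) are concatenated along the channel dimension and mixed by a small convolutional block. The module name `FeatureLevelFusion`, the tensor shapes, and the concatenation-plus-1x1-convolution design are illustrative assumptions, not a specific method from the surveyed literature.

```python
# Minimal sketch (assumption): feature-level fusion of camera and LiDAR
# feature maps that share a bird's-eye-view (BEV) grid. Shapes and the
# concatenation + 1x1 convolution design are illustrative only.
import torch
import torch.nn as nn


class FeatureLevelFusion(nn.Module):
    def __init__(self, cam_channels: int, lidar_channels: int, out_channels: int):
        super().__init__()
        # Fuse by concatenating along the channel axis, then mixing with a 1x1 conv.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # Both inputs are assumed to be (B, C, H, W) tensors on the same BEV grid.
        fused = torch.cat([cam_feat, lidar_feat], dim=1)
        return self.fuse(fused)


if __name__ == "__main__":
    cam = torch.randn(2, 64, 128, 128)    # camera features lifted to BEV (assumed shape)
    lidar = torch.randn(2, 32, 128, 128)  # voxelized LiDAR features in BEV (assumed shape)
    out = FeatureLevelFusion(64, 32, 128)(cam, lidar)
    print(out.shape)  # torch.Size([2, 128, 128, 128])
```

By contrast, data-level fusion would combine raw inputs (e.g., painting image pixels onto LiDAR points) before any network, while object-level fusion would merge independently produced detections from each sensor.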