Automotive radars are increasingly used in automated driving systems due to their cost effectiveness, ease of integration, and ability to withstand adverse weather conditions. Semantic segmentation of radar point clouds is a crucial pre-processing step whose output can feed almost all downstream radar tasks, such as detection and tracking. Radar point clouds are noisier than LiDAR point clouds due to sensor noise and multi-path propagation, which makes the segmentation task for radar more challenging. In this paper, we address segmentation of noisy radar point clouds in terms of ghost targets vs. real detections, moving vs. static objects, as well as semantic segmentation of moving road users. We demonstrate how these three tasks can be performed in a single, unified pipeline using an auto-labeled radar dataset. Our approach, called Real, Moving, and Semantic Segmentation Network (RMSNet), outputs point-wise labels for all three tasks simultaneously. On our dataset, RMSNet attains 82.5% IoU for real-detection segmentation, 66.9% IoU for moving-object segmentation, and 64.9% mIoU for semantic segmentation. We also conducted a live demonstration with the model on two different radars. The results show that the model runs inference in real time and generalizes well across sensors.
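To make the multi-task setup concrete, the sketch below shows one possible way to produce point-wise labels for all three tasks from a shared per-point feature extractor. This is a minimal illustration, not the RMSNet architecture: the class name, the input feature dimension, the number of semantic classes, and the plain MLP backbone are all assumptions made here for demonstration purposes only.

```python
import torch
import torch.nn as nn


class MultiTaskPointHead(nn.Module):
    """Illustrative multi-task head: a shared per-point encoder feeding three
    parallel classifiers (ghost vs. real, moving vs. static, semantic class
    of moving road users). Not the actual RMSNet design."""

    def __init__(self, in_dim: int = 4, hidden: int = 64, num_semantic: int = 5):
        super().__init__()
        # Shared per-point feature extractor (placeholder for a real backbone).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads producing point-wise logits.
        self.real_head = nn.Linear(hidden, 2)                  # ghost target vs. real detection
        self.moving_head = nn.Linear(hidden, 2)                # moving vs. static
        self.semantic_head = nn.Linear(hidden, num_semantic)   # moving road-user classes (assumed count)

    def forward(self, points: torch.Tensor):
        # points: (N, in_dim) radar detections, e.g. position, Doppler, RCS (assumed features).
        feats = self.encoder(points)
        return self.real_head(feats), self.moving_head(feats), self.semantic_head(feats)


if __name__ == "__main__":
    model = MultiTaskPointHead()
    pts = torch.randn(128, 4)  # dummy radar scan with 128 points
    real_logits, moving_logits, sem_logits = model(pts)
    print(real_logits.shape, moving_logits.shape, sem_logits.shape)
```

In such a setup, the three heads would typically be trained jointly with a weighted sum of per-task losses, so a single forward pass yields all three point-wise predictions at once.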