The multi-sensor fusion module fuses the detection results from multiple sensors (lidar, camera, and radar) to make the final detections more reliable. It is a post-processing stage: the fusion algorithm used is probabilistic fusion.
The module is implemented as the component `apollo::perception::fusion::MultiSensorFusionComponent`.
| Input channel | Type | Description |
|---|---|---|
| `/perception/inner/PrefusedObjects` | `apollo::perception::onboard::SensorFrameMessage` | per-sensor frames containing detected objects |
>Note: The input channel carries structure type data. The default trigger channel is `/perception/inner/PrefusedObjects`. The detailed input channel information is in the `modules/perception/multi_sensor_fusion/dag/multi_sensor_fusion.dag` file. By default, the upstream components feeding this component are `lidar_detection_tracking`, `camera_tracking`, and `radar_detection`. A sketch of the dag wiring is shown below.
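A dag file declares the component's shared library, class name, config file, and reader channels. The block below is an illustrative sketch of that wiring following Apollo's usual dag conventions; the library path and field values here are examples only, and the file named above is the authoritative source.

```
module_config {
  # Shared library and component class (paths/names here are illustrative).
  module_library : "modules/perception/multi_sensor_fusion/libmulti_sensor_fusion_component.so"
  components {
    class_name : "MultiSensorFusionComponent"
    config {
      name : "multi_sensor_fusion"
      config_file_path : "modules/perception/multi_sensor_fusion/conf/multi_sensor_fusion_config.pb.txt"
      # Trigger channel carrying per-sensor SensorFrameMessage structs.
      readers {
        channel : "/perception/inner/PrefusedObjects"
      }
    }
  }
}
```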
| Output channel | Type | Description |
|---|---|---|
| `/apollo/perception/obstacles` | `apollo::perception::PerceptionObstacles` | fused obstacle detection results |
>Note: The output channel carries proto type data. The detailed output channel information is in the `modules/perception/multi_sensor_fusion/conf/multi_sensor_fusion_config.pb.txt` file.
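Because the output is a standard proto message on a Cyber RT channel, any downstream node can subscribe to it. The snippet below is a minimal sketch (not part of the Apollo source) of such a listener; the node name `fusion_listener` is hypothetical, and the include path assumes the Apollo 10.0 `common_msgs` layout.

```cpp
// Minimal sketch of a downstream listener for the fused output
// (illustrative only; not part of the Apollo source tree).
#include <memory>

#include "cyber/cyber.h"
// Include path assumes the Apollo 10.0 common_msgs layout.
#include "modules/common_msgs/perception_msgs/perception_obstacle.pb.h"

int main(int argc, char* argv[]) {
  apollo::cyber::Init(argv[0]);
  // "fusion_listener" is a hypothetical node name chosen for this example.
  auto node = apollo::cyber::CreateNode("fusion_listener");

  // Subscribe to the fused obstacle list published by the fusion component.
  auto reader = node->CreateReader<apollo::perception::PerceptionObstacles>(
      "/apollo/perception/obstacles",
      [](const std::shared_ptr<apollo::perception::PerceptionObstacles>& msg) {
        AINFO << "Received " << msg->perception_obstacle_size()
              << " fused obstacles at timestamp "
              << msg->header().timestamp_sec();
      });

  apollo::cyber::WaitForShutdown();
  return 0;
}
```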
The multi-sensor fusion module does not support running alone; it must run together with the lidar, camera, and radar detection modules. You can use the following command to start the whole perception pipeline, including lidar, camera, and radar object detection, with the fused results published on the output channel.
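A typical way to do this is via `cyber_launch` with a launch file that aggregates all perception components. The launch-file path below is an assumption based on Apollo's usual layout; check your checkout for the actual file name.

```bash
# Assumed launch-file path following Apollo's usual layout; adjust to your tree.
cyber_launch start modules/perception/launch/perception_all.launch
```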