Most state-of-the-art robotic cars' perception systems differ substantially from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, whereas machine perception of traffic environments must fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments in which machine perception can easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometric and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in real urban scenes with our self-developed robotic cars for eight years. The experimental results validate its robustness and efficiency.
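To make the fusion idea concrete, below is a minimal Python sketch of such a pipeline, not the paper's implementation: every class and function name (VisionCenteredFusion, localize, perceive_obstacles, the grid-cell road map) is a hypothetical stand-in, and the geometric and semantic constraints are reduced to simple range and road-membership checks purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    """2-D vehicle pose: position in metres, heading in radians."""
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0


@dataclass
class Obstacle:
    """A detected obstacle with a coarse position and a semantic label."""
    x: float
    y: float
    label: str  # e.g. "vehicle", "pedestrian"


class VisionCenteredFusion:
    """Skeleton of a vision-centered fusion loop (illustrative only).

    Camera detections propose a pose and candidate obstacles; LIDAR
    supplies geometric constraints (here, a pose correction and range
    checks); a GIS road map supplies semantic constraints (obstacles
    should lie on the drivable road surface).
    """

    def __init__(self, road_map: set):
        # GIS prior: a set of drivable (x, y) grid cells.
        self.road_map = road_map

    def localize(self, camera_pose: Pose, lidar_offset: tuple) -> Pose:
        # Geometric constraint: correct the vision-based pose with a
        # LIDAR-derived offset (a stand-in for scan matching).
        dx, dy = lidar_offset
        return Pose(camera_pose.x + dx, camera_pose.y + dy, camera_pose.theta)

    def perceive_obstacles(self, detections: list, lidar_ranges: list) -> list:
        fused = []
        for det, rng in zip(detections, lidar_ranges):
            # Geometric check: discard detections with no LIDAR support.
            if rng <= 0.0:
                continue
            # Semantic check: keep obstacles on the GIS road surface.
            cell = (round(det.x), round(det.y))
            if cell in self.road_map:
                fused.append(det)
        return fused


if __name__ == "__main__":
    road = {(0, 0), (1, 0), (2, 0)}
    fusion = VisionCenteredFusion(road)
    pose = fusion.localize(Pose(0.9, 0.1), lidar_offset=(0.1, -0.1))
    obstacles = fusion.perceive_obstacles(
        detections=[Obstacle(1.0, 0.0, "vehicle"), Obstacle(5.0, 5.0, "noise")],
        lidar_ranges=[12.3, 40.0],
    )
    print(pose, obstacles)  # the off-road "noise" detection is filtered out
```

In this toy version, the camera is the central source (it proposes both the pose and the obstacles) while LIDAR and GIS act only as validating constraints, which mirrors the vision-centered role described in the abstract; the real framework would replace each check with proper scan matching, calibration, and map queries.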