Abstract
Environmental perception is a key task for driverless vehicles at night. An improved YOLOv3 network is proposed to detect pedestrians and vehicles in infrared images captured by a driverless vehicle at night. The problem of judging the driving direction of surrounding vehicles is transformed into predicting the angle of each vehicle's position, and the prediction is fused with depth estimation information to judge the distance and speed of surrounding vehicles, so that the driverless vehicle can perceive the driving intentions of surrounding vehicles at night. The network is end-to-end: the entire image is taken as input, and the bounding-box positions, classes, and vehicle angle predictions of the detected targets are regressed directly at the output layer, then combined with the depth estimation information to obtain the distance and speed of surrounding vehicles. Experimental results show that the improved YOLOv3 network detects targets in nighttime infrared images at 0.04 s per frame, angle and speed prediction perform well, and the accuracy and real-time performance meet the requirements of practical application.
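The fusion step described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a per-pixel depth map from a depth-estimation network and a detected vehicle's bounding box, takes the median depth inside the box as the vehicle's distance, and derives relative speed from the distance change between consecutive frames. All function names and the median-depth heuristic are assumptions for illustration.

```python
# Hypothetical sketch of fusing detection results with depth estimation
# to obtain distance and speed of a surrounding vehicle.
import numpy as np

def vehicle_distance(depth_map: np.ndarray, bbox: tuple) -> float:
    """Median depth inside the bounding box, a robust distance estimate."""
    x1, y1, x2, y2 = bbox
    region = depth_map[y1:y2, x1:x2]
    return float(np.median(region))

def vehicle_speed(dist_prev: float, dist_curr: float, frame_interval: float) -> float:
    """Relative speed (m/s) from the distance change between two frames."""
    return (dist_curr - dist_prev) / frame_interval

# Toy usage with synthetic depth maps; 0.04 s per frame matches the
# reported per-frame detection time (25 fps).
depth_t0 = np.full((480, 640), 8.0)   # vehicle at 8.0 m in frame t0
depth_t1 = np.full((480, 640), 7.8)   # vehicle at 7.8 m in frame t1
box = (100, 200, 300, 400)            # (x1, y1, x2, y2) from the detector
d0 = vehicle_distance(depth_t0, box)
d1 = vehicle_distance(depth_t1, box)
speed = vehicle_speed(d0, d1, 0.04)   # negative value: vehicle approaching
```

In practice the distance statistic and the frame interval would come from the depth network and the camera, respectively; the sketch only shows how the two information streams combine.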
Keywords
infrared image /
target detection /
angle prediction /
YOLOv3 network /
depth estimation
Funding
Shanghai Science and Technology Commission Basic Research Project (15JC1400600); National Natural Science Foundation of China Youth Program (61603089); Shanghai Sailing Program for Young Scientific and Technological Talents (16YF1400100)