
Research on Motion Blur Object Detection Technology for Imaging Guidance
ZHAO Chunbo, MO Bo, LI Dawei, ZHAO Jie
Sponsored by: China Association for Science and Technology (CAST)
Editor-In-Chief: Xu Yida
ISSN 1000-1093
Hosted By: China Ordnance Society
Published By: Acta Armamentarii
CN 11-2176/TJ
To enhance the accuracy and efficiency of object detection in motion-blurred images for missile-borne imaging guidance, this paper proposes a lighter and more effective motion-blurred image object detection (LEMBD) network. The causes of motion blur are analyzed, and a dedicated motion-blurred image dataset is constructed based on the imaging mechanism. Without increasing the number of network parameters, a shared-weight siamese network design is adopted, and prior knowledge from feature learning on clear images is introduced to guide feature extraction from blurred images, enabling simultaneous detection in both clear and blurred images. In addition, partial depthwise separable convolutions are introduced to replace standard convolutions, which significantly reduces the parameter count and computational cost while enhancing learning performance. To further improve feature fusion quality, a cross-layer path aggregation feature pyramid network is designed to effectively leverage both the detail information of low-level features and the semantic information of high-level features. Experimental results demonstrate that the proposed LEMBD network outperforms both conventional object detectors and state-of-the-art motion-blur detection methods on targets in motion-blurred images, and can thus provide more accurate relative position information for precision guidance tasks.
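The shared-weight siamese idea described above can be illustrated with a minimal sketch: a single backbone (here a toy two-layer CNN, purely an assumption for illustration) processes both a clear image and its motion-blurred counterpart, so no extra parameters are introduced, and a feature-consistency loss transfers what was learned on clear images to the blurred branch. This is not the paper's exact architecture, only the weight-sharing mechanism it relies on.

```python
import torch
import torch.nn as nn

# Toy backbone standing in for the paper's feature extractor (an assumption).
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)

clear = torch.rand(1, 3, 64, 64)
blurred = torch.rand(1, 3, 64, 64)   # stand-in for the motion-blurred version

f_clear = backbone(clear)
f_blur = backbone(blurred)           # same module, same weights: a siamese pair

# Clear-image features act as prior knowledge: detach them so only the
# blurred branch is pulled toward them during training.
consistency = nn.functional.mse_loss(f_blur, f_clear.detach())
```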
Keywords: accurate object detection; motion blur; lightweight; partial depthwise separable convolution; cross-layer path aggregation feature pyramid network
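The parameter savings from replacing a standard convolution with a partial depthwise separable convolution can be sketched as follows. The module below is a hypothetical reading of the technique (the exact split ratio and layer ordering are assumptions, not the paper's definition): a depthwise 3×3 convolution is applied to only a fraction of the channels, the rest pass through untouched, and a 1×1 pointwise convolution then mixes all channels.

```python
import torch
import torch.nn as nn

class PartialDWSeparableConv(nn.Module):
    """Hypothetical sketch of a partial depthwise separable convolution:
    depthwise 3x3 on a slice of the channels, identity on the rest,
    then a 1x1 pointwise conv over all channels."""
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        self.part = int(channels * ratio)   # channels that get convolved
        self.dw = nn.Conv2d(self.part, self.part, 3, padding=1,
                            groups=self.part, bias=False)   # depthwise slice
        self.pw = nn.Conv2d(channels, channels, 1, bias=False)  # channel mix

    def forward(self, x):
        a, b = x[:, :self.part], x[:, self.part:]
        return self.pw(torch.cat([self.dw(a), b], dim=1))

std = nn.Conv2d(64, 64, 3, padding=1, bias=False)
pds = PartialDWSeparableConv(64)
n_std = sum(p.numel() for p in std.parameters())   # 64*64*3*3 = 36864
n_pds = sum(p.numel() for p in pds.parameters())   # 16*3*3 + 64*64 = 4240
```

At 64 channels the sketch needs roughly one tenth of the standard convolution's weights, which is consistent in spirit with the ~40% whole-network parameter drop reported in the ablation study below.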
Table 1 CPU/GPU and related parameters

| CPU parameter | Value | GPU parameter | Value |
|---|---|---|---|
| Model | Intel(R) Core(TM) i9-7980XE | Model | NVIDIA GeForce RTX 3090 |
| Base clock | 2.60 GHz | Memory | 24.0 GB |
| Cores | 18 | PyTorch version | 1.10.1 |
| Logical processors | 36 | CUDA version | 11.3 |
Table 2 Ablation experiments of the motion-blurred image object detection network

| PIDSFENet | PDS | EPAFPN | CLPAFPN | Car | Cargo truck | Truck | Bus | Van | Recall | mAP0.5/% | Precision | Params/10⁶ | FLOPs/10⁹ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| × | × | × | × | 85.7 | 42.7 | 56.7 | 90.6 | 44.5 | 61.1 | 64.1 | 65.4 | 2.01 | 5.0 |
| √ | × | × | × | 88.7 | 50.4 | 62.4 | 92.2 | 48.0 | 63.7 ↑2.6 | 68.3 ↑4.2 | 71.3 ↑5.9 | 2.01 | 5.0 |
| √ | √ | × | × | 89.0 | 47.2 | 64.7 | 92.8 | 49.6 | 62.3 ↑1.2 | 68.7 ↑4.6 | 75.1 ↑9.7 | 1.19 ↓40.5% | 3.3 ↓36% |
| √ | √ | √ | × | 88.0 | 44.0 | 58.5 | 89.7 | 46.9 | 62.5 ↑1.4 | 65.4 ↑1.3 | 67.9 ↑2.5 | 1.28 ↓36% | 6.4 ↑28% |
| √ | √ | × | √ | 88.5 | 47.2 | 59.1 | 90.5 | 48.4 | 62.0 ↑0.9 | 66.7 ↑2.6 | 71.8 ↑6.4 | 1.55 ↓22.5% | 8.1 ↑62% |

Note: boldface values indicate the best result; √ means the module is enabled, × means it is disabled.
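The CLPAFPN column in the ablation table refers to the cross-layer path aggregation feature pyramid network. Its core idea, routing detail-rich low-level features directly to higher pyramid levels rather than only through adjacent stages, can be sketched as below. This is an illustrative toy (the layer names, channel widths, and downsampling choices are all assumptions), not the paper's exact CLPAFPN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerFusion(nn.Module):
    """Illustrative cross-layer aggregation: the lowest-level feature map is
    also routed straight to the top level, skipping the intermediate stage."""
    def __init__(self, c):
        super().__init__()
        self.fuse_mid = nn.Conv2d(2 * c, c, 1)      # mid + downsampled low
        self.fuse_top = nn.Conv2d(3 * c, c, 1)      # top + mid + cross-layer low

    def forward(self, low, mid, top):
        # Bottom-up path: low-level detail flows into the middle level.
        m = self.fuse_mid(torch.cat([mid, F.max_pool2d(low, 2)], 1))
        # Cross-layer shortcut: low is downsampled 4x straight to the top.
        t = self.fuse_top(torch.cat([top, F.max_pool2d(m, 2),
                                     F.max_pool2d(low, 4)], 1))
        return m, t

low, mid, top = (torch.zeros(1, 32, s, s) for s in (64, 32, 16))
m, t = CrossLayerFusion(32)(low, mid, top)
```

The shortcut gives the top (semantic) level direct access to low-level detail, the trade-off the abstract describes between detail and semantic information.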
Table 3 Comparative experiments with similar algorithms for object detection in motion-blurred images

| Algorithm | Recall | mAP0.5/% | Precision | Params/10⁶ | FLOPs/10⁹ | FPS/(frame·s⁻¹) |
|---|---|---|---|---|---|---|
| Spec By Exposure | 53.0 | 64.9 | 41.5 | 67.3 | 20.9 | |
| TPH-YOLOv5++ | 67.4 | 74.6 | 79.9 | 7.18 | 29.9 | 59.5 |
| LEMBD | 69.6 | 75.2 | 79.0 | 4.90 | 17.8 | 55.5 |

Note: boldface values indicate the best result.
Table 4 Detection results for targets of different sizes and environments

| Scenario | Ground truth | Baseline algorithm | Proposed algorithm |
|---|---|---|---|
| Small targets | (image) | (image) | (image) |
| Partially missing targets | (image) | (image) | (image) |
| Targets in dark environments | (image) | (image) | (image) |
| | (image) | (image) | (image) |

Note: red circles mark missed detections; yellow circles mark false detections.