Ship Detection in SAR Images Based on Generative Adversarial Network and Online Hard Example Mining

Jianwei LI, Changwen QU, Shujuan PENG, Yuan JIANG

Citation: Jianwei LI, Changwen QU, Shujuan PENG, Yuan JIANG. Ship detection in SAR images based on generative adversarial network and online hard example mining[J]. Journal of Electronics & Information Technology, 2019, 41(1): 143-149. doi: 10.11999/JEIT180050

doi: 10.11999/JEIT180050 cstr: 32379.14.JEIT180050
Funds: The National Natural Science Foundation of China (61571454)
Details
    About the authors:

    Jianwei LI: male, born in 1989, Ph.D. candidate; research interests include SAR image processing, machine learning, and deep learning

    Changwen QU: male, born in 1964, professor; research interests include radar signal processing, information countermeasures, and signal and information processing

    Shujuan PENG: female, born in 1980, Ph.D. candidate; research interest is SAR image processing

    Corresponding author: Jianwei LI, lgm_jw@163.com

  • CLC number: TN957.51

  • Abstract:

    Deep-learning-based ship detection algorithms for SAR images place high demands on both the quantity and quality of the images, while collecting large volumes of ship SAR images and producing the corresponding labels consumes considerable manpower, material, and financial resources. Building on the existing SAR Ship Detection Dataset (SSDD), and to address the problem that current detection algorithms do not exploit the dataset fully, this paper proposes a ship detection method for SAR images based on a Generative Adversarial Network (GAN) and Online Hard Example Mining (OHEM). A spatial transformer network applies transformations on the feature maps to generate feature maps of ship samples with different sizes and rotation angles, which improves the detector's adaptability to ship targets of different sizes and rotation angles. OHEM is used to discover and fully exploit hard examples during back-propagation, removing the restriction on the positive-to-negative sample ratio and improving sample utilization. Experiments on the SSDD dataset show that the two improvements raise detection performance by 1.3% and 1.0% respectively, and by 2.1% when combined. Neither method depends on a specific detection algorithm; both add steps only during training and add no computation at test time, so they are highly general and practical.
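
    The adversarial spatial-transformer idea described above warps RoI feature maps so the detector trains on ship features at sizes and rotation angles it handles poorly. The paper does not publish code, so the following is a minimal PyTorch sketch under that assumption; the module name AdversarialSTN, the localisation-network layout, and the angle/scale parameterisation are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdversarialSTN(nn.Module):
    """Illustrative adversarial spatial transformer: predicts a rotation
    angle and a scale from a pooled RoI feature map, then warps that map
    with an affine grid so downstream layers see a resized/rotated ship."""
    def __init__(self, channels=512, pooled_size=7):
        super().__init__()
        self.loc = nn.Sequential(              # localisation network
            nn.Flatten(),
            nn.Linear(channels * pooled_size ** 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),                 # -> [raw angle, raw scale]
        )

    def forward(self, feat):                   # feat: (N, C, 7, 7)
        raw = self.loc(feat)
        angle = torch.tanh(raw[:, 0]) * torch.pi         # rotation in (-pi, pi)
        scale = torch.exp(0.5 * torch.tanh(raw[:, 1]))   # scale in ~(0.61, 1.65)
        cos, sin = torch.cos(angle), torch.sin(angle)
        zeros = torch.zeros_like(angle)
        # 2x3 affine matrices: rotation combined with isotropic scaling
        theta = torch.stack([
            torch.stack([scale * cos, -scale * sin, zeros], dim=1),
            torch.stack([scale * sin,  scale * cos, zeros], dim=1),
        ], dim=1)                              # (N, 2, 3)
        grid = F.affine_grid(theta, feat.size(), align_corners=False)
        return F.grid_sample(feat, grid, align_corners=False)
```

    Trained adversarially, the localisation network would be rewarded when the warped features raise the detector's loss, while the detector is trained to classify them correctly.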

  • Figure 1  Schematic of Fast R-CNN

    Figure 2  Schematic of the adversarial spatial transformer network

    Figure 3  Online hard example mining pipeline (K = 64); see the sketch after this list

    Figure 4  Detection results for ships of different scales and angles in the SSDD dataset
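
    Figure 3 and the abstract describe OHEM as: run all RoIs through a forward pass, rank them by loss, and back-propagate only through the K = 64 hardest, with no fixed positive-to-negative ratio. Below is a minimal sketch of that selection step, assuming per-RoI losses are already available as tensors; the function name ohem_select is illustrative.

```python
import torch

def ohem_select(cls_loss, reg_loss, k=64):
    """Keep only the k RoIs with the highest total loss; gradients then
    flow through the hard examples alone, with no constraint on the
    positive-to-negative sample ratio."""
    per_roi = cls_loss + reg_loss              # (num_rois,) total loss per RoI
    k = min(k, per_roi.numel())
    _, hard = per_roi.topk(k)                  # indices of the hardest RoIs
    return per_roi[hard].mean()                # reduced loss to back-propagate

# Toy usage: per-RoI losses must be computed with reduction='none'.
cls_loss = torch.rand(128)                     # stand-in classification losses
reg_loss = torch.rand(128)                     # stand-in regression losses
loss = ohem_select(cls_loss, reg_loss, k=64)   # loss.backward() trains on hard RoIs only
```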

    Table 1  Detection performance of the four methods

    Method                             mAP (%)   Training time (s)   Test time (s)
    Standard Fast R-CNN                68.0      0.610               0.328
    Standard Fast R-CNN + GAN          69.4      0.823               0.326
    Standard Fast R-CNN + OHEM         69.1      1.152               0.321
    Standard Fast R-CNN + GAN + OHEM   70.2      2.109               0.330
Publication history
  • Received:  2018-01-15
  • Revised:  2018-09-26
  • Published online:  2018-10-22
  • Issue date:  2019-01-01
