

An Anchor-free Method Based on Context Information Fusion and Interacting Branch for Ship Detection in SAR Images

QU Haicheng, GAO Jiankang, LIU Wanjun, WANG Xiaona

Citation: QU Haicheng, GAO Jiankang, LIU Wanjun, WANG Xiaona. An Anchor-free Method Based on Context Information Fusion and Interacting Branch for Ship Detection in SAR Images[J]. Journal of Electronics & Information Technology, 2022, 44(1): 380-389. doi: 10.11999/JEIT201059


doi: 10.11999/JEIT201059 cstr: 32379.14.JEIT201059
Funds: The Young Scientists Fund of the National Natural Science Foundation of China (41701479), the Department of Education Fund of Liaoning Province (LJ2019JL010), the Discipline Innovation Team of Liaoning Technical University (LNTU20TD-23)
Detailed information
    About the authors:

    QU Haicheng: male, born 1981, associate professor; research interests: high-performance computing for remote sensing imagery, visual information computing, object detection and recognition

    GAO Jiankang: male, born 1996, M.S. candidate; research interest: object detection in remote sensing images

    LIU Wanjun: male, born 1959, professor; research interests: digital image processing, moving object detection and tracking

    WANG Xiaona: female, born 1994, M.S. candidate; research interest: digital image processing

    Corresponding author:

    GAO Jiankang, gjk_0825@163.com

  • 1) SSDD dataset download: https://zhuanlan.zhihu.com/p/143794468
  • 2) SAR-Ship-Dataset download: https://pan.baidu.com/s/1PhSMkXVcuRM8M8xL15iBIQ
  • CLC number: TN911.73; TP751

An Anchor-free Method Based on Context Information Fusion and Interacting Branch for Ship Detection in SAR Images

Funds: The Young Scientists Fund of the National Natural Science Foundation of China (41701479), the Department of Education Fund of Liaoning Province (LJ2019JL010), the Discipline Innovation Team of Liaoning Technical University (LNTU20TD-23)
  • Abstract: The sparse distribution of ship targets in SAR images and the design of anchor boxes strongly affect the accuracy and generalization of existing anchor-based SAR target detection methods. This paper therefore proposes an anchor-free ship detection method for SAR images based on context information fusion and branch interaction, named CI-Net. Considering the diversity of ship scales in SAR images, a context fusion module is designed in the feature extraction stage to fuse high- and low-level information in a bottom-up manner and refine the extracted features with target context information. Second, to address insufficient localization accuracy in complex scenes, a branch interaction module is proposed: in the detection stage, the classification branch is used to optimize the bounding boxes produced by the regression branch, improving localization precision, while a newly added IoU branch acts on the classification branch to raise classification confidence and suppress low-quality detection boxes. Experimental results show that the proposed method performs well on the public SSDD and SAR-Ship-Dataset datasets, reaching average precision (AP) of 92.56% and 88.32%, respectively. Compared with other SAR ship detection methods, the proposed method not only excels in accuracy but, having discarded the complex computation associated with anchor boxes, also achieves a faster detection speed, which is of practical significance for real-time SAR target detection.
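The IoU-branch idea described in the abstract, i.e. rescoring classification confidence with a predicted localization-quality score so that low-quality boxes are suppressed, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `iou` and `fused_score` are hypothetical helper names, and the geometric-mean fusion is an assumption (the exact fusion operation is not given in this excerpt).

```python
import math

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def fused_score(cls_score, iou_pred):
    """Rescore a detection by combining classification confidence with
    predicted localization quality (geometric mean: an assumption)."""
    return math.sqrt(cls_score * iou_pred)
```

With this rescoring, a box with high class confidence but poor predicted localization quality, e.g. `fused_score(0.9, 0.2)`, ranks below an equally confident but well-localized box, `fused_score(0.9, 0.9)`, which is the suppression effect the abstract attributes to the IoU branch.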
  • Figure 1. Anchor-free detection model

    Figure 2. Framework of the CI-Net detection model

    Figure 3. Context fusion module

    Figure 4. GCNet structure

    Figure 5. Self-attention module

    Figure 6. Comparison of detection results

    Figure 7. Feature visualization of the context fusion module

    Figure 8. P-R curves of different methods

    Table 1. Basic information of the ship datasets

    Dataset | Sensor source | Spatial resolution (m) | Polarization | Input image size | Scene
    SSDD | RadarSat-2, TerraSAR-X, Sentinel-1 | 1~15 | VV, HH, VH, HV | 500×500 | offshore and inshore areas
    SAR-Ship-Dataset | GF-3, Sentinel-1 | 3, 5, 8, 10, etc. | VV, HH, VH, HV | 256×256 | open-sea areas

    Table 2. Ablation results of the model

    Method | Context fusion (CF) | Branch interaction (IB) | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps
    FCOS[14] | × | × | 88.64 | 88.44 | 86.27 | 88.54 | 23
    Ours | √ | × | 92.23 | 86.60 | 90.69 | 89.32 | 29
    FCOS[14] | × | √ | 90.31 | 93.41 | 88.42 | 91.83 | 22
    Ours | √ | √ | 94.27 | 92.04 | 92.56 | 93.14 | 28
    Note: “×” means the module is not used; “√” means it is used. Bold values are the best in each column.
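The F1 column in the tables is the standard harmonic mean of precision and recall; a quick sketch (with a hypothetical helper name `f1_score`) reproduces the reported F1 values from the precision and recall columns:

```python
def f1_score(precision_pct, recall_pct):
    """Harmonic mean of precision and recall, both given in percent."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

# Table 2, FCOS baseline row: precision 88.44%, recall 88.64%
print(round(f1_score(88.44, 88.64), 2))  # 88.54, as reported
# Table 2, full CI-Net row: precision 92.04%, recall 94.27%
print(round(f1_score(92.04, 94.27), 2))  # 93.14, as reported
```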

    Table 3. Detection performance of different methods on the SSDD dataset

    Method | Single-stage | Anchor-free | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps
    Faster R-CNN | × | × | 85.39 | 84.18 | 83.07 | 84.78 | 11
    RetinaNet | √ | × | 89.40 | 90.43 | 87.94 | 89.91 | 16
    DCMSNN | × | × | 91.59 | 88.33 | 89.34 | 89.93 | 8
    CI-Net (ours) | √ | √ | 94.27 | 92.04 | 92.56 | 93.14 | 28

    Table 4. Detection performance of different methods on SAR-Ship-Dataset

    Method | Single-stage | Anchor-free | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps
    Faster R-CNN | × | × | 84.30 | 84.47 | 81.77 | 84.39 | 13
    RetinaNet | √ | × | 84.60 | 85.83 | 82.02 | 85.21 | 21
    DCMSNN | × | × | 86.64 | 88.07 | 84.36 | 87.35 | 9
    CI-Net (ours) | √ | √ | 90.28 | 88.14 | 88.32 | 89.20 | 34
  • [1] YANG Guozheng, YU Jing, XIAO Chuangbai, et al. Ship wake detection in SAR images with complex background using morphological dictionary learning[J]. Acta Automatica Sinica, 2017, 43(10): 1713–1725. doi: 10.16383/j.aas.2017.c160274
    [2] LI Jianwei, QU Changwen, PENG Shujuan, et al. Ship detection in SAR images based on generative adversarial network and online hard examples mining[J]. Journal of Electronics & Information Technology, 2019, 41(1): 143–149. doi: 10.11999/JEIT180050
    [3] HOU Biao, CHEN Xingzhong, and JIAO Licheng. Multilayer CFAR detection of ship targets in very high resolution SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(4): 811–815. doi: 10.1109/LGRS.2014.2362955
    [4] LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6. doi: 10.1109/BIGSARDATA.2017.8124934.
    [5] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [6] JIAO Jiao, ZHANG Yue, SUN Hao, et al. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection[J]. IEEE Access, 2018, 6: 20881–20892. doi: 10.1109/ACCESS.2018.2825376
    [7] HU Changhua, CHEN Chen, HE Chuan, et al. SAR detection for small target ship based on deep convolutional neural network[J]. Journal of Chinese Inertial Technology, 2019, 27(3): 397–405, 414. doi: 10.13695/j.cnki.12-1222/o3.2019.03.018
    [8] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 936–944. doi: 10.1109/CVPR.2017.106.
    [9] CUI Zongyong, LI Qi, CAO Zongjie, et al. Dense attention pyramid networks for multi-scale ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8983–8997. doi: 10.1109/TGRS.2019.2923988
    [10] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, Netherlands, 2016: 21–37. doi: 10.1007/978-3-319-46448-0_2.
    [11] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 779–788. doi: 10.1109/CVPR.2016.91.
    [12] SHRIVASTAVA A, GUPTA A, and GIRSHICK R. Training region-based object detectors with online hard example mining[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 761–769. doi: 10.1109/CVPR.2016.89.
    [13] DUAN Kaiwen, BAI Song, XIE Lingxi, et al. CenterNet: Keypoint triplets for object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 6568–6577. doi: 10.1109/ICCV.2019.00667.
    [14] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: Fully convolutional one-stage object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 9626–9635. doi: 10.1109/ICCV.2019.00972.
    [15] PANG Jiangmiao, CHEN Kai, SHI Jianping, et al. Libra R-CNN: Towards balanced learning for object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 821–830. doi: 10.1109/CVPR.2019.00091.
    [16] CAO Yue, XU Jiarui, LIN S, et al. GCNet: Non-local networks meet squeeze-excitation networks and beyond[C]. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, South Korea, 2019: 1971–1980. doi: 10.1109/ICCVW.2019.00246.
    [17] WANG Xiaolong, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7794–7803. doi: 10.1109/CVPR.2018.00813.
    [18] HU Jie, SHEN Li, and SUN Gang. Squeeze-and-excitation networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7132–7141. doi: 10.1109/CVPR.2018.00745.
    [19] LI Huan and TANG Jinglei. Dairy goat image generation based on improved-self-attention generative adversarial networks[J]. IEEE Access, 2020, 8: 62448–62457. doi: 10.1109/ACCESS.2020.2981496
    [20] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327. doi: 10.1109/TPAMI.2018.2858826
    [21] WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. A SAR dataset of ship detection for deep learning under complex backgrounds[J]. Remote Sensing, 2019, 11(7): 765. doi: 10.3390/rs11070765
    [22] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
    [23] KANG Miao, JI Kefeng, LENG Xiangguang, et al. Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection[J]. Remote Sensing, 2017, 9(8): 860. doi: 10.3390/rs9080860
Figures(8) / Tables(4)
Publication history
  • Received: 2020-12-16
  • Revised: 2021-05-27
  • Available online: 2021-08-27
  • Published: 2022-01-10
