

基于半監(jiān)督學(xué)習(xí)的SAR目標(biāo)檢測網(wǎng)絡(luò)

杜蘭 魏迪 李璐 郭昱辰

杜蘭, 魏迪, 李璐, 郭昱辰. 基于半監(jiān)督學(xué)習(xí)的SAR目標(biāo)檢測網(wǎng)絡(luò)[J]. 電子與信息學(xué)報(bào), 2020, 42(1): 154-163. doi: 10.11999/JEIT190783
引用本文: 杜蘭, 魏迪, 李璐, 郭昱辰. 基于半監(jiān)督學(xué)習(xí)的SAR目標(biāo)檢測網(wǎng)絡(luò)[J]. 電子與信息學(xué)報(bào), 2020, 42(1): 154-163. doi: 10.11999/JEIT190783
Lan DU, Di WEI, Lu LI, Yuchen GUO. SAR Target Detection Network via Semi-supervised Learning[J]. Journal of Electronics & Information Technology, 2020, 42(1): 154-163. doi: 10.11999/JEIT190783
Citation: Lan DU, Di WEI, Lu LI, Yuchen GUO. SAR Target Detection Network via Semi-supervised Learning[J]. Journal of Electronics & Information Technology, 2020, 42(1): 154-163. doi: 10.11999/JEIT190783

基于半監(jiān)督學(xué)習(xí)的SAR目標(biāo)檢測網(wǎng)絡(luò)

doi: 10.11999/JEIT190783 cstr: 32379.14.JEIT190783
基金項(xiàng)目: 國家自然科學(xué)基金(61771362, U1833203, 61671354),高等學(xué)校學(xué)科創(chuàng)新引智計(jì)劃(B18039),陜西省重點(diǎn)科技創(chuàng)新團(tuán)隊(duì)計(jì)劃
詳細(xì)信息
    作者簡介:

    杜蘭:女,1980年生,教授,博士生導(dǎo)師,研究方向?yàn)榻y(tǒng)計(jì)信號(hào)處理、雷達(dá)信號(hào)處理、機(jī)器學(xué)習(xí)及其在雷達(dá)目標(biāo)檢測與識(shí)別方面的應(yīng)用

    魏迪:男,1995年生,碩士生,研究方向?yàn)槔走_(dá)目標(biāo)檢測與識(shí)別,機(jī)器學(xué)習(xí)等

    李璐:女,1992年生,博士生,研究方向?yàn)闄C(jī)器學(xué)習(xí)和目標(biāo)檢測識(shí)別等

    郭昱辰:男,1992年生,博士生,研究方向?yàn)槔走_(dá)目標(biāo)檢測與識(shí)別

    通訊作者:

    杜蘭 dulan@mail.xidian.edu.cn

  • 中圖分類號(hào): TN957.51

  • Abstract:

    Existing convolutional neural network (CNN) based target detection algorithms for synthetic aperture radar (SAR) images rely on a large number of slice-level labeled training samples, but slice-level labeling of SAR images is costly in manpower and resources. Compared with slice-level labels, image-level labels, which only indicate whether an image contains targets, are much easier to obtain. This paper proposes a CNN-based semi-supervised SAR image target detection method that uses a small number of slice-level labeled samples together with a large number of image-level labeled samples. The target detection network consists of a region proposal network and a detection network. During semi-supervised training, the target detection network is first trained with the slice-level labeled samples, and the candidate slices output after convergence form the candidate region set. The image-level labeled clutter samples are then fed into the network, and the output negative slices are added to the candidate region set. Next, the image-level labeled target samples are also fed into the network, and the positive and negative slices in the output are selected and added to the candidate region set. Finally, the detection network is trained with the updated candidate region set. Updating the candidate region set and training the detection network alternate until convergence. Experimental results on measured data demonstrate that the performance of the proposed method is close to that of the fully supervised method with slice-level labels for all samples.

  • Figure 1. The semi-supervised SAR image target detection method

    Figure 2. The feature extraction network

    Figure 3. Examples from the datasets

    Figure 4. MiniSAR dataset: detection results of Gaussian-CFAR

    Figure 5. MiniSAR dataset: detection results of Faster R-CNN with a small portion of slice-level labels

    Figure 6. MiniSAR dataset: detection results of Faster R-CNN with all slice-level labels

    Figure 7. MiniSAR dataset: detection results of the method in Ref. [14]

    Figure 8. MiniSAR dataset: detection results of the method in Ref. [15]

    Figure 9. MiniSAR dataset: detection results of the proposed method

    Figure 10. FARADSAR dataset: detection results of Gaussian-CFAR

    Figure 11. FARADSAR dataset: detection results of Faster R-CNN with a small portion of slice-level labels

    Figure 12. FARADSAR dataset: detection results of Faster R-CNN with all slice-level labels

    Figure 13. FARADSAR dataset: detection results of the method in Ref. [14]

    Figure 14. FARADSAR dataset: detection results of the method in Ref. [15]

    Figure 15. FARADSAR dataset: detection results of the proposed method

    Table 1. Experimental results of different schemes

    Negative bags   Selected slices              P        R        F1-score
    0               Positive slices              0.6397   0.7500   0.6905
                    Negative slices              0.8833   0.4569   0.6023
                    Positive + negative slices   0.7387   0.7069   0.7225
    10              Positive slices              0.6797   0.7500   0.7131
                    Negative slices              0.7917   0.4914   0.6064
                    Positive + negative slices   0.7573   0.6724   0.7123
    20              Positive slices              0.7658   0.7328   0.7489
                    Negative slices              0.8382   0.4914   0.6196
                    Positive + negative slices   0.8137   0.7155   0.7615
    30              Positive slices              0.8202   0.6293   0.7122
                    Negative slices              0.8413   0.4569   0.5922
                    Positive + negative slices   0.8675   0.6207   0.7236
    40              Positive slices              0.8111   0.6293   0.7087
                    Negative slices              0.8667   0.4483   0.5909
                    Positive + negative slices   0.8352   0.6552   0.7343
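As a quick consistency check on the tables: the F1-score is the harmonic mean of precision P and recall R, F1 = 2PR/(P+R). For example, the 20-negative-bag, positive + negative slices row of Table 1 (P = 0.8137, R = 0.7155) reproduces the listed F1-score of 0.7615 up to the rounding of P and R. The function name `f1_score` below is illustrative.

```python
def f1_score(p, r):
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Table 1, 20 negative bags, positive + negative slices:
# prints a value within rounding of the listed 0.7615.
print(round(f1_score(0.8137, 0.7155), 4))
```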

    Table 2. Experimental results of different methods

                                                 MiniSAR dataset             FARADSAR dataset
    Method                                       P        R        F1        P        R        F1
    Gaussian-CFAR                                0.3789   0.7966   0.5135    0.2813   0.4671   0.3512
    Faster R-CNN, partial slice-level labels     0.6455   0.6121   0.6283    0.7370   0.8813   0.8027
    Faster R-CNN, all slice-level labels         0.8073   0.7586   0.7822    0.7760   0.9479   0.8534
    Method in Ref. [14]                          0.5814   0.9806   0.7285    0.4506   0.7325   0.5580
    Method in Ref. [15]                          0.4699   0.7480   0.5772    0.3744   0.7945   0.5090
    Proposed method                              0.8137   0.7155   0.7615    0.8035   0.8813   0.8406
  • [1] NOVAK L M, BURL M C, and IRVING W W. Optimal polarimetric processing for enhanced target detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 1993, 29(1): 234–244. doi: 10.1109/7.249129
    [2] XING X W, CHEN Z L, ZOU H X, et al. A fast algorithm based on two-stage CFAR for detecting ships in SAR images[C]. The 2nd Asian-Pacific Conference on Synthetic Aperture Radar, Xi'an, China, 2009: 506–509. doi: 10.1109/APSAR.2009.5374119
    [3] LENG Xiangguang, JI Kefeng, YANG Kai, et al. A bilateral CFAR algorithm for ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(7): 1536–1540. doi: 10.1109/LGRS.2015.2412174
    [4] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. doi: 10.1109/5.726791
    [5] HINTON G E and SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504–507. doi: 10.1126/science.1127647
    [6] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
    [7] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv: 1409.1556, 2014.
    [8] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1–9. doi: 10.1109/CVPR.2015.7298594
    [9] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90
    [10] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587. doi: 10.1109/CVPR.2014.81
    [11] GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448. doi: 10.1109/ICCV.2015.169
    [12] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]. The 28th International Conference on Neural Information Processing Systems, Montréal, Canada, 2015: 91–99.
    [13] DU Lan, LIU Bin, WANG Yan, et al. Target detection method based on convolutional neural network for SAR image[J]. Journal of Electronics & Information Technology, 2016, 38(12): 3018–3025. doi: 10.11999/JEIT161032 (in Chinese)
    [14] ROSENBERG C, HEBERT M, and SCHNEIDERMAN H. Semi-supervised self-training of object detection models[C]. The 7th IEEE Workshops on Applications of Computer Vision, Breckenridge, USA, 2005: 29–36. doi: 10.1109/ACVMOT.2005.107
    [15] ZHANG Fan, DU Bo, ZHANG Liangpei, et al. Weakly supervised learning based on coupled convolutional neural networks for aircraft detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(9): 5553–5563. doi: 10.1109/TGRS.2016.2569141
    [16] IOFFE S and SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]. The 32nd International Conference on Machine Learning, Lille, France, 2015: 448–456.
    [17] GLOROT X, BORDES A, and BENGIO Y. Deep sparse rectifier neural networks[C]. The 14th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, USA, 2011: 315–323.
    [18] GUTIERREZ D. MiniSAR: A review of 4-inch and 1-foot resolution Ku-band imagery[EB/OL]. https://www.sandia.gov/radar/Web/images/SAND2005-3706P-miniSAR-flight-SAR-images.pdf, 2005.
    [19] FARADSAR public release data[EB/OL]. https://www.sandia.gov/radar/complex_data/FARAD_KA_BAND.zip.
Figures(15) / Tables(2)
計(jì)量
  • 文章訪問數(shù):  5081
  • HTML全文瀏覽量:  2385
  • PDF下載量:  371
  • 被引次數(shù): 0
Publication history
  • Received: 2019-10-12
  • Revised: 2019-12-05
  • Published online: 2019-12-09
  • Issue date: 2020-01-21
