Research Progress on Convolutional Neural Networks for Radar Automatic Target Recognition
doi: 10.11999/JEIT180899 cstr: 32379.14.JEIT180899
Research and Development on Applications of Convolutional Neural Networks for Radar Automatic Target Recognition
1. Institute of Automation, Northwestern Polytechnical University, Xi’an 710129, China
2. Aviation Key Laboratory of Science and Technology on AISSS, AVIC Leihua Electronic Technology Research Institute, Wuxi 214063, China
3. Research Institute of Information Fusion, Naval Aeronautical University, Yantai 264001, China
Keywords:
- Automatic target recognition
- Target detection
- Synthetic aperture radar
- Convolutional neural network
Abstract: Automatic Target Recognition (ATR) is an important research area in the field of radar information processing. Because deep Convolutional Neural Networks (CNN) require no feature engineering and deliver superior image classification performance, they attract increasing attention in radar automatic target recognition. This paper reviews the application of CNNs to radar image processing. Firstly, background knowledge, including the characteristics of radar images, is introduced, and the limitations of traditional radar ATR methods are pointed out. The principle and composition of CNNs, and their development in the field of computer vision, are then presented. Next, the research status of CNNs in radar ATR is surveyed, with detection and recognition methods for Synthetic Aperture Radar (SAR) images described in detail. The challenges facing radar ATR are analyzed in depth. Finally, new CNN theories and models, new radar imaging technologies, and future applications in complex environments are discussed.
-
Table 1. Differences between optical images and radar images

| Property | Optical image | Radar image |
| --- | --- | --- |
| Band | Visible light, infrared | Microwave |
| Signal form | Multi-band grayscale information | Single-band complex signal |
| Imaging principle | Energy focusing and accumulation | Coherent phase accumulation |
| Scale property | Depends on imaging distance | Target size does not change with imaging distance |
| Imaging directions | Elevation angle – azimuth angle | Range – azimuth |
Table 2. Parameter summary of some typical networks

| | LeNet-5 | AlexNet | OverFeat-fast | VGG16 | GoogLeNet v1 | ResNet-50 |
| --- | --- | --- | --- | --- | --- | --- |
| Input image size | 28×28 | 227×227 | 231×231 | 224×224 | 224×224 | 224×224 |
| Convolutional layers | 2 | 5 | 5 | 13 | 57 | 53 |
| Fully connected layers | 2 | 3 | 3 | 3 | 1 | 1 |
| Kernel sizes | 5 | 3, 5, 11 | 3, 5, 11 | 3 | 1, 3, 5, 7 | 1, 3, 7 |
| Strides | 1 | 1, 4 | 1, 4 | 1 | 1, 2 | 1, 2 |
| Weight parameters | 60 k | 61 M | 146 M | 138 M | 7 M | 25.5 M |
| Multiply operations | 341 k | 724 M | 2.8 G | 15.5 G | 1.43 G | 3.9 G |
| Top-5 error (%) | – | 16.4 | 14.2 | 7.4 | 6.7 | 5.25 |
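The weight and multiply counts summarized in Table 2 follow directly from each layer's hyper-parameters. A minimal sketch of the per-layer arithmetic (the AlexNet-style first-layer numbers below are illustrative, not figures taken from the table):

```python
def conv2d_cost(h, w, c_in, c_out, k, stride=1, padding=0):
    """Weight parameters and multiply operations of a single convolutional layer."""
    params = c_out * (c_in * k * k + 1)            # +1 per filter for the bias
    h_out = (h + 2 * padding - k) // stride + 1
    w_out = (w + 2 * padding - k) // stride + 1
    mults = h_out * w_out * c_out * c_in * k * k   # one multiply per weight per output position
    return params, mults, (h_out, w_out)

# First convolutional layer of an AlexNet-like network:
# 227x227x3 input, 96 filters of size 11x11, stride 4.
params, mults, out_shape = conv2d_cost(227, 227, 3, 96, 11, stride=4)
print(params, mults, out_shape)   # 34944 weights, ~105M multiplies, 55x55 output
```

Summing this over all layers of a network reproduces totals of the magnitude shown in the table; fully connected layers contribute `n_in * n_out` weights and the same number of multiplies, which is why AlexNet's parameter count dwarfs its convolutional share.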
Table 3. Target types and sample counts in the MSTAR dataset

| Dataset | 2S1 | BMP2 | BRDM2 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Training set | 299 | 233 | 298 | 256 | 233 | 299 | 299 | 298 | 299 | 299 |
| Test set | 274 | 587 | 274 | 195 | 196 | 274 | 196 | 274 | 274 | 274 |
Table 4. Common data augmentation techniques

| Name | Main method |
| --- | --- |
| Rotation | Rotate the image by a given angle |
| Flipping | Flip the image horizontally or vertically |
| Zooming | Enlarge or shrink the image |
| Translation | Shift the image within the image plane |
| Scale transform | Scale the image by a specified factor, changing the size or degree of blur of the image content |
| Reflection | Symmetry transforms, including axial reflection and mirror reflection |
| Noise perturbation | Add noise to the image, e.g. exponential, Gaussian, Rayleigh, or salt-and-pepper noise |
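Several of the techniques in Table 4 reduce to a few array operations. A minimal NumPy sketch (the function names, the circular-shift stand-in for translation, and the Gaussian noise level are illustrative choices, not taken from the surveyed papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_horizontal(img):
    """Flipping: mirror the image along the vertical axis."""
    return img[:, ::-1]

def translate(img, dy, dx):
    """Translation: shift within the image plane (circular shift as a simple stand-in)."""
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def add_gaussian_noise(img, sigma=0.05):
    """Noise perturbation: additive Gaussian noise, clipped back to the valid range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = rng.random((64, 64))   # a toy single-band image in [0, 1)
augmented = [flip_horizontal(img), translate(img, 3, -2), add_gaussian_noise(img)]
```

Each variant keeps the original label, so a handful of such transforms multiplies the effective size of a small labeled radar training set.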
Table 5. Comparison of CNN-based object detection methods

| Category | Method | Venue | Core idea | mAP (%) | Main characteristics |
| --- | --- | --- | --- | --- | --- |
| Region-proposal methods | R-CNN | CVPR 2014 | Selective search generates candidate windows | 66.0 | Multi-stage training; every candidate window is processed by the CNN; large disk footprint; low efficiency |
| | Fast R-CNN | ICCV 2015 | Incorporates SPP-net-style pooling | 70.0 | Still uses selective search for proposals, which is time-consuming and unsuitable for real-time use |
| | Faster R-CNN | NIPS 2015 | Proposes the RPN, integrating region generation into the CNN | 73.2 | Good trade-off between accuracy and speed, but region generation remains computationally heavy; not real-time |
| | R-FCN | NIPS 2016 | RPN + position-sensitive prediction layer + RoI pooling + voting layer | 76.6 | Faster than Faster R-CNN with comparable accuracy |
| Regression methods | YOLO | CVPR 2016 | Casts detection as a regression problem | 57.9 | No region-generation step; grid-based regression localizes weakly, so detection accuracy is lower |
| | SSD | ECCV 2016 | YOLO + proposal-style default boxes + multi-scale feature maps | 73.9 | Very fast with good accuracy |
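All six detectors in Table 5 score many overlapping candidate boxes and then prune them with non-maximum suppression. A minimal sketch of that shared post-processing step (greedy NMS over `[x1, y1, x2, y2]` boxes; this is a generic illustration, not code from any of the cited papers):

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the best-scoring box, drop heavy overlaps, repeat."""
    order = np.argsort(scores)[::-1]     # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # [0, 2]: the second box overlaps the first too much
```

The mAP figures in the table are computed after exactly this kind of suppression, so the IoU threshold directly trades duplicate detections against missed closely spaced targets, a sensitive choice for densely packed ships in SAR scenes.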
Table 6. Summary of ideas and methods in applying CNNs to radar image recognition

| Improvement type | Main idea | References and brief description |
| --- | --- | --- |
| Fast algorithms | Fast optimization and pre-training | [47]: mini-batch stochastic gradient descent with momentum to find the global optimum quickly; [45]: pre-training a shallower CNN for fast unsupervised detection; [53]: pre-training the CNN on large-sample data |
| | Replacing the fully connected layers with other structures | [40,47]: sparsely connected convolutional structure with low degrees of freedom; [39]: SVM in place of the FC layers; [53]: extreme learning machine in place of the FC layers |
| | Extracting features before training | [54]: two-step fast training that first extracts features and then trains |
| Performance improvements | Improving network generalization | [47]: dropout and early stopping; [52]: combining convolutional layers with 2D PCA |
| | Improving the cost function | [46]: adding a class-separability measure to the cost function to improve class discrimination |
| | Training with noisy samples | [49]: probability-transition model to make classification robust under noisy labels |
| Extension algorithms | Transfer learning | [26,53,55]: pre-training on large samples, with transfer learning to speed up training |
| | CAD model simulation | [56]: CAD-model target simulation to overcome the scarcity of real SAR data; [57]: CAD models generating HRRP images at different azimuth and elevation angles |
| | Preprocessing to raise information utilization | [41]: morphological component analysis preprocessing to improve performance; [58]: pre-training with denoising autoencoders |
| | Training deep networks with small samples | [42,44]: convolutional highway units to train deep networks under small-sample conditions; [59]: combining unsupervised and supervised training to cope with limited labeled data |
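Several rows of Table 6 (transfer learning [26,53,55], replacing the FC layers [39,53], feature extraction followed by training [54]) share one pattern: keep a pretrained feature extractor fixed and retrain only a small classifier on the limited radar data. A minimal NumPy sketch of that pattern, with a fixed random projection standing in for the pretrained convolutional layers (an illustrative assumption, not any of the cited architectures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for pretrained convolutional layers: a fixed (frozen) random projection.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor; its weights are never updated during fine-tuning."""
    return np.maximum(x @ W_frozen / np.sqrt(64), 0.0)   # scaled ReLU features

def train_head(X, y, n_classes=2, lr=0.1, epochs=200):
    """Retrain only the classifier head on the small target-domain dataset."""
    F = features(X)
    W = np.zeros((F.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        W -= lr * F.T @ (p - onehot) / len(X)      # gradient step on the head only
    return W

X = rng.normal(size=(40, 64))                      # toy stand-in for limited radar samples
y = (X[:, 0] > 0).astype(int)
W_head = train_head(X, y)
```

Because only the small head is optimized, the number of trainable parameters shrinks by orders of magnitude, which is precisely why these schemes cope with MSTAR-scale sample counts.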
-
References

[1] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
[2] CHENG Gong, HAN Junwei, and LU Xiaoqiang. Remote sensing image scene classification: benchmark and state of the art[J]. Proceedings of the IEEE, 2017, 105(10): 1865–1883. doi: 10.1109/JPROC.2017.2675998
[3] CHEN Xiaolong, GUAN Jian, HE You, et al. High-resolution sparse representation and its applications in radar moving target detection[J]. Journal of Radars, 2017, 6(3): 239–251. doi: 10.12000/JR16110
[4] BALL J E, ANDERSON D T, and CHAN C S. Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community[J]. Journal of Applied Remote Sensing, 2017, 11(4): 042609. doi: 10.1117/1.JRS.11.042609
[5] PEI Jifang, HUANG Yulin, HUO Weibo, et al. SAR automatic target recognition based on multiview deep learning framework[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(4): 2196–2210. doi: 10.1109/tgrs.2017.2776357
[6] GOODFELLOW I, BENGIO Y, and COURVILLE A. Deep Learning[M]. Cambridge, Massachusetts: MIT Press, 2016.
[7] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. doi: 10.1109/5.726791
[8] RUSSAKOVSKY O, DENG Jia, SU Hao, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211–252. doi: 10.1007/s11263-015-0816-y
[9] ZEILER M D and FERGUS R. Visualizing and understanding convolutional networks[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 818–833.
[10] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1–9.
[11] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. http://arxiv.org/abs/1409.1556, 2014.
[12] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90
[13] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[EB/OL]. https://arxiv.org/abs/1709.01507, 2017.
[14] XU Qiang, LI Wei, and LOUMBI P. Applications of deep convolutional neural network in SAR automatic target recognition: a summarization[J]. Telecommunication Engineering, 2018, 58(1): 106–112. doi: 10.3969/j.issn.1001-893x.2018.01.019
[15] SU Ningyuan, CHEN Xiaolong, GUAN Jian, et al. Detection and classification of maritime target with micro-motion based on CNNs[J]. Journal of Radars, 2018, 7(5): 565–574. doi: 10.12000/JR18077
[16] DU Lan, LIU Bin, WANG Yan, et al. Target detection method based on convolutional neural network for SAR image[J]. Journal of Electronics & Information Technology, 2016, 38(12): 3018–3025. doi: 10.11999/JEIT161032
[17] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
[18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 346–361.
[19] GIRSHICK R. Fast R-CNN[C]. The IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448.
[20] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]. The 28th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015: 91–99.
[21] DAI Jifeng, LI Yi, HE Kaiming, et al. R-FCN: object detection via region-based fully convolutional networks[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 379–387.
[22] KONG Tao, YAO Anbang, CHEN Yurong, et al. HyperNet: towards accurate region proposal generation and joint object detection[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 845–853.
[23] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944.
[24] HE Kaiming, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]. The 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2980–2988.
[25] WANG Sifei, CUI Zongyong, and CAO Zongjie. Target recognition in large scene SAR images based on region proposal regression[C]. The 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, USA, 2017: 3297–3300.
[26] LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. The 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6.
[27] KANG Miao, LENG Xiangguang, LIN Zhao, et al. A modified faster R-CNN based on CFAR algorithm for SAR ship detection[C]. The 2017 International Workshop on Remote Sensing with Intelligent Processing, Shanghai, China, 2017: 1–4.
[28] KANG Miao, JI Kefeng, LENG Xiangguang, et al. Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection[J]. Remote Sensing, 2017, 9(8): 860. doi: 10.3390/rs9080860
[29] JIAO Jiao, ZHANG Yue, SUN Hao, et al. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection[J]. IEEE Access, 2018, 6: 20881–20896. doi: 10.1109/ACCESS.2018.2825376
[30] ZHONG Yanfei, HAN Xiaobing, and ZHANG Liangpei. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 138: 281–294. doi: 10.1016/j.isprsjprs.2018.02.014
[31] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]. The 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779–788.
[32] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
[33] WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. Combining single shot multibox detector with transfer learning for ship detection using Chinese Gaofen-3 images[C]. The 2017 Progress in Electromagnetics Research Symposium – Fall, Singapore, 2018: 712–716.
[34] WANG Yuanyuan, WANG Chao, and ZHANG Hong. Combining a single shot multibox detector with transfer learning for ship detection using Sentinel-1 SAR images[J]. Remote Sensing Letters, 2018, 9(8): 780–788. doi: 10.1080/2150704X.2018.1475770
[35] KONG Tao, SUN Fuchun, YAO Anbang, et al. RON: reverse connection with objectness prior networks for object detection[C]. The 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5244–5252.
[36] CUI Zongyong, DANG Sihang, CAO Zongjie, et al. SAR target recognition in large scene images via region-based convolutional neural networks[J]. Remote Sensing, 2018, 10(5): 776. doi: 10.3390/rs10050776
[37] NI Jiacheng and XU Yuelei. SAR automatic target recognition based on a visual cortical system[C]. The 6th International Congress on Image and Signal Processing, Hangzhou, China, 2013: 778–782.
[38] CHEN Sizhe and WANG Haipeng. SAR target recognition based on deep learning[C]. The 2014 International Conference on Data Science and Advanced Analytics, Shanghai, China, 2014: 541–547.
[39] WAGNER S. Combination of convolutional feature extraction and support vector machines for radar ATR[C]. The 17th International Conference on Information Fusion, Salamanca, Spain, 2014: 1–6.
[40] WANG Haipeng, CHEN Sizhe, XU Feng, et al. Application of deep-learning algorithms to MSTAR data[C]. The 2015 IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 2015: 3743–3745.
[41] WAGNER S. Morphological component analysis in SAR images to improve the generalization of ATR systems[C]. The 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing, Pisa, Italy, 2015: 46–50.
[42] SCHWEGMANN C P, KLEYNHANS W, SALMON B P, et al. Very deep learning for ship discrimination in synthetic aperture radar imagery[C]. The 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 2016: 104–107.
[43] CHO J H and PARK C G. Additional feature CNN based automatic target recognition in SAR image[C]. The 40th Asian Conference on Defence Technology, Tokyo, Japan, 2017: 1–4.
[44] LIN Zhao, JI Kefeng, KANG Miao, et al. Deep convolutional highway unit network for SAR target classification with limited labeled training data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(7): 1091–1095. doi: 10.1109/lgrs.2017.2698213
[45] HE Hao, WANG Shicheng, YANG Dongfang, et al. SAR target recognition and unsupervised detection based on convolutional neural network[C]. The 2017 Chinese Automation Congress, Jinan, China, 2017: 435–438.
[46] TIAN Zhuangzhuang, ZHAN Ronghui, HU Jiemin, et al. SAR ATR based on convolutional neural network[J]. Journal of Radars, 2016, 5(3): 320–325. doi: 10.12000/JR16037
[47] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/tgrs.2016.2551720
[48] WILMANSKI M, KREUCHER C, and LAUER J. Modern approaches in deep learning for SAR ATR[J]. Proceedings of SPIE, 2016, 9843: 98430N. doi: 10.1117/12.2220290
[49] ZHAO Juanping, GUO Weiwei, LIU Bin, et al. Convolutional neural network-based SAR image classification with noisy labels[J]. Journal of Radars, 2017, 6(5): 514–523. doi: 10.12000/JR16140
[50] AMRANI M and JIANG Feng. Deep feature extraction and combination for synthetic aperture radar target classification[J]. Journal of Applied Remote Sensing, 2017, 11(4): 042616. doi: 10.1117/1.Jrs.11.042616
[51] WANG Ning, WANG Yinghua, LIU Hongwei, et al. Feature-fused SAR target discrimination using multiple convolutional neural networks[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(10): 1695–1699. doi: 10.1109/lgrs.2017.2729159
[52] ZHENG Ce, JIANG Xue, and LIU Xingzhao. Generalized synthetic aperture radar automatic target recognition by convolutional neural network with joint use of two-dimensional principal component analysis and support vector machine[J]. Journal of Applied Remote Sensing, 2017, 11(4): 046007. doi: 10.1117/1.Jrs.11.046007
[53] LIU Chen, QU Changwen, ZHOU Qiang, et al. SAR image target classification based on convolutional neural network transfer learning[J]. Modern Radar, 2018, 40(3): 38–42. doi: 10.16592/j.cnki.1004-7859.2018.03.009
[54] LI Xuan, LI Chunsheng, WANG Pengbo, et al. SAR ATR based on dividing CNN into CAE and SNN[C]. The 5th IEEE Asia-Pacific Conference on Synthetic Aperture Radar, Singapore, 2015: 676–679.
[55] LI Song, WEI Zhonghao, ZHANG Bingchen, et al. Target recognition using the transfer learning-based deep convolutional neural networks for SAR images[J]. Journal of University of Chinese Academy of Sciences, 2018, 35(1): 75–83. doi: 10.7523/j.issn.2095-6134.2018.01.010
[56] ØDEGAARD N, KNAPSKOG A O, COCHIN C, et al. Classification of ships using real and simulated data in a convolutional neural network[C]. The 2016 IEEE Radar Conference, Philadelphia, USA, 2016: 1–6.
[57] KARABAYIR O, YUCEDAG O M, KARTAL M Z, et al. Convolutional neural networks-based ship target recognition using high resolution range profiles[C]. The 18th International Radar Symposium, Prague, Czech Republic, 2017.
[58] BENTES C, VELOTTO D, and LEHNER S. Target classification in oceanographic SAR images with deep neural networks: architecture and initial results[C]. The 2015 IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 2015: 3703–3706.
[59] WANG Zhaocheng, DU Lan, WANG Fei, et al. Multi-scale target detection in SAR image based on visual attention model[C]. The 5th IEEE Asia-Pacific Conference on Synthetic Aperture Radar, Singapore, 2015: 704–709.
[60] YUAN Lele. A time-frequency feature fusion algorithm based on neural network for HRRP[J]. Progress in Electromagnetics Research M, 2017, 55: 63–71. doi: 10.2528/PIERM16123002
[61] BENGIO Y, MESNARD T, FISCHER A, et al. STDP-compatible approximation of backpropagation in an energy-based model[J]. Neural Computation, 2017, 29(3): 555–577. doi: 10.1162/NECO_a_00934
[62] LECUN Y, BENGIO Y, and HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436–444. doi: 10.1038/nature14539
[63] HOWARD A G, ZHU Menglong, CHEN Bo, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. http://arxiv.org/abs/1704.04861, 2017.
[64] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269.
[65] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. The 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 2672–2680.