

A Network Model for Sea Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion

ZHAO Zijian  XU Shuwen  SHUI Penglang

Citation: ZHAO Zijian, XU Shuwen, SHUI Penglang. A Network Model for Sea Surface Small Targets Classification Based on Multidomain Radar Echo Data Fusion[J]. Journal of Electronics & Information Technology, 2025, 47(3): 696-706. doi: 10.11999/JEIT240818


doi: 10.11999/JEIT240818 cstr: 32379.14.JEIT240818
Funds: The National Natural Science Foundation of China (62371382)
Details
    Author biographies:

    ZHAO Zijian: male, PhD candidate; research interests include sea surface small target recognition, deep learning, machine learning, and target detection

    XU Shuwen: male, professor; research interests include radar target detection and recognition, machine learning, time-frequency analysis, and SAR image processing

    SHUI Penglang: male, professor; research interests include sea clutter modeling, radar target detection and recognition, and image processing

    Corresponding author:

    SHUI Penglang, plshui@xidian.edu.cn

  • CLC number: TN957.52

  • Abstract: Sea surface small target recognition is an important and challenging problem in maritime radar surveillance. Because sea surface small targets are diverse in type and the maritime environment is complex and changeable, classifying them effectively is difficult. Under high-resolution radar, a sea surface small target usually occupies only one or a few range cells and lacks sufficient spatial scattering-structure information, so fluctuations of the target's Radar Cross Section (RCS) and changes in its radial velocity become the main basis for classification. To this end, this paper proposes a classification network model based on multidomain radar echo data fusion for sea surface small target classification. Since data in different domains carry distinct physical meanings, a time-domain LeNet (T-LeNet) neural network module and a time-frequency feature extraction neural network module are constructed to extract features from the amplitude sequence and the Time-Frequency Distribution (TFD, i.e., time-frequency diagram) of radar sea surface echoes, respectively. The amplitude sequence mainly reflects the RCS fluctuation characteristics of the target, while the time-frequency diagram reflects not only RCS fluctuations but also changes in the target's radial velocity. Finally, a dataset covering four types of sea surface small targets is constructed from the IPIX and CSIR databases and a self-measured UAV dataset: an anchored floating ball, floating boats, a low-altitude Unmanned Aerial Vehicle (UAV), and moving speedboats. Experimental results show that the proposed method has good recognition capability.
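As an illustration of the two input domains described in the abstract, the sketch below builds both network inputs from a simulated single-range-cell echo: the amplitude sequence for the T-LeNet branch and a time-frequency diagram for the CNN branch. All signal parameters are hypothetical, and a windowed short-time Fourier transform is used as a simple stand-in for the paper's TFD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated complex echo from a single range cell: a target with a slowly
# varying Doppler shift buried in noise (all parameters are hypothetical).
prf = 1000.0                 # pulse repetition frequency, Hz
n_pulses = 1024
t = np.arange(n_pulses) / prf
doppler = 100.0 + 30.0 * np.sin(2 * np.pi * 0.5 * t)   # time-varying Doppler, Hz
phase = 2 * np.pi * np.cumsum(doppler) / prf
echo = np.exp(1j * phase) + 0.5 * (rng.standard_normal(n_pulses)
                                   + 1j * rng.standard_normal(n_pulses))

# Branch 1 input: the amplitude sequence, which reflects RCS fluctuation.
amplitude = np.abs(echo)

# Branch 2 input: a time-frequency diagram, which additionally captures the
# radial velocity (Doppler) variation over time.
win, hop = 128, 16
frames = np.array([echo[i:i + win] * np.hanning(win)
                   for i in range(0, n_pulses - win + 1, hop)])
tfd = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2

print(amplitude.shape, tfd.shape)   # shapes fed to the two network branches
```

The two representations are deliberately complementary: the amplitude sequence discards phase (keeping only RCS fluctuation), while the time-frequency diagram keeps the Doppler history as well.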
  • Figure 1  The four types of sea surface small targets

    Figure 2  Example amplitude sequences and time-frequency diagrams of the four target classes

    Figure 3  Flowchart of the target recognition network

    Figure 4  Network architecture

    Figure 5  Confusion matrices

    Figure 6  Loss and accuracy on the training and validation sets for the different experiments

    Table 1  The four target classes and their radar parameters

    | Target type | Data source | Range resolution (m) | PRF (kHz) | Carrier frequency (GHz) | Polarization | Operating mode | Beamwidth (°) | Sea state | Training samples | Test samples |
    |---|---|---|---|---|---|---|---|---|---|---|
    | Anchored floating ball | IPIX93 | 30 | 1 | 9.39 | HH/HV/VH/VV | Dwell | 0.9 | 2/3 | 21940 | 6560 |
    | Floating boat | IPIX98 | 30 | 1 | 9.39 | HH/HV/VH/VV | Dwell | 0.9 | / | 12768 | 3416 |
    | Low-altitude UAV | Lingshan Island | 3 | 4 | 10.00 | HH/VV | Dwell | 1.1 | 2 | 7569 | 1893 |
    | Moving speedboat | CSIR | 15 | 2.5/5 | 6.90 | VV | Tracking | 1.8 | 3 | 2920 | 964 |

    Table 2  Confusion matrix

    |               | Predicted: Target 1 | Predicted: Target 2 | Predicted: Target 3 | Predicted: Target 4 |
    |---------------|---------------------|---------------------|---------------------|---------------------|
    | True: Target 1 | T1P1 | F1P2 | F1P3 | F1P4 |
    | True: Target 2 | F2P1 | T2P2 | F2P3 | F2P4 |
    | True: Target 3 | F3P1 | F3P2 | T3P3 | F3P4 |
    | True: Target 4 | F4P1 | F4P2 | F4P3 | T4P4 |

    Table 3  Classification results of the different experiments under six evaluation metrics

    | Method | Accuracy | Error | Precision | Recall | F1-measure | Kappa |
    |---|---|---|---|---|---|---|
    | TFD + AlexNet | 0.7773 | 0.2227 | 0.8139 | 0.7912 | 0.8024 | 0.6403 |
    | TFD + Vgg16[5] | 0.8022 | 0.1978 | 0.8461 | 0.8202 | 0.8330 | 0.6792 |
    | TFD + ResNet18 | 0.8145 | 0.1855 | 0.8487 | 0.8238 | 0.8361 | 0.7006 |
    | Amplitude sequence + T-LeNet | 0.9250 | 0.0750 | 0.9145 | 0.9130 | 0.9138 | 0.8823 |
    | Amplitude sequence + TFD + T-LeNet + AlexNet | 0.9426 | 0.0574 | 0.9440 | 0.9528 | 0.9484 | 0.9106 |
    | Amplitude sequence + TFD + T-LeNet + Vgg16 | 0.9549 | 0.0451 | 0.9558 | 0.9578 | 0.9568 | 0.9296 |
    | Amplitude sequence + TFD + T-LeNet + ResNet18 | 0.9721 | 0.0279 | 0.9708 | 0.9776 | 0.9742 | 0.9567 |

    Table 4  Classification results of different ResNet networks under six evaluation metrics

    | Method | Accuracy | Error | Precision | Recall | F1-measure | Kappa |
    |---|---|---|---|---|---|---|
    | TFD + ResNet18 | 0.8145 | 0.1855 | 0.8487 | 0.8238 | 0.8361 | 0.7006 |
    | TFD + ResNet34 | 0.8245 | 0.1755 | 0.8677 | 0.8379 | 0.8525 | 0.7165 |
    | TFD + ResNet50 | 0.8202 | 0.1798 | 0.8619 | 0.8359 | 0.8487 | 0.7100 |
    | Amplitude sequence + TFD + T-LeNet + ResNet18 | 0.9721 | 0.0279 | 0.9708 | 0.9776 | 0.9742 | 0.9567 |
    | Amplitude sequence + TFD + T-LeNet + ResNet34 | 0.9726 | 0.0274 | 0.9729 | 0.9775 | 0.9752 | 0.9574 |
    | Amplitude sequence + TFD + T-LeNet + ResNet50 | 0.9736 | 0.0264 | 0.9707 | 0.9777 | 0.9742 | 0.9589 |

    Table 5  Parameter counts, training time, test time, and per-sample test time of the networks

    | Network | Parameters (M) | Training time (min) | Test time (s) | Per-sample test time (ms) |
    |---|---|---|---|---|
    | T-LeNet | 8.4669 | 7.25 | 23.87 | 1.86 |
    | AlexNet | 61.1048 | 87.13 | 46.97 | 3.66 |
    | Vgg16 | 138.3651 | 216.15 | 55.05 | 4.29 |
    | ResNet18 | 11.1786 | 120.12 | 45.94 | 3.58 |
    | ResNet34 | 21.2867 | 195.19 | 72.65 | 5.66 |
    | ResNet50 | 23.5162 | 330.33 | 136.33 | 10.62 |
    | T-LeNet+AlexNet | 71.7922 | 95.37 | 51.46 | 4.01 |
    | T-LeNet+Vgg16 | 149.0489 | 233.67 | 59.67 | 4.65 |
    | T-LeNet+ResNet18 | 21.7429 | 130.32 | 50.69 | 3.95 |
    | T-LeNet+ResNet34 | 31.8511 | 203.62 | 85.37 | 6.65 |
    | T-LeNet+ResNet50 | 34.4677 | 342.09 | 145.39 | 11.33 |
  • [1] ZHANG Tianwen, ZHANG Xiaoling, KE Xiao, et al. HOG-ShipCLSNet: A novel deep learning network with HOG feature fusion for SAR ship classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5210322. doi: 10.1109/TGRS.2021.3082759.
    [2] GUAN Jian. Summary of marine radar target characteristics[J]. Journal of Radars, 2020, 9(4): 674–683. doi: 10.12000/JR20114.
    [3] NI Jun, ZHANG Fan, YIN Qiang, et al. Random neighbor pixel-block-based deep recurrent learning for polarimetric SAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7557–7569. doi: 10.1109/TGRS.2020.3037209.
    [4] LEE S J, LEE M J, KIM K T, et al. Classification of ISAR images using variable cross-range resolutions[J]. IEEE Transactions on Aerospace and Electronic Systems, 2018, 54(5): 2291–2303. doi: 10.1109/TAES.2018.2814211.
    [5] XU Shuwen, RU Hongtao, LI Dongchen, et al. Marine radar small target classification based on block-whitened time-frequency spectrogram and pre-trained CNN[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5101311. doi: 10.1109/TGRS.2023.3240693.
    [6] GUO Zixun and SHUI Penglang. Anomaly based sea-surface small target detection using K-nearest neighbor classification[J]. IEEE Transactions on Aerospace and Electronic Systems, 2020, 56(6): 4947–4964. doi: 10.1109/TAES.2020.3011868.
    [7] KUO B C, HO H H, LI C H, et al. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7(1): 317–326. doi: 10.1109/JSTARS.2013.2262926.
    [8] YIN Qiang, CHENG Jianda, ZHANG Fan, et al. Interpretable POLSAR image classification based on adaptive-dimension feature space decision tree[J]. IEEE Access, 2020, 8: 173826–173837. doi: 10.1109/ACCESS.2020.3023134.
    [9] JIA Qingwei, DENG Tingquan, WANG Yan, et al. Discriminative label correlation based robust structure learning for multi-label feature selection[J]. Pattern Recognition, 2024, 154: 110583. doi: 10.1016/j.patcog.2024.110583.
    [10] ZHONG Jingyu, SHANG Ronghua, ZHAO Feng, et al. Negative label and noise information guided disambiguation for partial multi-label learning[J]. IEEE Transactions on Multimedia, 2024, 26: 9920–9935. doi: 10.1109/TMM.2024.3402534.
    [11] ZHAO Jie, LING Yun, HUANG Faliang, et al. Incremental feature selection for dynamic incomplete data using sub-tolerance relations[J]. Pattern Recognition, 2024, 148: 110125. doi: 10.1016/j.patcog.2023.110125.
    [12] ZOU Yizhang, HU Xuegang, and LI Peipei. Gradient-based multi-label feature selection considering three-way variable interaction[J]. Pattern Recognition, 2024, 145: 109900. doi: 10.1016/j.patcog.2023.109900.
    [13] SUN Xu, GAO Junyu, and YUAN Yuan. Alignment and fusion using distinct sensor data for multimodal aerial scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5626811. doi: 10.1109/TGRS.2024.3406697.
    [14] WU Xin, HONG Danfeng, and CHANUSSOT J. Convolutional neural networks for multimodal remote sensing data classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5517010. doi: 10.1109/TGRS.2021.3124913.
    [15] DUAN Guoxing, WANG Yunhua, ZHANG Yanmin, et al. A network model for detecting marine floating weak targets based on multimodal data fusion of radar echoes[J]. Sensors, 2022, 22(23): 9163. doi: 10.3390/s22239163.
    [16] Cognitive Systems Laboratory. The IPIX radar database[EB/OL]. http://soma.ece.mcmaster.ca/ipix, 2001.
    [17] The Defense, Peace, Safety, and Security Unit of the Council for Scientific and Industrial Research. The Fynmeet radar database[EB/OL]. http://www.csir.co.ca/small_boat_detection, 2007.
    [18] RICHARD C. Time-frequency-based detection using discrete-time discrete-frequency Wigner distributions[J]. IEEE Transactions on Signal Processing, 2002, 50(9): 2170–2176. doi: 10.1109/TSP.2002.801927.
    [19] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [20] KAYED M, ANTER A, and MOHAMED H. Classification of garments from fashion MNIST dataset using CNN LeNet-5 architecture[C]. 2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), Aswan, Egypt, 2020: 238–243. doi: 10.1109/ITCE48509.2020.9047776.
    [21] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386.
    [22] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
Figures(6) / Tables(5)
Publication history
  • Received: 2024-09-24
  • Revised: 2025-02-21
  • Published online: 2025-03-06
  • Issue date: 2025-03-01
