
Remote Sensing Image Fusion Based on Generative Adversarial Network with Multi-stream Fusion Architecture

Dajiang LEI, Ce ZHANG, Zhixing LI, Yu WU

Citation: Dajiang LEI, Ce ZHANG, Zhixing LI, Yu WU. Remote Sensing Image Fusion Based on Generative Adversarial Network with Multi-stream Fusion Architecture[J]. Journal of Electronics & Information Technology, 2020, 42(8): 1942-1949. doi: 10.11999/JEIT190273


doi: 10.11999/JEIT190273 cstr: 32379.14.JEIT190273
Funds: The Chongqing Innovative Project of Overseas Study (cx2018120), The National Social Science Foundation of China (17XFX013), The Natural Science Foundation of Chongqing (cstc2015jcyjA40018)
Details
    Author biographies:

    Dajiang LEI: Male, born in 1979, Associate Professor. Research interest: machine learning

    Ce ZHANG: Male, born in 1994, Master's student. Research interest: image processing

    Zhixing LI: Male, born in 1985, Associate Professor. Research interest: natural language processing

    Yu WU: Female, born in 1970, Professor. Research interest: network intelligence

    Corresponding author:

    Dajiang LEI, leidj@cqupt.edu.cn

  • CLC number: TN911.73; TP751

  • Abstract:

    Thanks to their powerful ability to generate high-quality images, Generative Adversarial Networks (GANs) have received wide attention in computer vision research such as image fusion and image super-resolution. Existing GAN-based remote sensing image fusion methods only use the network to learn the mapping between images and do not exploit the pan-sharpening domain knowledge specific to remote sensing imagery. This paper proposes an improved GAN-based remote sensing image fusion method that incorporates the spatial structure information of the panchromatic image. The spatial structure information of the panchromatic image is extracted with a gradient operator, and the extracted features are fed into both the discriminator and a generator with a multi-stream fusion architecture; corresponding optimization objectives and fusion rules are designed accordingly, which improves the quality of the fused image. Experiments on images acquired by the WorldView-3 satellite show that the proposed method generates high-quality fused images and outperforms most state-of-the-art remote sensing image fusion methods in both subjective visual quality and objective evaluation metrics.
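
    As a reading aid, the following minimal PyTorch sketch illustrates the two ideas the abstract describes: extracting the spatial structure of the panchromatic (PAN) image with a gradient operator, and feeding those features to both a multi-stream generator and the discriminator. The Sobel operator, layer sizes, concatenation-based fusion, and residual connection are illustrative assumptions, not the authors' implementation; the adversarial and spectral loss terms are omitted.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def pan_gradient(pan):
            # Sobel gradient magnitude of the PAN image, shape (N, 1, H, W)
            kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
            ky = kx.transpose(2, 3)
            gx = F.conv2d(pan, kx.to(pan.device), padding=1)
            gy = F.conv2d(pan, ky.to(pan.device), padding=1)
            return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

        class TwoStreamGenerator(nn.Module):
            # one stream encodes the upsampled MS image, the other the PAN gradient map;
            # the features are concatenated and decoded into the pan-sharpened image
            def __init__(self, ms_bands=8):
                super().__init__()
                self.ms_stream = nn.Sequential(nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU(),
                                               nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
                self.pan_stream = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
                self.fuse = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                                          nn.Conv2d(64, ms_bands, 3, padding=1))

            def forward(self, ms_lr, pan):
                ms_up = F.interpolate(ms_lr, size=pan.shape[-2:], mode='bicubic', align_corners=False)
                feats = torch.cat([self.ms_stream(ms_up), self.pan_stream(pan_gradient(pan))], dim=1)
                return self.fuse(feats) + ms_up   # residual learning on top of the upsampled MS

        class Discriminator(nn.Module):
            # the discriminator also receives the PAN gradient map, so spatial structure
            # is judged together with spectral content
            def __init__(self, ms_bands=8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(ms_bands + 1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(64, 1, 3, padding=1))   # patch-level real/fake logits

            def forward(self, ms_img, pan_grad):
                return self.net(torch.cat([ms_img, pan_grad], dim=1))

        # WorldView-3 has 8 MS bands; a 1:4 MS-to-PAN resolution ratio is typical
        ms_lr, pan = torch.randn(1, 8, 64, 64), torch.randn(1, 1, 256, 256)
        fused = TwoStreamGenerator()(ms_lr, pan)
        scores = Discriminator()(fused, pan_gradient(pan))
        print(fused.shape, scores.shape)   # (1, 8, 256, 256) and (1, 1, 64, 64)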

  • Figure 1  Generative adversarial network framework for fusing the low-resolution multispectral image with the gradient information of the panchromatic image

    Figure 2  Detailed structure of the multi-stream fusion architecture

    Figure 3  Fusion results of the simulated experiment on the WorldView-3 satellite dataset

    Figure 4  Residual maps between each method in Figure 3 and the reference image

    Figure 5  Fusion results on real WorldView-3 satellite data

    Figure 6  Key local regions of the fusion results on real WorldView-3 satellite data

    Table 1  Evaluation of the fusion results of the simulated experiment on WorldView-3 satellite data

    Fusion method      SAM       ERGAS     $Q_8$     SCC
    ATWT-M3            8.0478    6.5208    0.7137    0.7717
    BDSD               7.6455    6.4314    0.8074    0.8834
    PanNet             5.8690    4.8296    0.8606    0.9080
    PCNN               5.5930    4.5703    0.8968    0.9332
    PSGAN              5.5657    4.1941    0.9000    0.9373
    Proposed method    5.4570    4.2200    0.9053    0.9404
    Reference value    0         0         1         1
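
    For reference, the two reference-based indices SAM and ERGAS in Table 1 follow their standard definitions (lower is better, 0 is ideal; $Q_8$ and SCC range up to 1). The numpy sketch below is illustrative only; the 1:4 MS-to-PAN resolution ratio used in ERGAS is an assumption typical for WorldView-3, not a value stated in the table.

        import numpy as np

        def sam(reference, fused, eps=1e-12):
            # Spectral Angle Mapper in degrees, averaged over all pixels; arrays are (H, W, bands)
            dot = np.sum(reference * fused, axis=-1)
            norm = np.linalg.norm(reference, axis=-1) * np.linalg.norm(fused, axis=-1)
            angle = np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0))
            return np.degrees(angle.mean())

        def ergas(reference, fused, ratio=1 / 4):
            # relative global error: 100 * (h/l) * sqrt(mean_k (RMSE_k / mean_k)^2)
            rmse = np.sqrt(np.mean((reference - fused) ** 2, axis=(0, 1)))
            means = reference.mean(axis=(0, 1))
            return 100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2))

        ref = np.random.rand(64, 64, 8) + 0.5              # synthetic 8-band reference image
        fus = ref + 0.01 * np.random.randn(64, 64, 8)      # near-perfect fusion
        print(sam(ref, fus), ergas(ref, fus))              # both values close to 0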

    Table 2  Evaluation of the fusion results of the real-data experiment on WorldView-3 satellite data

    Fusion method      $D_\lambda$    $D_s$     QNR
    ATWT-M3            0.0750         0.1099    0.8233
    BDSD               0.0528         0.0617    0.8888
    PanNet             0.0653         0.0509    0.8871
    PCNN               0.0642         0.0486    0.8903
    PSGAN              0.0612         0.0452    0.8964
    Proposed method    0.0554         0.0412    0.9057
    Reference value    0              0         1
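
    The three no-reference indices in Table 2 are linked by the standard QNR definition, QNR = (1 − $D_\lambda$)^α (1 − $D_s$)^β, with 1 as the ideal value; assuming the usual exponents α = β = 1 reproduces the QNR column of the table, as the quick check below shows.

        # spectral distortion and spatial distortion of the proposed method in Table 2
        d_lambda, d_s = 0.0554, 0.0412
        print(round((1 - d_lambda) * (1 - d_s), 4))   # 0.9057, matching the table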
  • THOMAS C, RANCHIN T, WALD L, et al. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(5): 1301–1312. doi: 10.1109/TGRS.2007.912448
    LIU Pengfei, XIAO Liang, ZHANG Jun, et al. Spatial-hessian-feature-guided variational model for pan-sharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(4): 2235–2253. doi: 10.1109/TGRS.2015.2497966
    JI Feng, LI Zeren, CHANG Xia, et al. Remote sensing image fusion method based on PCA and NSCT transform[J]. Journal of Graphics, 2017, 38(2): 247–252. doi: 10.11996/JG.j.2095-302X.2017020247
    RAHMANI S, STRAIT M, MERKURJEV D, et al. An adaptive IHS Pan-sharpening method[J]. IEEE Geoscience and Remote Sensing Letters, 2010, 7(4): 746–750. doi: 10.1109/LGRS.2010.2046715
    GARZELLI A, NENCINI F, and CAPOBIANCO L. Optimal MMSE Pan sharpening of very high resolution multispectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(1): 228–236. doi: 10.1109/TGRS.2007.907604
    RANCHIN T and WALD L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation[J]. Photogrammetric Engineering and Remote Sensing, 2000, 66(1): 49–61.
    XIAO Huachao, ZHOU Quan, and ZHENG Xiaosong. A fusion method of satellite remote sensing image based on IHS transform and Curvelet transform[J]. Journal of South China University of Technology (Natural Science Edition), 2016, 44(1): 58–64. doi: 10.3969/j.issn.1000-565X.2016.01.009
    ZENG Delu, HU Yuwen, HUANG Yue, et al. Pan-sharpening with structural consistency and ℓ1/2 gradient prior[J]. Remote Sensing Letters, 2016, 7(12): 1170–1179. doi: 10.1080/2150704X.2016.1222098
    LIU Yu, CHEN Xun, WANG Zengfu, et al. Deep learning for pixel-level image fusion: Recent advances and future prospects[J]. Information Fusion, 2018, 42: 158–173. doi: 10.1016/J.INFFUS.2017.10.007
    YANG Junfeng, FU Xueyang, HU Yuwen, et al. PanNet: A deep network architecture for pan-sharpening[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 1753–1761. doi: 10.1109/ICCV.2017.193.
    MASI G, COZZOLINO D, VERDOLIVA L, et al. Pansharpening by convolutional neural networks[J]. Remote Sensing, 2016, 8(7): 594. doi: 10.3390/rs8070594
    LIU Xiangyu, WANG Yunhong, and LIU Qingjie. PSGAN: A generative adversarial network for remote sensing image Pan-sharpening[C]. The 25th IEEE International Conference on Image Processing, Athens, Greece, 2018: 873–877. doi: 10.1109/ICIP.2018.8451049.
    GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. The 27th International Conference on Neural Information Processing Systems, Cambridge, USA, 2014: 2672–2680.
    AIAZZI B, ALPARONE L, BARONTI S, et al. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery[J]. Photogrammetric Engineering & Remote Sensing, 2006, 72(5): 591–596. doi: 10.14358/PERS.72.5.591
    RONNEBERGER O, FISCHER P, and BROX T. U-net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    GARZELLI A and NENCINI F. Hypercomplex quality assessment of multi/hyperspectral images[J]. IEEE Geoscience and Remote Sensing Letters, 2009, 6(4): 662–665. doi: 10.1109/LGRS.2009.2022650
    WALD L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions[M]. Paris, France: Ecole des Mines de Paris, 2002: 165–189.
    VIVONE G, ALPARONE L, CHANUSSOT J, et al. A critical comparison among pansharpening algorithms[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(5): 2565–2586. doi: 10.1109/TGRS.2014.2361734
    ZHANG Xinman and HAN Jiuqiang. Image fusion of multiscale contrast pyramid-based vision feature and its performance evaluation[J]. Journal of Xi'an Jiaotong University, 2004, 38(4): 380–383. doi: 10.3321/j.issn.0253-987X.2004.04.013
    ALPARONE L, AIAZZI B, BARONTI S, et al. Multispectral and panchromatic data fusion assessment without reference[J]. Photogrammetric Engineering & Remote Sensing, 2008, 74(2): 193–200. doi: 10.14358/PERS.74.2.193
Publication history
  • Received: 2019-04-19
  • Revised: 2020-02-21
  • Available online: 2020-06-26
  • Published: 2020-08-18
