doi: 10.11999/JEIT190273  cstr: 32379.14.JEIT190273
Remote Sensing Image Fusion Based on Generative Adversarial Network with Multi-stream Fusion Architecture

1. College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2. Institute of Web Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Abstract: Owing to its strong ability to generate high-quality images, the generative adversarial network has received extensive attention in computer vision research such as image fusion and image super-resolution. Existing remote sensing image fusion methods based on generative adversarial networks only use the network to learn the mapping between images and do not exploit domain knowledge specific to pan-sharpening of remote sensing images. This paper proposes an optimized generative adversarial network fusion method that incorporates the spatial structure information of the panchromatic image. The spatial structure information of the panchromatic image is extracted with a gradient operator, and the extracted features are fed to both the discriminator and a generator with a multi-stream fusion architecture; the corresponding optimization objective and fusion rules are designed accordingly to improve the quality of the fused image. Experiments on images acquired by the WorldView-3 satellite show that the proposed method generates high-quality fused images and outperforms most state-of-the-art remote sensing image fusion methods in both subjective visual quality and objective evaluation metrics.
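As a hedged illustration of the gradient-based extraction of spatial structure described above: the paper does not specify which gradient operator it uses, so the sketch below assumes simple finite-difference gradients and a hypothetical `spatial_structure` helper.

```python
import numpy as np

def spatial_structure(pan):
    """Extract the spatial structure of a panchromatic (PAN) image.

    The paper feeds gradient features of the PAN image to both the
    generator and the discriminator; the exact operator is not stated,
    so central finite differences (np.gradient) are assumed here, and
    the gradient magnitude is used as the structure map.
    """
    gy, gx = np.gradient(pan.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2)  # per-pixel gradient magnitude

# Toy usage: a vertical intensity edge yields nonzero magnitude near
# the edge columns and zero in the flat left region.
pan = np.zeros((4, 4))
pan[:, 2:] = 1.0
grad = spatial_structure(pan)
```

In a pan-sharpening network this map (or the raw `gx`, `gy` channels) would be concatenated with the multispectral streams as an extra input.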
Table 1  Evaluation of fusion results in the simulated experiment on WorldView-3 data

Fusion method    SAM     ERGAS   $Q_8$   SCC
ATWT-M3          8.0478  6.5208  0.7137  0.7717
BDSD             7.6455  6.4314  0.8074  0.8834
PanNet           5.8690  4.8296  0.8606  0.9080
PCNN             5.5930  4.5703  0.8968  0.9332
PSGAN            5.5657  4.1941  0.9000  0.9373
Proposed method  5.4570  4.2200  0.9053  0.9404
Reference value  0       0       1       1
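The SAM column above measures spectral distortion as the mean angle between corresponding spectral vectors of the fused and reference images (0 is ideal). A minimal sketch of the standard Spectral Angle Mapper, assuming H×W×C arrays and a `sam` helper name not taken from the paper:

```python
import numpy as np

def sam(fused, reference, eps=1e-12):
    """Spectral Angle Mapper: mean angle in degrees between the spectral
    vectors of two images; 0 means the spectra are identical up to scale."""
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    r = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    dots = np.sum(f * r, axis=1)
    norms = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + eps
    angles = np.degrees(np.arccos(np.clip(dots / norms, -1.0, 1.0)))
    return float(angles.mean())

# Usage: a fused image identical to the reference scores (near) zero,
# and SAM is invariant to a uniform gain applied to all bands.
img = np.random.rand(8, 8, 4)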
Table 2  Evaluation of fusion results in the real-data experiment on WorldView-3 data

Fusion method    $D_\lambda$  $D_s$   QNR
ATWT-M3          0.0750       0.1099  0.8233
BDSD             0.0528       0.0617  0.8888
PanNet           0.0653       0.0509  0.8871
PCNN             0.0642       0.0486  0.8903
PSGAN            0.0612       0.0452  0.8964
Proposed method  0.0554       0.0412  0.9057
Reference value  0            0       1
[1] THOMAS C, RANCHIN T, WALD L, et al. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(5): 1301–1312. doi: 10.1109/TGRS.2007.912448
[2] LIU Pengfei, XIAO Liang, ZHANG Jun, et al. Spatial-hessian-feature-guided variational model for pan-sharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(4): 2235–2253. doi: 10.1109/TGRS.2015.2497966
[3] JI Feng, LI Zeren, CHANG Xia, et al. Remote sensing image fusion method based on PCA and NSCT transform[J]. Journal of Graphics, 2017, 38(2): 247–252. doi: 10.11996/JG.j.2095-302X.2017020247
[4] RAHMANI S, STRAIT M, MERKURJEV D, et al. An adaptive IHS Pan-sharpening method[J]. IEEE Geoscience and Remote Sensing Letters, 2010, 7(4): 746–750. doi: 10.1109/LGRS.2010.2046715
[5] GARZELLI A, NENCINI F, and CAPOBIANCO L. Optimal MMSE Pan sharpening of very high resolution multispectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(1): 228–236. doi: 10.1109/TGRS.2007.907604
[6] RANCHIN T and WALD L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation[J]. Photogrammetric Engineering and Remote Sensing, 2000, 66(1): 49–61.
[7] XIAO Huachao, ZHOU Quan, and ZHENG Xiaosong. A fusion method of satellite remote sensing image based on IHS transform and Curvelet transform[J]. Journal of South China University of Technology: Natural Science Edition, 2016, 44(1): 58–64. doi: 10.3969/j.issn.1000-565X.2016.01.009
[8] ZENG Delu, HU Yuwen, HUANG Yue, et al. Pan-sharpening with structural consistency and ℓ1/2 gradient prior[J]. Remote Sensing Letters, 2016, 7(12): 1170–1179. doi: 10.1080/2150704X.2016.1222098
[9] LIU Yu, CHEN Xun, WANG Zengfu, et al. Deep learning for pixel-level image fusion: Recent advances and future prospects[J]. Information Fusion, 2018, 42: 158–173. doi: 10.1016/J.INFFUS.2017.10.007
[10] YANG Junfeng, FU Xueyang, HU Yuwen, et al. PanNet: A deep network architecture for pan-sharpening[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 1753–1761. doi: 10.1109/ICCV.2017.193
[11] MASI G, COZZOLINO D, VERDOLIVA L, et al. Pansharpening by convolutional neural networks[J]. Remote Sensing, 2016, 8(7): 594. doi: 10.3390/rs8070594
[12] LIU Xiangyu, WANG Yunhong, and LIU Qingjie. PSGAN: A generative adversarial network for remote sensing image Pan-sharpening[C]. The 25th IEEE International Conference on Image Processing, Athens, Greece, 2018: 873–877. doi: 10.1109/ICIP.2018.8451049
[13] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. The 27th International Conference on Neural Information Processing Systems, Cambridge, USA, 2014: 2672–2680.
[14] AIAZZI B, ALPARONE L, BARONTI S, et al. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery[J]. Photogrammetric Engineering & Remote Sensing, 2006, 72(5): 591–596. doi: 10.14358/PERS.72.5.591
[15] RONNEBERGER O, FISCHER P, and BROX T. U-net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28
[16] GARZELLI A and NENCINI F. Hypercomplex quality assessment of multi/hyperspectral images[J]. IEEE Geoscience and Remote Sensing Letters, 2009, 6(4): 662–665. doi: 10.1109/LGRS.2009.2022650
[17] WALD L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions[M]. Paris, France: Ecole des Mines de Paris, 2002: 165–189.
[18] VIVONE G, ALPARONE L, CHANUSSOT J, et al. A critical comparison among pansharpening algorithms[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(5): 2565–2586. doi: 10.1109/TGRS.2014.2361734
[19] ZHANG Xinman and HAN Jiuqiang. Image fusion of multiscale contrast pyramid-based vision feature and its performance evaluation[J]. Journal of Xi'an Jiaotong University, 2004, 38(4): 380–383. doi: 10.3321/j.issn.0253-987X.2004.04.013
[20] ALPARONE L, AIAZZI B, BARONTI S, et al. Multispectral and panchromatic data fusion assessment without reference[J]. Photogrammetric Engineering & Remote Sensing, 2008, 74(2): 193–200. doi: 10.14358/PERS.74.2.193