
Frequency Separation Generative Adversarial Super-resolution Reconstruction Network Based on Dense Residual and Quality Assessment

HAN Yulan, CUI Yujie, LUO Yihong, LAN Chaofeng

Citation: HAN Yulan, CUI Yujie, LUO Yihong, LAN Chaofeng. Frequency Separation Generative Adversarial Super-resolution Reconstruction Network Based on Dense Residual and Quality Assessment[J]. Journal of Electronics & Information Technology, 2024, 46(12): 4563-4574. doi: 10.11999/JEIT240388


doi: 10.11999/JEIT240388 cstr: 32379.14.JEIT240388
Funds: The National Natural Science Foundation of China (11804068), The Fundamental Research Funds for the Provincial Universities (2020-KYYWF-0342)
Details
    About the authors:

    HAN Yulan: Female, Lecturer; research interests include artificial intelligence and computer vision, and big data analysis and prediction

    CUI Yujie: Female, Master's student; research interest: image reconstruction

    LUO Yihong: Female, Master's student; research interest: image reconstruction

    LAN Chaofeng: Female, Associate Professor; research interests include speech signal processing and analysis, and underwater signal analysis and processing

    Corresponding author:

    HAN Yulan, hanyulan@hrbust.edu.cn

  • CLC number: TN911.73; TP391

  • Abstract: Generative Adversarial Networks (GANs) have attracted considerable attention for offering a new approach to blind super-resolution reconstruction. Existing methods, however, do not adequately account for the fact that low-frequency content is largely preserved during image degradation: they treat high- and low-frequency components identically, make poor use of frequency detail, and consequently struggle to achieve good reconstruction quality. To address this, this paper proposes a frequency-separation generative adversarial super-resolution reconstruction network guided by dense residuals and quality assessment. The network adopts the idea of frequency separation and processes the high- and low-frequency information of an image separately, improving its ability to capture high-frequency information while simplifying the processing of low-frequency features. The basic block of the generator is redesigned by embedding Spatial Feature Transform (SFT) layers into dense wide-activation residual blocks, strengthening deep feature representation while treating local information differentially. In addition, a no-reference quality assessment network dedicated to super-resolution images is built on the Visual Geometry Group (VGG) network, supplying the reconstruction network with a new quality-assessment loss that further improves the visual quality of reconstructed images. Experimental results show that, compared with current state-of-the-art methods of the same kind, the proposed method achieves better reconstruction on multiple datasets. This demonstrates that a GAN built on frequency separation can exploit image frequency components effectively and improve reconstruction quality.
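To make the frequency-separation idea concrete, here is a minimal sketch (not the authors' released code): a low-pass filter extracts the low-frequency component of an image, and the residual is taken as the high-frequency component, so the two can be routed to separate branches of the generator. The Gaussian kernel and its size/sigma are illustrative assumptions; Table 3 below compares the concrete filter choices studied in the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    """Normalized 2-D Gaussian kernel."""
    x = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def split_frequencies(img: torch.Tensor, size: int = 5, sigma: float = 1.0):
    """Split a (B, C, H, W) image into low- and high-frequency components."""
    c = img.shape[1]
    k = gaussian_kernel(size, sigma).view(1, 1, size, size).repeat(c, 1, 1, 1)
    low = F.conv2d(img, k, padding=size // 2, groups=c)  # low-pass branch input
    high = img - low                                     # residual high frequencies
    return low, high

x = torch.rand(1, 3, 64, 64)
low, high = split_frequencies(x)
assert torch.allclose(low + high, x)  # the split loses no information
```

Because the two components sum back to the input exactly, each branch can specialize (detail synthesis vs. smooth content) without discarding anything.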
  • Figure 1  Overall architecture of the DR-QA-FSGAN network

    Figure 2  Generator

    Figure 3  Dense wide-activation residual block (DWRB) with SFT layers

    Figure 4  Quality assessment network

    Figure 5  Comparison of 4× super-resolution reconstructions of image "69015" from BSDS100 by different methods

    Figure 6  Comparison of 4× super-resolution reconstructions of image "02" from the self-built dataset by different methods

    Figure 7  Comparison of 4× super-resolution reconstructions of image "baby" from Set5 with different filters

    Figure 8  Comparison of 4× super-resolution reconstructions of image "butterfly" from Set5 with different modules

    Figure 9  Comparison of 4× super-resolution reconstructions of image "bird" from Set5 with different loss functions

    Table 1  Mean PSNR (dB) and SSIM of different methods on each dataset (×4)

    Method       Set5            Set14           BSDS100         Manga109
                 PSNR↑   SSIM↑   PSNR↑   SSIM↑   PSNR↑   SSIM↑   PSNR↑   SSIM↑
    SRGAN[11]    28.574  0.818   25.674  0.692   25.156  0.654   26.488  0.828
    ESRGAN[12]   30.438  0.852   26.278  0.699   25.323  0.651   28.245  0.859
    SFTGAN[14]   27.578  0.809   26.968  0.729   25.501  0.653   28.182  0.858
    DSGAN[17]    30.392  0.854   26.644  0.714   25.447  0.655   27.965  0.853
    SRCGAN[13]   28.068  0.789   26.071  0.696   25.659  0.657   25.295  0.796
    FxSR[15]     30.637  0.849   26.708  0.719   26.144  0.684   27.647  0.844
    SROOE[16]    30.862  0.866   27.231  0.731   26.195  0.687   27.852  0.849
    WGSR[19]     30.373  0.851   27.023  0.727   26.372  0.684   28.287  0.861
    Ours         30.904  0.872   27.715  0.749   26.838  0.701   28.312  0.867

    Table 2  Mean NIQE and FVSD of different methods on the self-built dataset (×4)

    Method       NIQE↓   FVSD↑
    SRGAN[11]    12.84   3.84
    ESRGAN[12]    8.62   6.46
    SFTGAN[14]    8.46   6.35
    DSGAN[17]     8.41   6.51
    SRCGAN[13]   10.21   4.25
    FxSR[15]      8.37   6.48
    SROOE[16]     8.19   6.49
    WGSR[19]      8.14   6.52
    Ours          8.11   6.54

    Table 3  Effect of different filters on reconstruction quality

    Filter                     PSNR (dB)↑   SSIM↑
    None                       28.831       0.835
    Neighborhood averaging     28.941       0.833
    Difference of Gaussians    29.015       0.837
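For reference, a hedged sketch of the two filters compared in Table 3; all kernel sizes and sigmas are assumptions, not the paper's settings. Neighborhood averaging is a box filter, while the difference-of-Gaussians (DoG) response is a band-pass kernel formed by subtracting a wide Gaussian from a narrow one.

```python
import torch
import torch.nn.functional as F

def _gauss2d(size: int, sigma: float) -> torch.Tensor:
    x = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def _filter(img: torch.Tensor, k2d: torch.Tensor) -> torch.Tensor:
    """Apply one 2-D kernel depthwise to a (B, C, H, W) image."""
    c, s = img.shape[1], k2d.shape[-1]
    k = k2d.view(1, 1, s, s).repeat(c, 1, 1, 1)
    return F.conv2d(img, k, padding=s // 2, groups=c)

def box_average(img: torch.Tensor, size: int = 3) -> torch.Tensor:
    """Neighborhood-average (box) low-pass filter."""
    return _filter(img, torch.full((size, size), 1.0 / size ** 2))

def dog_bandpass(img: torch.Tensor, size: int = 7,
                 s1: float = 1.0, s2: float = 2.0) -> torch.Tensor:
    """DoG response: narrow Gaussian minus wide Gaussian (band-pass)."""
    return _filter(img, _gauss2d(size, s1) - _gauss2d(size, s2))

x = torch.rand(1, 3, 64, 64)
low = box_average(x)       # smoothed (low-frequency) image
detail = dog_bandpass(x)   # mid/high-frequency detail response
```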

    Table 4  Mean PSNR (dB) and SSIM for different module combinations

    Branch structure   SFT layer   Quality assessment network   PSNR (dB)↑   SSIM↑
    ✓                  ×           ×                            28.772       0.828
    ×                  ✓           ×                            28.402       0.821
    ×                  ×           ✓                            28.642       0.823
    ✓                  ✓           ✓                            29.015       0.837
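Table 4 ablates, among other things, the SFT layer used inside the dense wide-activation residual block (Figure 3). As a point of reference, the sketch below shows a generic spatial feature transform layer in the spirit of SFTGAN [14]: condition maps are mapped to per-pixel scale and shift parameters that modulate the features. The channel widths and two-convolution heads are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Spatial feature transform: per-pixel affine modulation of features."""
    def __init__(self, feat_ch: int = 64, cond_ch: int = 32):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_ch, feat_ch, 1))
        self.shift = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_ch, feat_ch, 1))

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feat * (1 + gamma) + beta, with gamma/beta predicted from the condition
        return feat * (1 + self.scale(cond)) + self.shift(cond)

sft = SFTLayer()
y = sft(torch.rand(1, 64, 32, 32), torch.rand(1, 32, 32, 32))
```

Because the modulation is predicted per pixel, local regions of the feature map are treated differently, matching the differential handling of local information described in the abstract.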

    Table 5  Effect of different loss functions

    Loss            Color loss       Multi-layer        Adversarial loss   FVSD loss   PSNR (dB)↑   SSIM↑
    combination     Lcol    Lcol-1   perceptual loss    Ladv    Ladv-1
    Combination 1   ×       ✓        ✓                  ×       ✓          ×           28.352       0.818
    Combination 2   ×       ✓        ✓                  ×       ✓          ✓           28.831       0.835
    Combination 3   ✓       ×        ✓                  ✓       ×          ×           28.437       0.821
    Ours            ✓       ×        ✓                  ✓       ×          ✓           29.015       0.837
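The loss families in Table 5 are typically combined as a weighted sum for generator training. The sketch below is one plausible arrangement under stated assumptions: the weights, the non-saturating adversarial form, and the use of a frozen quality-assessment scorer qa_net for the FVSD term are illustrative, not the paper's published configuration.

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, disc_logits, perc_feats_sr, perc_feats_hr, qa_net,
                   w_col=1.0, w_perc=1.0, w_adv=5e-3, w_qa=1e-2):
    """Weighted sum of the four loss families compared in Table 5."""
    l_col = F.l1_loss(sr, hr)                          # color (pixel) loss
    l_perc = sum(F.l1_loss(a, b)                       # multi-layer perceptual loss
                 for a, b in zip(perc_feats_sr, perc_feats_hr))
    l_adv = F.softplus(-disc_logits).mean()            # non-saturating GAN loss
    l_qa = -qa_net(sr).mean()                          # maximize predicted quality
    return w_col * l_col + w_perc * l_perc + w_adv * l_adv + w_qa * l_qa

# Illustrative call with dummy tensors and a dummy scorer standing in for
# the no-reference quality assessment network.
sr, hr = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = generator_loss(sr, hr, torch.randn(2, 1),
                      [torch.rand(2, 64, 32, 32)], [torch.rand(2, 64, 32, 32)],
                      qa_net=lambda img: img.mean(dim=(1, 2, 3)))
```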

    Table 6  Comparison of reconstruction time and model size

    Method       Reconstruction time (ms)   Parameters (MB)
    SRGAN[11]    0.0401                      1.51
    ESRGAN[12]   0.1603                     16.69
    SFTGAN[14]   0.0664                      1.83
    DSGAN[17]    0.1723                     16.69
    SRCGAN[13]   0.0096                      0.38
    FxSR[15]     0.3541                     18.30
    SROOE[16]    0.3880                     70.20
    WGSR[19]     0.1806                     16.69
    Ours         0.1568                      9.62
  • [1] CAI Wenyu, ZHANG Meiyan, WU Yan, et al. Research on cyclic generation countermeasure network based super-resolution image reconstruction algorithm[J]. Journal of Electronics & Information Technology, 2022, 44(1): 178–186. doi: 10.11999/JEIT201046.
    [2] ZHOU Chaowei and XIONG Aimin. Fast image super-resolution using particle swarm optimization-based convolutional neural networks[J]. Sensors, 2023, 23(4): 1923. doi: 10.3390/s23041923.
    [3] WU Zhijian, LIU Wenhui, LI Jun, et al. SFHN: Spatial-frequency domain hybrid network for image super-resolution[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(11): 6459–6473. doi: 10.1109/TCSVT.2023.3271131.
    [4] CHENG Deqiang, YUAN Hang, QIAN Jiansheng, et al. Image super-resolution algorithms based on deep feature differentiation network[J]. Journal of Electronics & Information Technology, 2024, 46(3): 1033–1042. doi: 10.11999/JEIT230179.
    [5] SAHARIA C, HO J, CHAN W, et al. Image super-resolution via iterative refinement[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4713–4726. doi: 10.1109/TPAMI.2022.3204461.
    [6] DONG Chao, LOY C C, HE Kaiming, et al. Learning a deep convolutional network for image super-resolution[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 184–199. doi: 10.1007/978-3-319-10593-2_13.
    [7] KIM J, LEE J K, and LEE K M. Accurate image super-resolution using very deep convolutional networks[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016. doi: 10.1109/CVPR.2016.182.
    [8] TONG Tong, LI Gen, LIU Xiejie, et al. Image super-resolution using dense skip connections[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 4809–4817. doi: 10.1109/ICCV.2017.514.
    [9] LAN Rushi, SUN Long, LIU Zhenbing, et al. MADNet: A fast and lightweight network for single-image super resolution[J]. IEEE Transactions on Cybernetics, 2021, 51(3): 1443–1453. doi: 10.1109/TCYB.2020.2970104.
    [10] WEI Pengxu, XIE Ziwei, LU Hannan, et al. Component divide-and-conquer for real-world image super-resolution[C]. The 16th Europe Conference on Computer Vision, Glasgow, UK, 2020: 101–117. doi: 10.1007/978-3-030-58598-3_7.
    [11] LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 105–114. doi: 10.1109/CVPR.2017.19.
    [12] WANG Xintao, YU Ke, WU Shixiang, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]. The European Conference on Computer Vision, Munich, Germany, 2019: 63–79. doi: 10.1007/978-3-030-11021-5_5.
    [13] UMER R M, FORESTI G L, and MICHELONI C. Deep generative adversarial residual convolutional networks for real-world super-resolution[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, USA, 2020: 1769–1777. doi: 10.1109/CVPRW50498.2020.00227.
    [14] WANG Xintao, YU Ke, DONG Chao, et al. Recovering realistic texture in image super-resolution by deep spatial feature transform[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 606–615. doi: 10.1109/CVPR.2018.00070.
    [15] PARK S H, MOON Y S, and CHO N I. Flexible style image super-resolution using conditional objective[J]. IEEE Access, 2022, 10: 9774–9792. doi: 10.1109/ACCESS.2022.3144406.
    [16] PARK S H, MOON Y S, and CHO N I. Perception-oriented single image super-resolution using optimal objective estimation[C]. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 1725–1735. doi: 10.1109/CVPR52729.2023.00172.
    [17] FRITSCHE M, GU Shuhang, and TIMOFTE R. Frequency separation for real-world super-resolution[C]. 2019 IEEE/CVF International Conference on Computer Vision Workshop, Seoul, Korea (South), 2019: 3599–3608. doi: 10.1109/ICCVW.2019.00445.
    [18] PRAJAPATI K, CHUDASAMA V, PATEL H, et al. Direct unsupervised super-resolution using generative adversarial network (DUS-GAN) for real-world data[J]. IEEE Transactions on Image Processing, 2021, 30: 8251–8264. doi: 10.1109/TIP.2021.3113783.
    [19] KORKMAZ C, TEKALP A M, and DOGAN Z. Training generative image super-resolution models by wavelet-domain losses enables better control of artifacts[C]. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2024: 5926–5936. doi: 10.1109/CVPR52733.2024.00566.
    [20] MA Chao, YANG C Y, YANG Xiaokang, et al. Learning a no-reference quality metric for single-image super-resolution[J]. Computer Vision and Image Understanding, 2017, 158: 1–16. doi: 10.1016/j.cviu.2016.12.009.
    [21] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    [22] YANG Jianchao, WRIGHT J, HUANG T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861–2873. doi: 10.1109/TIP.2010.2050625.
    [23] ZHANG Kai, ZUO Wangmeng, and ZHANG Lei. Deep plug-and-play super-resolution for arbitrary blur kernels[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019. doi: 10.1109/CVPR.2019.00177.
    [24] TIMOFTE R, AGUSTSSON E, VAN GOOL L, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, USA, 2017: 114–125. doi: 10.1109/CVPRW.2017.149.
    [25] BEVILACQUA M, ROUMY A, GUILLEMOT C, et al. Low-complexity single image super-resolution based on nonnegative neighbor embedding[C]. The British Machine Vision Conference, 2012. doi: 10.5244/C.26.135.
    [26] ZEYDE R, ELAD M, and PROTTER M. On single image scale-up using sparse-representations[C]. The 7th International Conference on Curves and Surfaces, Avignon, France, 2012: 711–730. doi: 10.1007/978-3-642-27413-8_47.
    [27] ARBELÁEZ P, MAIRE M, FOWLKES C, et al. Contour detection and hierarchical image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5): 898–916. doi: 10.1109/TPAMI.2010.161.
    [28] MATSUI Y, ITO K, ARAMAKI Y, et al. Sketch-based manga retrieval using manga109 dataset[J]. Multimedia Tools and Applications, 2017, 76(20): 21811–21838. doi: 10.1007/s11042-016-4020-z.
Publication history
  • Received: 2024-05-16
  • Revised: 2024-11-09
  • Published online: 2024-11-18
  • Issue date: 2024-12-01
