
Motion Defocus Infrared Image Restoration Based on Multi Scale Generative Adversarial Network

Shi YI, Zhijuan WU, Jingming ZHU, Xinrong LI, Xuesong YUAN

Shi YI, Zhijuan WU, Jingming ZHU, Xinrong LI, Xuesong YUAN. Motion Defocus Infrared Image Restoration Based on Multi Scale Generative Adversarial Network[J]. Journal of Electronics & Information Technology, 2020, 42(7): 1766-1773. doi: 10.11999/JEIT190495


doi: 10.11999/JEIT190495 cstr: 32379.14.JEIT190495
Funds: The National Natural Science Foundation of China (61771096)
Details
    Author biographies:

    Shi YI: Male, born in 1983, senior experimentalist. Research interests: deep learning, infrared image processing

    Xuesong YUAN: Male, born in 1980, professor. Research interests: terahertz imaging technology, signal and information processing

    Corresponding author:

    Shi YI, 549745481@qq.com

  • CLC number: TN911.73

  • Abstract:

    Infrared thermal imaging systems offer clear advantages for nighttime target recognition and detection, but the motion defocus blur caused by the dynamic environment of mobile platforms hampers their application. To address this problem, this paper investigates the restoration of motion-defocused infrared images based on generative adversarial networks, and proposes a multi-scale generative adversarial network for infrared images (IMdeblurGAN) that efficiently suppresses motion defocus blur while preserving the detail contrast of infrared images, thereby improving nighttime target detection and recognition on mobile platforms. Experimental results show that, compared with the best existing blurred-image restoration methods, the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) by 5%, the Structural SIMilarity (SSIM) by 4%, and the YOLO confidence score for target recognition by 6%.
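The two image-quality metrics quoted above are standard. As an illustration only (not the authors' code), a minimal numpy sketch of PSNR and of a simplified single-window SSIM:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified global SSIM. The reference metric averages the same
    expression over local 11x11 Gaussian-weighted windows; this
    whole-image version is only an approximation for illustration."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2   # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Library implementations (e.g. scikit-image's `structural_similarity`) follow the windowed definition of Wang et al. and should be preferred for reproducing published numbers.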

  • Figure 1. Defocus blur trajectory

    Figure 2. Restoration of motion-defocused infrared images with a generative adversarial network

    Figure 3. Generator network structure

    Figure 4. Discriminator network structure

    Figure 5. Comparison of restoration results

    Figure 6. Comparison of restored details

    Figure 7. YOLOv3 confidence comparison
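Deblurring networks of this kind are commonly trained on sharp/blurred pairs synthesized by convolving sharp frames with a trajectory-derived blur kernel, as suggested by the blur trajectory of Figure 1. A minimal sketch under that assumption (illustrative only; the paper's actual data pipeline is not shown in this excerpt):

```python
import numpy as np

def motion_blur_kernel(length=9, angle_deg=0.0):
    """Normalized linear motion-blur kernel along the given angle."""
    k = np.zeros((length, length))
    c = (length - 1) / 2
    t = np.deg2rad(angle_deg)
    for i in range(length):
        x = int(round(c + (i - c) * np.cos(t)))
        y = int(round(c + (i - c) * np.sin(t)))
        k[y, x] = 1.0
    return k / k.sum()

def convolve2d_same(img, kernel):
    """Direct 'same'-size 2-D convolution with zero padding (no SciPy)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]          # flip for true convolution
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Blur a sharp frame with a horizontal motion kernel:
# blurred = convolve2d_same(sharp_frame, motion_blur_kernel(9, 0.0))
```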

    Table 1. Comparison of restoration performance

    Restoration method    Avg. PSNR (dB)    Avg. SSIM
    Wiener                     21.3             0.62
    LR                         22.5             0.65
    DeblurGAN                  27.0             0.75
    SRN-DeblurNet              30.5             0.88
    IMdeblurGAN (ours)         32.0             0.92

    Table 2. Comparison of YOLOv3 confidence

    Input                          YOLOv3 score
    Original image                     0.97
    Motion-defocused image         not recognized
    Wiener inverse filtering       not recognized
    LR iterative deconvolution     not recognized
    DeblurGAN                          0.77
    SRN-DeblurNet                      0.89
    IMdeblurGAN (ours)                 0.95
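For context, the gains over the strongest baseline (SRN-DeblurNet) in Tables 1 and 2 can be worked out as follows; whether the abstract's 5%/4%/6% figures are relative gains or absolute-point differences is not stated in this excerpt, so both are printed:

```python
# Values for SRN-DeblurNet vs IMdeblurGAN, taken from Tables 1 and 2.
baseline = {"psnr_db": 30.5, "ssim": 0.88, "yolo": 0.89}
proposed = {"psnr_db": 32.0, "ssim": 0.92, "yolo": 0.95}

for key in baseline:
    abs_gain = proposed[key] - baseline[key]
    rel_gain = 100.0 * abs_gain / baseline[key]
    print(f"{key}: +{abs_gain:.2f} absolute, +{rel_gain:.1f}% relative")
```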
  • References:

    CUI Meiyu. On the application field and technical characteristics of infrared thermal imager[J]. China Security & Protection, 2014(12): 90–93. doi: 10.3969/j.issn.1673-7873.2014.12.026
    KUPYN O, BUDZAN V, MYKHAILYCH M, et al. DeblurGAN: Blind motion deblurring using conditional adversarial networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8183–8192.
    TAO Xin, GAO Hongyun, SHEN Xiaoyong, et al. Scale-recurrent network for deep image deblurring[C]. The 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8174–8182.
    HE Zewei, CAO Yanpeng, DONG Yafei, et al. Single-image-based nonuniformity correction of uncooled long-wave infrared detectors: A deep-learning approach[J]. Applied Optics, 2018, 57(18): D155–D164. doi: 10.1364/AO.57.00D155
    SHAO Baotai, TANG Xinyi, JIN Lu, et al. Single frame infrared image super-resolution algorithm based on generative adversarial nets[J]. Journal of Infrared and Millimeter Wave, 2018, 37(4): 427–432. doi: 10.11972/j.issn.1001-9014.2018.04.009
    LIU Pengfei, ZHAO Huaici, and CAO Feidao. Blind deblurring of noisy and blurry images of multi-scale convolutional neural network[J]. Infrared and Laser Engineering, 2019, 48(4): 0426001. doi: 10.3788/IRLA201948.0426001
    BOUSMALIS K, SILBERMAN N, DOHAN D, et al. Unsupervised pixel-level domain adaptation with generative adversarial networks[C]. The 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 95–104.
    LI Lingxiao, FENG Huajun, ZHAO Jufeng, et al. Adaptive and fast blind pixel correction of IRFPA[J]. Optics and Precision Engineering, 2017, 25(4): 1009–1018. doi: 10.3788/OPE.20172504.1009
    DONG Chao, LOY C, HE Kaiming, et al. Learning a deep convolutional network for image super-resolution[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 184–199.
    EFRAT N, GLASNER D, APARTSIN A, et al. Accurate blur models vs. image priors in single image super-resolution[C]. The 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 2832–2839.
    HE Anfeng, LUO Chong, TIAN Xinmei, et al. A twofold Siamese network for real-time object tracking[C]. The 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 4834–4843.
    LIN Zhouchen and SHUM H Y. Fundamental limits of reconstruction-based superresolution algorithms under local translation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 83–97. doi: 10.1109/TPAMI.2004.1261081
    YANG Yang and YANG Jingyu. Infrared pedestrian detection based on saliency segmentation[J]. Journal of Nanjing University of Science and Technology, 2013, 37(2): 251–256.
    PINNEGAR C R and MANSINHA L. Time-local spectral analysis for non-stationary time series: The S-transform for noisy signals[J]. Fluctuation and Noise Letters, 2003, 3(3): L357–L364. doi: 10.1142/S0219477503001439
    CAO Yanpeng and TISSE C L. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera[J]. Optics Letters, 2014, 39(3): 646–648. doi: 10.1364/OL.39.000646
    REAL E, SHLENS J, MAZZOCCHI S, et al. YouTube-boundingboxes: A large high-precision human-annotated data set for object detection in video[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 7464–7473.
    WU Yi, LIM J, and YANG M H. Online object tracking: A benchmark[C]. The 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 2411–2418. doi: 10.1109/CVPR.2013.312.
    WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861
    KIM J, LEE J K, and LEE K M. Accurate image super-resolution using very deep convolutional networks[C]. The 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 1646–1654. doi: 10.1109/CVPR.2016.182.
    KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386
Publication history
  • Received: 2019-07-03
  • Revised: 2020-01-22
  • Available online: 2020-03-25
  • Published in issue: 2020-07-23
