Quality Evaluation of Night Vision Anti-halation Fusion Image Based on Adaptive Partition
doi: 10.11999/JEIT190453    cstr: 32379.14.JEIT190453
School of Electronic Information Engineering, Xi'an Technological University, Xi'an 710021, China
-
Abstract:
In night vision halation scenes, high-brightness halation information causes existing evaluation methods for infrared and visible fusion images to fail. To address this problem, a fusion image quality evaluation method based on adaptive partition is proposed. The method automatically determines an adaptive coefficient according to the halation degree of the visible image and iteratively computes the critical halation gray value of the visible gray image, so that the fusion image is automatically divided into multiple halation regions and a non-halation region. In the halation regions, a purpose-designed halation elimination index evaluates how effectively the fusion image removes halation; in the non-halation region, the enhancement of detail information such as texture and color is evaluated from three aspects: the characteristics of the fusion image itself, the degree to which original image information is retained, and the human visual effect. Based on the evaluation and analysis of fusion images produced by four different anti-halation algorithms, nine objective indexes are selected to construct a quality evaluation system for night vision anti-halation fusion images. Experimental results in different night vision halation scenes show that the proposed method evaluates the quality of infrared and visible anti-halation fusion images comprehensively and reasonably, resolves the problem that the more thoroughly halation is eliminated, the worse the objective evaluation results become, and is also suitable for judging the merits of different anti-halation fusion algorithms.
-
Key words:
- Fusion image
- Image quality evaluation
- Night vision anti-halation
- Adaptive partition
-
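The partition step described in the abstract, iteratively computing a critical halation gray value from the visible gray image and thresholding with it, can be illustrated with a minimal sketch. The adaptive coefficient k, the update rule, and the stopping tolerance below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def critical_halation_gray(vis_gray, k=0.5, tol=0.5, max_iter=100):
    """Iteratively estimate a critical gray value separating halation pixels
    from the rest of a visible-light gray image.

    Illustrative sketch only: the coefficient `k`, the update rule, and the
    tolerance are assumptions, not the paper's exact method.
    """
    t = vis_gray.mean()                      # initial threshold: global mean
    for _ in range(max_iter):
        bright = vis_gray[vis_gray > t]      # candidate halation pixels
        dark = vis_gray[vis_gray <= t]       # candidate non-halation pixels
        if bright.size == 0 or dark.size == 0:
            break
        # weighted midpoint of the two class means; k stands in for the
        # adaptive coefficient tied to the halation degree
        t_new = k * bright.mean() + (1.0 - k) * dark.mean()
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

def partition_halation(vis_gray):
    """Return a boolean mask marking halation regions of the image."""
    t = critical_halation_gray(vis_gray)
    return vis_gray > t

# usage: mask = partition_halation(gray.astype(np.float64)), where `gray`
# is the visible image converted to a single-channel gray array
```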
Table 1  Goodness of fit of the curves
Curve         SSE      RMSE     R²
Baseline      0.0481   0.0310   0.9487
Upper bound   0.0293   0.0318   0.9566
Lower bound   0.0304   0.0313   0.9559
Optimal       0.0042   0.0174   0.9910
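Table 1 reports SSE, RMSE, and R² for the fitted curves. For reference, these goodness-of-fit statistics follow their usual definitions; a plain n-denominator RMSE is assumed here, whereas the paper may apply a degrees-of-freedom correction.

```python
import numpy as np

def goodness_of_fit(y, y_hat):
    """Standard goodness-of-fit statistics for a fitted curve."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    residuals = y - y_hat
    sse = np.sum(residuals ** 2)                # sum of squared errors
    rmse = np.sqrt(np.mean(residuals ** 2))     # root-mean-square error
    sst = np.sum((y - y.mean()) ** 2)           # total sum of squares
    r2 = 1.0 - sse / sst                        # coefficient of determination
    return sse, rmse, r2
```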
Table 3  No-reference objective evaluation indexes
Fused image, non-halation region:
Algorithm               μ        σ        E       AG      EI      SF
IHS                     55.5114  23.7201  4.7715  1.6149  1.3009   9.1389
Curvelet                66.4604  28.0101  4.7845  3.5071  4.1962  14.5914
IHS-Curvelet            72.6815  30.0118  5.4760  3.7987  7.3702  17.1324
Improved IHS-Curvelet   94.8522  30.7021  6.0882  4.3367  7.3808  19.3482
Unpartitioned fused image:
Algorithm               μ         σ        E       AG      EI       SF
IHS                     102.1417  25.9308  5.7862  3.9899  14.0576  10.1062
Curvelet                105.3180  38.7324  6.8762  6.8807  18.7757  21.1077
IHS-Curvelet            106.8972  39.4403  7.0812  7.7245  20.8070  22.9812
Improved IHS-Curvelet   104.9308  38.4334  6.6463  6.3063  15.0324  19.3287
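The no-reference indexes in Table 3 are the mean (μ), standard deviation (σ), information entropy (E), average gradient (AG), edge intensity (EI), and spatial frequency (SF) of the fused image. The sketch below gives their common definitions for a gray image; gradient operators and normalization details may differ from the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def no_reference_metrics(img):
    """Common no-reference quality indexes for a gray image (float array)."""
    img = np.asarray(img, float)
    mu, sigma = img.mean(), img.std()

    # information entropy over 256 gray-level bins
    hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))

    # average gradient from horizontal/vertical first differences
    gx, gy = np.diff(img, axis=1)[:-1, :], np.diff(img, axis=0)[:, :-1]
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    # edge intensity from the Sobel gradient magnitude
    sx = ndimage.sobel(img, axis=1)
    sy = ndimage.sobel(img, axis=0)
    ei = np.mean(np.sqrt(sx ** 2 + sy ** 2))

    # spatial frequency from row and column frequencies
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    sf = np.sqrt(rf ** 2 + cf ** 2)

    return mu, sigma, entropy, ag, ei, sf
```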
Table 4  Full-reference objective evaluation indexes
Algorithm               CE_FU-VI  MI_FU-VI  RMSE_FU-VI  PSNR_FU-VI  CE_FU-IR  MI_FU-IR  RMSE_FU-IR  PSNR_FU-IR
IHS                     0.9961    1.1810    30.9468     58.7218     0.9831    1.0933    30.8392     60.3349
Curvelet                0.4655    1.9853    27.7219     63.7648     0.6850    1.6314    29.7961     65.5507
IHS-Curvelet            0.3018    2.5135    26.8683     64.9261     0.5247    3.1821    25.9617     67.8470
Improved IHS-Curvelet   0.2051    3.0012    23.7003     65.9410     0.3289    4.8819    24.9118     68.2431
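Table 4 lists full-reference indexes computed between the fused image (FU) and each source image (VI: visible, IR: infrared). The sketch below gives the conventional definitions of RMSE, PSNR, and mutual information (MI); the cross-entropy index (CE) and the paper's exact formulations are not reproduced here and may differ.

```python
import numpy as np

def rmse_psnr(fused, ref, peak=255.0):
    """Root-mean-square error and peak signal-to-noise ratio."""
    err = np.asarray(fused, float) - np.asarray(ref, float)
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return rmse, psnr

def mutual_information(fused, ref, bins=256):
    """Mutual information estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(np.ravel(fused), np.ravel(ref),
                                 bins=bins, range=[[0, 255], [0, 255]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz]))
```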
Table 5  Objective evaluation indexes of the visual system
Algorithm               SSIM_FU-VI  SSIM_FU-IR  Q^AB/F
IHS                     0.5792      0.6004      0.3361
Curvelet                0.6632      0.7443      0.4048
IHS-Curvelet            0.6732      0.7516      0.4539
Improved IHS-Curvelet   0.6761      0.7611      0.5740
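Table 5 reports the structural similarity (SSIM) between the fused image and each source image, together with the edge-preservation index Q^AB/F. As a reference point only, SSIM for 8-bit-range gray images can be computed with scikit-image; Q^AB/F (Xydeas-Petrović) requires a dedicated implementation and is not sketched here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_to_sources(fused, visible, infrared):
    """SSIM between a fused gray image and each source gray image,
    assuming 8-bit-range (0-255) intensities."""
    fused = np.asarray(fused, float)
    ssim_vi = structural_similarity(fused, np.asarray(visible, float),
                                    data_range=255)
    ssim_ir = structural_similarity(fused, np.asarray(infrared, float),
                                    data_range=255)
    return ssim_vi, ssim_ir
```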
-