Improved Fusion Algorithm for Infrared and Visible Images Based on Image Enhancement and Convolutional Sparse Representation
ZHU Rong1, ZHENG Wan-bo1, 2, 3*, WANG Yao2, 3, TAN Chun-lin2, 3
1. Faculty of Science, Kunming University of Science and Technology, Kunming 650500, China
2. Faculty of Public Safety and Emergency Management, Kunming University of Science and Technology, Kunming 650093, China
3. Key Laboratory of Geological Hazard Risk Prevention and Emergency Mitigation, Emergency Management Department, Kunming University of Science and Technology, Kunming 650093, China
Abstract Because of their complementary characteristics, infrared and visible images have become important source images in image fusion research; however, existing infrared and visible image fusion methods have only a limited ability to preserve texture detail. In this paper, histogram equalization (HE) is first applied to the registered infrared and visible images to dynamically expand their gray-value range. This enhancement makes texture information more prominent and improves the contrast between the image background and the texture details. Second, a gradient-minimization filter smooths each enhanced image to obtain its background layer, and subtracting the background layer from the source image yields the detail layer, thereby decomposing the infrared and visible images. Third, convolutional sparse representation (CSR) is combined with feature-similarity analysis for fusion. The two detail layers, which carry rich texture information, are fused with a CSR-based strategy; to reduce the CSR method's sensitivity to misregistration, the activity level maps are processed with a window-based averaging strategy. Because the background layers contain a large amount of redundant information, a feature-similarity analysis of the two background layers determines their relative importance in the fusion.
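The enhancement and two-scale decomposition described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: a box filter stands in for the gradient-minimization filter, and all function names are hypothetical.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization: spread the gray-level range of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12))
    return lut.astype(np.uint8)[img]

def box_smooth(img, k=5):
    """Box-filter smoothing; a stand-in for the gradient-minimization filter."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Enhance, smooth to get the background layer, subtract to get the detail layer."""
    enhanced = hist_equalize(img)
    background = box_smooth(enhanced)
    detail = enhanced.astype(np.float64) - background
    return background, detail
```

By construction, background + detail reproduces the enhanced image exactly, which is what makes the later inverse transform a simple sum of the fused layers.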
Finally, the preliminarily fused detail and background layers are reconstructed through the inverse transform of the gradient-minimization image decomposition, yielding the final fused infrared and visible image. Two image groups from the 21 scenes of the TNO dataset, scene 1 (buildings) and scene 2 (woods), are used for subjective visual analysis. The results show that the HE-CSR fusion method preserves the image's texture details better than eight typical fusion methods: CVT, DTCWT, FPDE, GTF, IFEVIP, LP, RP, and CSR. Objective evaluation over all scenes in the TNO dataset is also conducted. The HE-CSR fusion results achieve SF, SD, SCD, AG, EN, and CC values of 7.316 6, 37.350 5, 1.704 1, 5.571 4, 6.756 3, and 0.744 6, improvements of 19.54%, 21.87%, 13.11%, 31.31%, 2.17%, and 8.23%, respectively. The experimental results show that the proposed HE-CSR fusion method outperforms the other typical methods in both subjective visual analysis and objective index evaluation, providing a new and more effective model and method for infrared and visible image fusion.
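The fusion rules and reconstruction step can likewise be sketched. This is a hedged simplification: the absolute detail value stands in for the l1-norm of CSR coefficient maps, and a correlation score stands in for the paper's feature-similarity analysis; all names are hypothetical.

```python
import numpy as np

def activity_map(detail, win=3):
    """Window-averaged activity level (the mismatch-robust averaging step);
    abs() stands in for the l1-norm of CSR coefficient maps."""
    act = np.abs(detail)
    pad = win // 2
    p = np.pad(act, pad, mode="edge")
    out = np.zeros(act.shape, dtype=np.float64)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + act.shape[0], dx:dx + act.shape[1]]
    return out / (win * win)

def fuse_details(d_ir, d_vis, win=3):
    """Choose-max rule on the window-averaged activity maps."""
    a_ir, a_vis = activity_map(d_ir, win), activity_map(d_vis, win)
    return np.where(a_ir >= a_vis, d_ir, d_vis)

def fuse_backgrounds(b_ir, b_vis):
    """Weight each background layer by its correlation with the mean background
    (a stand-in for the paper's feature-similarity weighting)."""
    ref = 0.5 * (b_ir + b_vis)
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))
    w_ir, w_vis = corr(b_ir, ref), corr(b_vis, ref)
    return (w_ir * b_ir + w_vis * b_vis) / (w_ir + w_vis + 1e-12)

def reconstruct(b_fused, d_fused):
    """Inverse of the two-scale decomposition: sum of the fused layers."""
    return np.clip(b_fused + d_fused, 0, 255)
```

Averaging the activity maps over a small window before the choose-max comparison is what makes the selection tolerant of slight misregistration between the two source images.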
Received: 2023-10-30
Accepted: 2024-05-30
Corresponding Authors:
ZHENG Wan-bo
E-mail: zwanbo2001@163.com
[1] ZHOU Pei-pei, HOU Xing-lin(周培培, 侯幸林). Journal of System Simulation(系统仿真学报), 2022, 34(6): 1267.
[2] LI Zi-tong, ZHAO Jian-kang, XU Jing-ran, et al(李紫桐, 赵健康, 徐静冉, 等). Acta Photonica Sinica(光子学报), 2023, 52(11): 1.
[3] YANG Chang-chun, YE Zan-ting, LIU Ban-teng, et al(杨长春, 叶赞挺, 刘半藤, 等). Journal of Zhejiang University (Engineering Science)[浙江大学学报(工学版)], 2023, 57(2): 226.
[4] LIU Gang, XU Lin-feng(刘 刚, 许林峰). Control and Decision(控制与决策), 2010, 25(4): 623.
[5] Liu J, Fan X, Jiang J, et al. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105.
[6] Zhou Z, Wang B, Li S, et al. Information Fusion, 2016, 30: 15.
[7] Ma J, Zhou Z, Wang B, et al. Infrared Physics & Technology, 2017, 82: 8.
[8] Li J, Huo H, Li C, et al. IEEE Transactions on Multimedia, 2021, 23: 1383.
[9] Ma J, Zhang H, Shao Z, et al. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1.
[10] ZHANG Zhou-yu, CAO Yun-feng, DING Meng, et al(张洲宇, 曹云峰, 丁 萌, 等). Journal of Harbin Institute of Technology(哈尔滨工业大学学报), 2021, 53(12): 51.
[11] YANG Yan-chun, LI Xiao-miao, DANG Jian-wu, et al(杨艳春, 李小苗, 党建武, 等). Journal of Beijing University of Aeronautics and Astronautics(北京航空航天大学学报), 2023, 49(9): 2317.
[12] XU Shao-ping, CHEN Xiao-jun, LUO Jie, et al(徐少平, 陈晓军, 罗 洁, 等). Pattern Recognition and Artificial Intelligence(模式识别与人工智能), 2022, 35(12): 1089.
[13] CHEN Yong, ZHANG Jiao-jiao, WANG Zhen(陈 永, 张娇娇, 王 镇). Optics and Precision Engineering(光学精密工程), 2022, 30(18): 2253.
[14] Li G, Lin Y, Qu X. Information Fusion, 2021, 71: 109.
[15] MIN Li, TIAN Lin-lin, ZHAO Huai-ci, et al(闵 莉, 田林林, 赵怀慈, 等). Control and Decision(控制与决策), 2024, 39(1): 227.
[16] FENG Xin, FANG Chao, GONG Hai-feng, et al(冯 鑫, 方 超, 龚海峰, 等). Spectroscopy and Spectral Analysis(光谱学与光谱分析), 2023, 43(2): 590.
[17] LI Yan-feng, LIU Ming-yang, HU Jia-ming, et al(李延风, 刘名扬, 胡嘉明, 等). Journal of Jilin University (Engineering and Technology Edition)[吉林大学学报(工学版)], 2024, 54(6): 1777.
[18] YANG Pei, GAO Lei-fu, ZI Ling-ling(杨 培, 高雷阜, 訾玲玲). Journal of Image and Graphics(中国图象图形学报), 2021, 26(10): 2433.
[19] PANG Zhong-xiang, LIU Gui-hua, CHEN Chun-mei, et al(庞忠祥, 刘桂华, 陈春梅, 等). Control and Decision(控制与决策), 2024, 39(3): 910.
[20] Nencini F, Garzelli A, Baronti S, et al. Information Fusion, 2007, 8(2): 143.
[21] Lewis J J, O'Callaghan R J, Nikolov S G, et al. Information Fusion, 2007, 8(2): 119.
[22] GAO Xue-qin, LIU Gang, XIAO Gang, et al(高雪琴, 刘 刚, 肖 刚, 等). Acta Automatica Sinica(自动化学报), 2020, 46(4): 796.
[23] Yu R, Chen W, Zhou D. IEEE Access, 2020, 8: 50091.
[24] Zhang Y, Zhang L, Bai X, et al. Infrared Physics & Technology, 2017, 83: 227.
[25] Ma J, Ma Y, Li C. Information Fusion, 2018, 45: 153.
[26] Toet A. Pattern Recognition Letters, 1989, 9(4): 245.
[27] Liu Y, Chen X, Ward R K, et al. IEEE Signal Processing Letters, 2016, 23(12): 1882.
[28] HE Zhi-bo, ZENG Xiang-jin, DENG Chen, et al(何智博, 曾祥进, 邓 晨, 等). Infrared Technology(红外技术), 2023, 45(6): 598.
[29] Ren Y, Li Z, Xu C. Mathematics, 2023, 11(17): 3689.
[30] Toet A. Data in Brief, 2017, 15: 249.
[31] ZHU Wen-qing, TANG Xin-yi, ZHANG Rui, et al(朱雯青, 汤心溢, 张 瑞, 等). Journal of Infrared and Millimeter Waves(红外与毫米波学报), 2021, 40(5): 696.
[32] TANG Lin-feng, ZHANG Hao, XU Han, et al(唐霖峰, 张 浩, 徐 涵, 等). Journal of Image and Graphics(中国图象图形学报), 2023, 28(1): 3.