Improved Fusion Algorithm for Infrared and Visible Images Based on
Image Enhancement and Convolutional Sparse Representation
ZHU Rong1, ZHENG Wan-bo1, 2, 3*, WANG Yao2, 3, TAN Chun-lin2, 3
1. Faculty of Science, Kunming University of Science and Technology, Kunming 650500, China
2. Faculty of Public Safety and Emergency Management, Kunming University of Science and Technology, Kunming 650093, China
3. Key Laboratory of Geological Hazard Risk Prevention and Emergency Mitigation, Emergency Management Department, Kunming University of Science and Technology, Kunming 650093, China
Abstract: Infrared and visible images have become important source images in image fusion research owing to their complementary characteristics, yet current infrared and visible image fusion methods have limited ability to preserve texture details. In this paper, histogram equalization (HE) is first applied to the registered infrared and visible images to dynamically expand their gray-value ranges. This enhancement makes the texture information more prominent and improves the contrast between the image background and the texture details. Second, a gradient-minimization filter smooths each enhanced image to obtain its background layer, and the detail layer is obtained by subtracting the background layer from the source image, thereby decomposing the infrared and visible images. Third, convolutional sparse representation (CSR) is combined with feature-similarity analysis for fusion: the two detail layers, which contain rich texture information, are fused with a CSR-based strategy, and a window-based averaging strategy is applied to the activity-level maps to reduce the sensitivity of CSR to misregistration. To address the large amount of redundant information in the background layers, a feature-similarity analysis of the two background layers is carried out and used to determine their relative weights in the fusion.
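The enhancement-and-decomposition steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: histogram equalization is written out explicitly, and a simple box filter stands in for the gradient-minimization smoother used in the paper, since the decomposition identity (source = background + detail) holds regardless of which smoother is chosen.

```python
import numpy as np

def equalize_hist(img):
    # Histogram equalization for an 8-bit grayscale image: map gray levels
    # through the normalized cumulative histogram to stretch the dynamic range.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12))
    return lut.clip(0, 255).astype(np.uint8)[img]

def decompose(img, radius=3):
    # Two-scale decomposition: smooth the enhanced image to get the background
    # layer, then subtract it from the source to get the detail layer.
    # (A box filter is used here in place of the gradient-minimization filter.)
    f = img.astype(np.float64)
    k = 2 * radius + 1
    pad = np.pad(f, radius, mode="edge")
    background = np.zeros_like(f)
    for dy in range(k):
        for dx in range(k):
            background += pad[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    background /= k * k
    detail = f - background
    return background, detail

rng = np.random.default_rng(0)
ir = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in IR image
enhanced = equalize_hist(ir)
bg, dt = decompose(enhanced)
# The decomposition is exactly invertible: background + detail == enhanced,
# which is what allows the fused layers to be recombined at the end.
assert np.allclose(bg + dt, enhanced.astype(np.float64))
```

Because the decomposition is a plain difference, reconstruction after fusion is simply the sum of the fused background and fused detail layers.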
Finally, the preliminarily fused detail and background layers are reconstructed via the inverse transform of the gradient-minimization image decomposition to obtain the final fusion result. Two image pairs from the 21 scenes of the TNO dataset, scene 1 (buildings) and scene 2 (woods), are used for subjective visual analysis. The observations show that the HE-CSR-based fusion method preserves the images' texture details better than eight typical fusion methods: CVT, DTCWT, FPDE, GTF, IFEVIP, LP, RP, and CSR. Objective evaluation over all scenes in the TNO dataset further shows that the HE-CSR fusion results achieve SF, SD, SCD, AG, EN, and CC values of 7.3166, 37.3505, 1.7041, 5.5714, 6.7563, and 0.7446, representing respective improvements of 19.54%, 21.87%, 13.11%, 31.31%, 2.17%, and 8.23%. The experimental results show that the proposed HE-CSR fusion method outperforms the other typical methods in both subjective visual analysis and objective evaluation, providing a new and more effective model and method for infrared and visible image fusion.
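Several of the objective indices above have standard, self-contained definitions. The sketch below shows common formulations of spatial frequency (SF), average gradient (AG), and entropy (EN) on a grayscale image; exact definitions can vary slightly across papers, so this should be read as one conventional choice rather than the evaluation code used here.

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2): RMS of row-wise and column-wise gray differences.
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    # AG: mean local gradient magnitude, a measure of sharpness/detail.
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img):
    # EN: Shannon entropy (bits) of the 8-bit gray-level histogram.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
fused = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in result
flat = np.full((64, 64), 128, dtype=np.uint8)
# A constant image carries no information, gradient, or spatial variation:
assert entropy(flat) == 0.0 and np.std(flat) == 0.0
assert spatial_frequency(fused) > spatial_frequency(flat)
```

SD is simply the gray-level standard deviation (`np.std`), while SCD and CC are correlation-based measures that additionally require the source images.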
ZHU Rong, ZHENG Wan-bo, WANG Yao, TAN Chun-lin. Improved Fusion Algorithm for Infrared and Visible Images Based on Image Enhancement and Convolutional Sparse Representation. Spectroscopy and Spectral Analysis, 2025, 45(02): 558-568.