Infrared and Visible Image Fusion Based on Improved Latent Low-Rank
and Unsharp Masks
FENG Zhun-ruo1, LI Yun-hong1*, CHEN Wei-zhong1, SU Xue-ping1, CHEN Jin-ni1, LI Jia-peng1, LIU Huan1, LI Shi-bo2
1. School of Electronic Information, Xi'an Polytechnic University, Xi'an 710048, China
2. School of Science, Xi'an Polytechnic University, Xi'an 710048, China
Abstract: To address the challenges of incomplete salient-information extraction and detail degradation in infrared and visible image fusion under low-light conditions, we propose an enhanced fusion algorithm that integrates Latent Low-Rank Representation (LatLRR) with an Anisotropic Diffusion-Based Unsharp Mask (ADUSM). Initially, we apply block-wise segmentation and vectorization to the infrared and visible images and input them into the LatLRR model. Through an inverse reconstruction operation, we extract low-rank components from the infrared images and obtain basic salient components from the visible images. Next, the basic salient components are processed with ADUSM and pixel-wise differencing, further decomposing them into deep salient detail components and multi-level detail features. The low-rank components are then fused using a visual-saliency-map rule, which enhances the retention and visibility of salient targets in the fused image. For the deep salient detail components, we employ local entropy maximization for fusion, establishing a maximum activity coefficient that preserves the deep salient details and improves the overall quality and visual richness of the fused image. The multi-level detail features are fused using a weighted-average strategy based on maximum spatial frequency, which adapts to the detail characteristics of the input images and enhances overall clarity and contrast. Finally, we conduct a comparative analysis of the proposed method against the Bayesian, Wavelet, LatLRR, MSVD, and MDLatLRR algorithms on the TNO and M3FD datasets. Experimental results demonstrate that our algorithm significantly outperforms traditional low-rank algorithms, achieving improvements of 31%, 2.1%, 4.4%, and 34% in average gradient, information entropy, standard deviation, and spatial frequency, respectively.
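The ADUSM stage described above pairs an edge-preserving low-pass filter (anisotropic diffusion) with the classic unsharp-mask difference. A minimal sketch, assuming the standard Perona-Malik diffusion as the smoother; the function names, parameter values, and wrap-around boundary handling below are illustrative choices, not code from the paper:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=30.0, gamma=0.15):
    """Edge-preserving smoothing by Perona-Malik anisotropic diffusion."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction coefficient: small across strong edges,
        # so edges are preserved while flat regions are smoothed
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def adusm(img, amount=1.5, **pm_kwargs):
    """Unsharp mask with an anisotropic-diffusion low-pass: the mask is the
    image minus its diffused version, scaled and added back."""
    smoothed = perona_malik(img, **pm_kwargs)
    mask = img.astype(float) - smoothed
    sharpened = np.clip(img + amount * mask, 0, 255)
    return sharpened, mask
```

The returned mask (original minus diffused image) roughly plays the role of the pixel-wise difference from which the deep salient detail components would be separated.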
Comprehensive subjective and objective evaluations indicate that the fused images produced by our method not only exhibit rich texture details and clear salient targets but also present substantial advantages over various competing methods. This study effectively addresses the issue of incomplete salient information extraction in low-light environments, exhibiting robust generalization capabilities. The integration of improved Latent Low-Rank and ADUSM filtering is demonstrated to be both effective and feasible in the realm of infrared and visible light image fusion, offering significant scientific contributions to the advancement and application of this technology.
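The four objective metrics quoted in the evaluation have standard definitions and are straightforward to reproduce. A NumPy sketch for a single grayscale image, using the common textbook formulas (not code from the paper):

```python
import numpy as np

def average_gradient(img):
    """Mean magnitude of the local intensity gradient (sharpness proxy)."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def information_entropy(img):
    """Shannon entropy of the 256-bin gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Root of summed squared row/column first differences (activity level)."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

Standard deviation is simply `np.std(img)`; higher values of all four metrics indicate sharper, more informative fused images.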
冯准若,李云红,陈伟重,苏雪平,陈锦妮,李嘉鹏,刘 欢,李仕博. 改进潜在低秩和反锐化掩模的红外与可见光图像融合[J]. 光谱学与光谱分析, 2025, 45(07): 2034-2044.
FENG Zhun-ruo, LI Yun-hong, CHEN Wei-zhong, SU Xue-ping, CHEN Jin-ni, LI Jia-peng, LIU Huan, LI Shi-bo. Infrared and Visible Image Fusion Based on Improved Latent Low-Rank
and Unsharp Masks. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2025, 45(07): 2034-2044.
[1] JI Jing-yu, ZHANG Yu-hua, XING Na, et al(冀鲸宇, 张玉华, 邢 娜, 等). Spectroscopy and Spectral Analysis(光谱学与光谱分析),2024, 44(5): 1425.
[2] Xing J, Liu Y, Zhang G. Sensors, 2024, 24: 2759.
[3] Liu J, Zhou W, Zhang Y, et al. Optics and Lasers in Engineering, 2024, 179: 108260.
[4] LI Yun-hong, CAO Bin, SU Xue-ping, et al(李云红, 曹 彬, 苏雪平, 等). Infrared and Laser Engineering(红外与激光工程), 2024, 53(10): 117.
[5] Liu X, Huo H, Li J, et al. Information Fusion, 2024, 108: 102352.
[6] Wang L, Zhao P, Chu N, et al. IEEE Sensors Journal, 2022, 22(19): 18815.
[7] Ma J, Ma Y, Li C. Information Fusion, 2019, 45: 153.
[8] Li G, Lin Y, Qu X. Information Fusion, 2021, 71: 109.
[9] Naidu V P S. Defence Science Journal, 2011, 61(5): 479.
[10] Ma J, Zhou Z, Wang B, et al. Infrared Physics & Technology, 2017, 82: 8.
[11] LI Yun-hong, LI Jia-peng, SU Xue-ping, et al(李云红, 李嘉鹏, 苏雪平, 等). Laser and Infrared(激光与红外), 2023, 53(9): 1441.
[12] Liu G, Yan S. Latent Low-Rank Representation for Subspace Segmentation and Feature Extraction. IEEE International Conference on Computer Vision, 2011: 1615.
[13] Al-Ameen Z, Al-Healy M A, Hazim R A. Journal of Soft Computing and Decision Support Systems, 2020, 7(1): 7.
[14] Wang W, Zhang J, Liu H, et al. Infrared Physics & Technology, 2023, 133: 104828.
[15] Toet A. Data in Brief, 2017, 15: 249.
[16] Liu J, Fan X, Huang Z, et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 5802 (arXiv: 2203.16220).
[17] Zhao Z, Xu S, Zhang C, et al. Signal Processing, 2020, 177: 107734.
[18] Pajares G, de la Cruz J M. Pattern Recognition, 2004, 37(9): 1855.
[19] You C Z, Palade V, Wu X J. Engineering Applications of Artificial Intelligence, 2019, 77: 117.
[20] Li H, Wu X J, Kittler J. IEEE Transactions on Image Processing, 2020, 29: 4733.