Abstract: To improve the processing of source images corrupted by noise and to enhance the contrast and structural information of the fused result, an infrared and visible image fusion algorithm based on three-scale decomposition and sparse representation is proposed. First, to strengthen noise removal while preserving the structure and edge characteristics of the source image, a rolling guidance filter decomposes each source image into a base layer and a detail layer. Second, to make full use of the detail and energy in the base component while reducing model complexity, a structure-texture decomposition model further splits the base layer into a base structure layer and a base texture layer. The three components are then fused with rules matched to their distinct characteristics. The detail component carries most of the noise, at a level that varies between images; the sparse fusion denoising parameter is therefore determined adaptively from the estimated noise level, so that fusion and denoising of the detail component are performed simultaneously and computational efficiency is improved. The base structure component contains few detail features, so it is pre-fused directly by weighted averaging based on a visual saliency map. The base texture component contains visually important information and image features, such as edges, lines and contours, that reflect the main details of the original base image, so it is pre-fused by principal component analysis. Finally, the fused image is obtained by reconstructing the detail, base structure and base texture layers.
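The pipeline described above can be sketched as follows. This is an illustrative approximation, not the published implementation: a plain Gaussian blur stands in for the rolling guidance filter and for the structure-texture decomposition model, the saliency map is a simple deviation-from-mean measure, and the MAD noise estimate is one common way to set a sparse-coding denoising tolerance; the paper's exact filters, saliency definition and adaptive rule are not given in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def three_scale_decompose(img, sigma_base=3.0, sigma_struct=6.0):
    """Split an image into detail, base-structure and base-texture layers.

    Gaussian smoothing is used here as a stand-in for the edge-preserving
    rolling guidance filter and the structure-texture model of the paper.
    """
    base = gaussian_filter(img, sigma_base)
    detail = img - base                              # carries most of the noise
    structure = gaussian_filter(base, sigma_struct)  # base structure layer
    texture = base - structure                       # base texture layer
    return detail, structure, texture

def estimate_noise_sigma(detail):
    """Robust noise-level estimate from the detail layer (MAD rule).

    sigma ~= median(|d - median(d)|) / 0.6745 for Gaussian noise; an
    estimate of this kind can drive the adaptive denoising parameter.
    """
    return np.median(np.abs(detail - np.median(detail))) / 0.6745

def saliency_weighted_fusion(a, b):
    """Pre-fuse base-structure layers by a saliency-weighted average.

    Saliency here is absolute deviation from the layer mean (hypothetical
    choice; the paper's visual saliency map may differ).
    """
    sa = np.abs(a - a.mean())
    sb = np.abs(b - b.mean())
    w = sa / (sa + sb + 1e-12)
    return w * a + (1.0 - w) * b

def pca_fusion(a, b):
    """Pre-fuse base-texture layers by principal component analysis.

    The fusion weights are the components of the leading eigenvector of
    the 2x2 covariance of the two layers, normalised to sum to one.
    """
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * a + w[1] * b

# Reconstruction: fused = fused_detail + fused_structure + fused_texture
```

Note that the three layers sum back to the source image exactly, so any information lost in the fused result comes from the fusion rules, not the decomposition itself.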
To verify the effectiveness of the proposed method, several groups of infrared and visible images were selected for experiments and compared with five recent methods: CNN, FPDE, ResNet, IFEVIP and TIF. The results were analyzed both subjectively and objectively. The experiments show that, compared with the other fusion algorithms, the proposed method handles both noise-perturbed and noise-free images, preserves the detail, brightness and structure of the source images in the fused result in either case, and effectively suppresses noise.
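One widely used objective fusion metric is the mutual information between the fused image and each source: higher values indicate that more source information survives fusion. A minimal histogram-based sketch (the bin count and estimator are illustrative choices, not the paper's exact evaluation protocol):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of mutual information (in bits) between two images.

    For fusion evaluation, this is computed between the fused image and
    each source image, then summed across the sources.
    """
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution of intensities
    px = pxy.sum(axis=1)             # marginal of x
    py = pxy.sum(axis=0)             # marginal of y
    nz = pxy > 0                     # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

As a sanity check, an image shares far more information with itself than with an unrelated noise image, so the metric correctly ranks a faithful fusion above a degraded one.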
JI Jing-yu, ZHANG Yu-hua, XING Na, WANG Chang-long, LIN Zhi-long, YAO Jiang-yi. Three-Scale Decomposition and Sparse Representation of Infrared and Visible Image Fusion. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2024, 44(05): 1425-1438.