Hyperspectral Mural Image Inpainting Based on a Spatial-Spectral Enhancement Transformer
ZHANG Mian1, ZHAO Jia-yu1*, ZHOU Han2, LIAN Yu-sheng2
1. School of Intelligence Science and Technology, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
2. School of Printing and Packaging, Beijing Institute of Graphic Communication, Beijing 100026, China
Abstract: The non-destructive inpainting of murals and painted artifacts is an important topic and research hotspot in the protection and inheritance of architectural cultural heritage. Hyperspectral imaging can simultaneously acquire the two-dimensional spatial information and one-dimensional spectral information of a target, and it has become an important technical means for the digital acquisition, restoration, and analysis of cultural relics, enabling contactless, non-sampling spectral digitization and non-destructive analysis of murals and painted artifacts. Existing RGB color mural inpainting methods cannot acquire, restore, or analyze the multi-band spectral information contained in hyperspectral images. In addition, existing deep generative color mural inpainting methods based on convolutional neural networks suffer from insufficient modeling of spatial structure and spectral characteristics and a weak ability to explore and model global information, which seriously limits inpainting accuracy. To solve these problems, this paper proposes a hyperspectral mural inpainting method based on a spatial-spectral enhancement Transformer. First, the hyperspectral mural image to be inpainted is reduced in spectral dimension and converted into an RGB color image. Then, the spatial and color information of the RGB image is restored by the proposed generative adversarial network based on the spatial-spectral enhancement Transformer. The proposed inpainting network is divided into a spatial information pre-inpainting network (Spa-PIN) and a spatial-color information inpainting network (Spa-Color-IN), and effective restoration of mural images is achieved by a combined spatial-attention and spectral-attention module (SAESA). In the spatial structure reconstruction phase, the network emphasizes reconstructing the basic shapes and textures of the mural image.
In the color restoration phase, enhanced spatial and spectral attention improve the restoration quality. Finally, the proposed clustering BPNN up-samples the spectral dimension of the restored RGB image to reconstruct the target hyperspectral data cube. The attention mechanism of the proposed spatial-spectral enhancement Transformer performs spatial coordinate-convolution fusion and local-global spectral-cube attention fusion on image features, which simultaneously models the spatial-spectral correlations of the image at global and local scales and strengthens the restoration of spatial and spectral details. Experimental results on public datasets show that, compared with three state-of-the-art restoration methods, the proposed method achieves the best quantitative metrics and mural inpainting quality. It can effectively and accurately restore hyperspectral mural images, providing a new and advanced technical means for the high-precision acquisition, restoration, and analysis of architectural heritage such as murals.
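The end-to-end pipeline summarized above (spectral dimension reduction to RGB, inpainting, then clustering-based spectral up-sampling) can be sketched numerically. The following is a minimal numpy illustration of only the two spectral stages, under loudly stated assumptions: a fixed random projection matrix stands in for the paper's HSI-to-RGB conversion, and nearest-centroid clustering with per-cluster affine least squares stands in for the clustering BPNN; all shapes and names (`proj`, `K`, etc.) are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 16            # number of spectral bands (illustrative)
H, W = 8, 8       # spatial size (illustrative)

# Synthetic hyperspectral cube whose spectra lie in a low-dimensional subspace
basis = rng.random((3, B))
coeff = rng.random((H * W, 3))
cube = (coeff @ basis).reshape(H, W, B)

# 1) Spectral dimension reduction: project B bands onto 3 pseudo-RGB channels
#    (a fixed matrix stands in for the paper's learned/colorimetric mapping)
proj = rng.random((B, 3))
rgb = cube.reshape(-1, B) @ proj                       # (H*W, 3)

# 2) Cluster pixels in RGB space with a few Lloyd iterations
#    (stand-in for the clustering step preceding the BPNN)
K = 4
centroids = rgb[rng.choice(H * W, K, replace=False)]
for _ in range(10):
    labels = np.argmin(((rgb[:, None] - centroids) ** 2).sum(-1), axis=1)
    for k in range(K):
        if np.any(labels == k):
            centroids[k] = rgb[labels == k].mean(0)

# 3) Per-cluster spectral up-sampling RGB -> B bands
#    (affine least squares stands in for a per-cluster BPNN regressor)
spectra = cube.reshape(-1, B)
recon = np.empty_like(spectra)
for k in range(K):
    idx = labels == k
    if not idx.any():
        continue
    A = np.hstack([rgb[idx], np.ones((idx.sum(), 1))])  # affine design matrix
    M, *_ = np.linalg.lstsq(A, spectra[idx], rcond=None)
    recon[idx] = A @ M

rmse = np.sqrt(((recon - spectra) ** 2).mean())
```

Because the synthetic spectra are exactly linear in the pseudo-RGB values here, the per-cluster fit reconstructs the cube almost perfectly; on real mural data the clustering is what lets simple per-cluster regressors (or small BPNNs) cope with a nonlinear RGB-to-spectrum mapping.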