Reconstruction of Chinese Paintings Based on Hyperspectral Image Fusion Using Res-CAE Deep Learning
ZHU Shi-hao1, FENG Jie1*, LI Xin-ting1, SUN Li-cun1, LIU Jie2, YUAN Ping1, YANG Ren-xiang1, DENG Hong-yang1
1. School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
2. Yunnan Museum New Branch, Kunming 650214, China
Abstract Traditional color reproduction methods often involve complex preprocessing steps and depend on the subjective selection of spectral features. Moreover, using spectral reflectance data alone neglects spatial information, limiting reconstruction to isolated color points rather than full scenes. To overcome these limitations, this study proposes a deep learning-based method using a Residual Convolutional Autoencoder (Res-CAE) to jointly extract and reconstruct spatial and spectral features from hyperspectral data cubes. The Res-CAE model was trained on the CAVE hyperspectral dataset and evaluated across five testing scenarios: a standard X-Rite 24-color chart, a custom Chinese painting color chart, in-training and out-of-training random scenes, and a real Chinese painting scene captured under CIE standard observer conditions. Evaluation metrics included the color difference (ΔE00), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Experimental results show that Res-CAE outperforms traditional methods, such as bilinear interpolation and principal component analysis (PCA), in both color fidelity and image quality. On the 24-color chart, the model achieved an average ΔE00 of 0.694 5, an RMSE of 0.009 2, a PSNR of 35.92, and an SSIM of 0.995 6. These results validate the effectiveness of Res-CAE for high-fidelity color reconstruction from hyperspectral data, offering practical value for the digital preservation of traditional Chinese paintings.
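The image-quality metrics named above (RMSE, PSNR, SSIM) can be sketched as follows. This is an illustrative numpy implementation, not the authors' code: images are assumed to be arrays scaled to [0, 1], ΔE00 requires a full CIEDE2000 implementation and is omitted, and the SSIM shown here is the simplified single-window (global) form rather than the standard locally windowed average.

```python
import numpy as np

def rmse(ref, rec):
    # Root mean square error between reference and reconstructed images.
    return float(np.sqrt(np.mean((ref - rec) ** 2)))

def psnr(ref, rec, peak=1.0):
    # Peak signal-to-noise ratio in dB, assuming pixel values in [0, peak].
    return float(20.0 * np.log10(peak / rmse(ref, rec)))

def ssim_global(ref, rec, peak=1.0):
    # Simplified SSIM computed over one global window; the standard metric
    # averages this quantity over local (e.g. 11x11 Gaussian) windows.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = ref.mean(), rec.mean()
    vx, vy = ref.var(), rec.var()
    cov = ((ref - mx) * (rec - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For a reconstruction that is uniformly off by 0.1 on a [0, 1] scale, RMSE is 0.1 and PSNR is 20 dB; a perfect reconstruction gives a global SSIM of 1.0.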
Received: 2024-11-25
Accepted: 2025-05-29
Corresponding Authors:
FENG Jie
E-mail: fengjie_ynnu@126.com