Abstract: Bloodstains are key litigation evidence for exposing and confirming crimes. However, owing to the complexity of crime scene environments and the diversity of objects present, it is challenging to detect bloodstains quickly and accurately and to distinguish them from their analogues during scene investigation. To address this, this paper proposes Hybrid-PSA, an attention-based hyperspectral visual classification model for bloodstain identification, which enables the visual classification of bloodstains and their analogues, such as blood, ketchup, artificial blood, and acrylic paint. The Hybrid-PSA model is designed hierarchically, with a core consisting of a set of three 3D convolutional layers, a polarized self-attention (PSA) module, and a 2D convolutional layer. The PSA module is embedded through a residual connection and employs a dual-branch structure for feature refinement: the channel branch retains 1/2 of the spectral bands and compresses the spatial dimensions to 1×1 to focus on correlations between spectral bands, while the spatial branch maintains the original resolution and compresses the number of channels to 1 to model spatial features. This dual-polarization mechanism strikes a balance between spectral integrity and spatial information modeling, enabling the model to focus accurately on key feature regions while improving its feature-capture capability and computational efficiency with only a small increase in parameters. To validate the performance of Hybrid-PSA, ablation experiments on the 3D convolution module and the attention module were conducted with a limited number of training samples, and the model was compared with 3D-CNN and Hybrid-SN on the publicly available bloodstain hyperspectral dataset HyperBlood. The experimental results show that Hybrid-PSA raises the overall accuracy from 96% to 99.08%, an improvement of 3.08%, with only a 0.02% increase in the number of parameters. In terms of visual recognition, 3D-CNN and Hybrid-SN exhibit high misidentification rates and struggle to classify blood and blood-like traces accurately on red and black backgrounds. The spatial-spectral fusion strategy and residual connection of Hybrid-PSA allow the polarized self-attention mechanism to better adapt to the shape of trace targets and to capture trace boundaries accurately in each iteration, avoiding misclassification at boundaries caused by color similarity; its visualization results are significantly better than those of the other two models. The attention-based bloodstain classification model proposed in this paper offers high classification accuracy, excellent visualization, and strong generalization ability, and can quickly, accurately, and non-destructively discover and identify bloodstains in complex investigation environments.
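To illustrate the dual-branch polarized self-attention described above, the following is a minimal PyTorch sketch of a PSA block with a residual connection, assuming a standard PSA formulation (channel branch: spatial collapsed to 1×1 while keeping C/2 bands; spatial branch: channels collapsed to 1 at full resolution). Layer names, shapes, and the parallel fusion of the two branches are illustrative assumptions, not the authors' released code.

```python
# Hypothetical PSA block sketch (not the authors' implementation).
import torch
import torch.nn as nn


class PolarizedSelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        mid = channels // 2  # channel branch keeps 1/2 of the spectral bands

        # --- channel-only branch: spatial dims compressed to 1x1 ---
        self.ch_wq = nn.Conv2d(channels, 1, kernel_size=1)    # query: C -> 1
        self.ch_wv = nn.Conv2d(channels, mid, kernel_size=1)  # value: C -> C/2
        self.ch_wz = nn.Conv2d(mid, channels, kernel_size=1)  # restore: C/2 -> C
        self.ch_norm = nn.LayerNorm(channels)

        # --- spatial-only branch: channel dim compressed to 1 ---
        self.sp_wq = nn.Conv2d(channels, mid, kernel_size=1)
        self.sp_wv = nn.Conv2d(channels, mid, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

        self.softmax = nn.Softmax(dim=-1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape

        # Channel branch: attention over spectral bands, spatial -> 1x1.
        q = self.softmax(self.ch_wq(x).reshape(b, 1, h * w))        # (B, 1, HW)
        v = self.ch_wv(x).reshape(b, c // 2, h * w)                 # (B, C/2, HW)
        z = torch.matmul(v, q.transpose(1, 2)).unsqueeze(-1)        # (B, C/2, 1, 1)
        ch_weight = self.sigmoid(
            self.ch_norm(self.ch_wz(z).reshape(b, c)).reshape(b, c, 1, 1))
        ch_out = x * ch_weight

        # Spatial branch: attention over pixels, channels -> 1.
        q = self.softmax(self.pool(self.sp_wq(x)).reshape(b, 1, c // 2))   # (B, 1, C/2)
        v = self.sp_wv(x).reshape(b, c // 2, h * w)                        # (B, C/2, HW)
        sp_weight = self.sigmoid(torch.matmul(q, v).reshape(b, 1, h, w))   # (B, 1, H, W)
        sp_out = x * sp_weight

        # Parallel fusion of the two polarized branches plus the residual input
        # (the fusion order is an assumption of this sketch).
        return x + ch_out + sp_out


# Usage on a feature map produced by the preceding 3D convolution stage
# (hypothetical shape: batch of 8, 64 channels, 11x11 spatial patch).
feat = torch.randn(8, 64, 11, 11)
psa = PolarizedSelfAttention(64)
print(psa(feat).shape)  # torch.Size([8, 64, 11, 11])
```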
胡伟成,李云鹏,王宏炜,代雪晶,王华朋. 基于注意力机制的血迹及其类似物高光谱图像可视化识别方法[J]. 光谱学与光谱分析, 2025, 45(09): 2625-2631.
HU Wei-cheng, LI Yun-peng, WANG Hong-wei, DAI Xue-jing, WANG Hua-peng. Hyperspectral Image Visualization and Recognition of Bloodstains and Their Analogues Based on Attention Mechanism. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2025, 45(09): 2625-2631.