Abstract: The extraction and identification of bloodstains left at crime scenes provide an important basis for case investigation, but their rapid, non-destructive development and examination remain a research focus in forensic science. To improve both the efficiency of bloodstain development and the accuracy of detection, hyperspectral imaging has gradually been applied to non-destructive bloodstain identification. However, existing hyperspectral approaches suffer from limited recognition accuracy and efficiency when distinguishing bloodstains from blood-like substances, particularly on complex trace-bearing substrates. Therefore, this paper proposes a hyperspectral bloodstain identification model that integrates the SENet channel attention mechanism with a one-dimensional residual network (ResNet18-1D), aiming to improve both the accuracy and the efficiency of bloodstain recognition. The SENet channel attention mechanism learns an importance weight for each feature channel, enhancing informative features and suppressing irrelevant ones. In view of the complexity of trace-bearing substrates, the traditional SENet module is improved with a dual-branch bottleneck structure to broaden the applicability of the model. To reflect the complex and dynamic nature of forensic practice, two sets of experiments were conducted on a public blood-detection hyperspectral dataset containing multiple substrate materials. (1) Transductive classification scenario: the training and test sets were derived from the same hyperspectral image, and the experiment focused on how the substrate interferes with the spectral features of bloodstains. The model achieved an overall accuracy (OA) of 96.8% and an average accuracy (AA) of 97.6% in the complex simulated scene, improvements of 1.3% and 1.9%, respectively, over the state-of-the-art Hybrid CNN model. (2) Inductive classification scenario: the model trained on the baseline scene was transferred directly to a different image for testing. This experiment focused on the preliminary identification of bloodstains and blood-like substances, which is more challenging but better reflects real-world application needs. The model reached an OA of 63.3% and an AA of 65%, which were 2.2% and 1.6% higher, respectively, than the best-performing RNN model. Error-source analysis revealed that tomato juice was the main source of interference, because its absorption peak near 470 nm resembles the characteristic absorption peak of blood at 415 nm. In addition to comparing different algorithms, ablation experiments were conducted to verify the effect of the SENet channel attention module on model performance; the improved module raised both the overall and the average accuracy relative to the original SENet module in both classification scenarios. Meanwhile, the efficiency test showed that, although the model has a large number of parameters, the combined design of the residual structure and the dual-branch SENet markedly reduces the computational cost: training takes only 45 ms·epoch⁻¹, meeting the efficiency requirements of practical casework.
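As a concrete illustration of the channel-attention idea described above, the following is a minimal PyTorch sketch of a standard squeeze-and-excitation (SE) block applied to one-dimensional spectral features inside a basic residual block. The class names, layer sizes, and reduction ratio are illustrative assumptions; the paper's improved dual-branch SENet variant is not detailed in the abstract and is therefore not reproduced here.

```python
# Minimal sketch (assumed PyTorch implementation): SE channel attention on 1-D
# spectral features inside a basic residual block, following Hu et al.'s SENet
# recipe (global pooling -> two FC layers -> sigmoid gating). Hypothetical sizes.
import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)      # global average pooling over the band axis
        self.excite = nn.Sequential(                # learn per-channel importance weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, bands)
        b, c, _ = x.shape
        w = self.squeeze(x).view(b, c)              # squeeze to (batch, channels)
        w = self.excite(w).view(b, c, 1)            # excitation: weights in (0, 1)
        return x * w                                # reweight feature channels

class SEResidualBlock1D(nn.Module):
    """Basic 1-D residual block with SE attention applied before the skip addition."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(channels)
        self.se = SEBlock1D(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.se(out)                          # channel attention
        return self.relu(out + x)                   # residual connection

# Usage example: a batch of 8 spectra with 64 feature channels and 128 spectral bands.
if __name__ == "__main__":
    block = SEResidualBlock1D(channels=64)
    spectra = torch.randn(8, 64, 128)
    print(block(spectra).shape)                     # torch.Size([8, 64, 128])
```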
[1] LIU Kang-kang, LUO Ya-ping(刘康康,罗亚平). Laser & Optoelectronics Progress(激光与光电子学进展), 2024, 61(4): 0400005.
[2] GAO Yi, HUANG Tao, HAO Jing-ru, et al(高 毅,黄 涛,郝静如,等). Journal of Forensic Medicine(法医学杂志), 2022, 38(5): 640.
[3] REN Hui-hui, ZHANG Xin-yu, CHENG Fan-fan, et al(任慧慧,张馨予,程凡凡,等). Laser & Optoelectronics Progress(激光与光电子学进展), 2025, 62(13): 190.
[4] Romaszewski M, Głomb P, Sochan A, et al. Forensic Science International, 2021, 320: 110701.
[5] Zhao Y, Hu N, Wang Y, et al. Cluster Computing, 2019, 22(54): 8453.
[6] Książek K, Romaszewski M, Głomb P, et al. Sensors, 2020, 20(22): 6666.
[7] Butt M H F, Ayaz H, Ahmad M, et al. A Fast and Compact Hybrid CNN for Hyperspectral Imaging-Based Bloodstain Classification. 2022 IEEE Congress on Evolutionary Computation (CEC), 2022.
[8] Hu J, Shen L, Albanie S, et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011.
[9] WANG Yan, WANG Zhen-yu(王 燕,王振宇). Journal of Lanzhou University of Technology(兰州理工大学学报), 2024, 50(2): 87.
[10] He K, Zhang X, Ren S, et al. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770.
[11] WANG Ya-dong, JIA Jun-wei, TAN Wei-jun, et al(王亚栋,贾俊伟,谭韦君,等). Journal of Instrumental Analysis(分析测试学报), 2024, 43(4): 607.
[12] Hu W, Huang Y, Wei L, et al. Journal of Sensors, 2015, 2015: 258619.
[13] Lee H, Kwon H. IEEE Transactions on Image Processing, 2017, 26(10): 4843.
[14] Hamida A B, Benoit A, Lambert P, et al. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(8): 4420.
[15] Mou L, Ghamisi P, Zhu X X. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(7): 3639.
[16] YANG Xue-fan, ZHANG Wei, QIU Kai, et al(杨雪凡,张 维,仇 凯,等). Science and Technology of Food Industry(食品工业科技), 2021, 42(13): 284.