Application of CNN in LIF Fluorescence Spectrum Image Recognition of Mine Water Inrush
ZHOU Meng-ran1, LAI Wen-hao1*, WANG Ya1, 2, HU Feng1, LI Da-tong1, WANG Rui1
1. School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232000, China
2. School of Computer and Information Engineering, Fuyang Normal University, Fuyang 236000, China
Abstract: Rapid identification of the source of mine water inrush is of great significance for mine production safety. Existing laser-induced fluorescence (LIF) identification methods for mine water inrush require pretreatment and feature extraction of the spectral curves, which is complicated. Therefore, a method for quickly identifying the type of mine water inrush using a convolutional neural network (CNN) was proposed. According to the distribution characteristics of coal mine water and the most common types of water inrush, three kinds of raw water samples and two kinds of water mixed from the raw samples were selected as experimental materials. In the experiment, LIF technology was used to rapidly acquire 200 fluorescence spectral curves of the five kinds of water samples. After grayscale transformation, the fluorescence spectral curves were input into the CNN algorithm, with 150 spectra used as the training set and the remaining 50 as the test set. In the model test, the recognition rate of the CNN reached 100%. The experimental results show that the CNN algorithm not only eliminates the data preprocessing and feature extraction otherwise required for recognizing mine water inrush, but also quickly and effectively identifies the type of mine water inrush.
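The pipeline described in the abstract (grayscale images derived from LIF spectral curves, a CNN classifier over 5 water-sample classes, and a 150/50 train-test split) can be illustrated with a short sketch. The Python/PyTorch code below is a minimal illustration only, not the authors' implementation: the network layout, the image size, and the synthetic stand-in data are assumptions made for demonstration.

# Minimal sketch (assumed architecture, not the paper's exact network):
# grayscale images of LIF spectral curves are classified by a small CNN
# into 5 water-inrush source classes, with 150 training and 50 test samples.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 5          # 3 raw water sources + 2 mixed samples
IMG_SIZE = 64            # assumed side length of the grayscale spectrum image

class SpectrumCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (IMG_SIZE // 4) ** 2, NUM_CLASSES)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Synthetic stand-in for the 200 grayscale spectrum images (values in [0, 1]).
images = torch.rand(200, 1, IMG_SIZE, IMG_SIZE)
labels = torch.randint(0, NUM_CLASSES, (200,))
train_set = TensorDataset(images[:150], labels[:150])   # 150 training spectra
test_set = TensorDataset(images[150:], labels[150:])    # 50 test spectra

model = SpectrumCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y in DataLoader(train_set, batch_size=16, shuffle=True):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluate the recognition rate on the held-out 50 spectra.
model.eval()
with torch.no_grad():
    x, y = test_set.tensors
    accuracy = (model(x).argmax(1) == y).float().mean().item()
print(f"test recognition rate: {accuracy:.2%}")

With real data, the synthetic tensors would be replaced by the measured spectra rendered as grayscale images; the end-to-end training replaces the manual preprocessing and feature-extraction steps that the abstract notes are needed by earlier LIF identification methods.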
ZHOU Meng-ran, LAI Wen-hao, WANG Ya, HU Feng, LI Da-tong, WANG Rui. Application of CNN in LIF Fluorescence Spectrum Image Recognition of Mine Water Inrush. Spectroscopy and Spectral Analysis, 2018, 38(07): 2262-2266.