Study on Prediction Models for Leaf Area Index of Multiple Crops Based on Multi-Source Information and Deep Learning
HAO Zi-yuan1, YANG Wei1*, LI Hao1, YU Hao1, LI Min-zan1, 2
1. Key Lab of Smart Agriculture System Integration, Ministry of Education, China Agricultural University, Beijing 100083, China
2. Key Lab of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100083, China
Abstract: Leaf area index (LAI) is an important parameter for evaluating crop growth, and its rapid, accurate, and low-cost acquisition is of great significance for guiding crop field management. To achieve low-cost LAI acquisition for multiple crops, general LAI prediction models were built based on multi-source information and deep learning. Field experiments were carried out over six growth periods of soybean, wheat, peanut, and maize to obtain multi-source information for modeling. In addition, relevant one-dimensional data were collected, including UAV flight attitude angles, image capture height, crop growth state, and environmental illumination. Exploiting the strong image- and data-processing ability of deep learning, accurate LAI prediction models were built from this complex input information. Because the one-dimensional data also participated in model training, a combined network architecture was adopted in the design of the models. After image depth features were extracted by a convolutional neural network (CNN), the LightGBM (Light Gradient Boosting Machine) algorithm was added to produce the final LAI prediction by combining the image features with the one-dimensional data. Four common network structures, VGG19, ResNet50, Inception V3, and DenseNet201, were used in the four CNN models. To better illustrate the ability of the CNN models to extract image features, the crop classification results of the four models under different image inputs were analyzed. The results showed that the classification accuracy of the four models was higher with multispectral-image inputs than with visible-image inputs alone. The classification accuracy of the models based on Inception V3 and DenseNet201 exceeded 99%, which demonstrated the validity of the CNN models in extracting multispectral image features.
When the image features alone were taken as inputs to the LightGBM model to predict LAI, the maximum R2 between the measured and predicted LAI values was 0.8192. After the one-dimensional data were added to the inputs, the R2 of the models reached more than 0.9, indicating that multi-source information inputs play an important role in improving the accuracy of the LAI prediction models. The models established in this study can predict LAI for multiple crops without complex processing of multispectral images. Therefore, this study achieves low-cost, rapid, and accurate LAI prediction.
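The combined architecture described above can be sketched in outline: a CNN backbone reduces each multispectral image to a deep feature vector, that vector is concatenated with the one-dimensional data (flight attitude angles, capture height, growth stage, illumination), and the fused vector is fed to a gradient-boosted regressor, with R2 as the evaluation criterion. The following numpy-only sketch illustrates the data flow; the feature-map shape, the auxiliary values, and the LAI numbers are all placeholders, not values from the paper, and a random array stands in for the CNN output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the CNN branch: a hypothetical (H, W, C) final feature map
# (e.g. from a DenseNet201-style backbone); global average pooling reduces
# it to one feature vector per image.
feature_map = rng.random((7, 7, 1920))
image_features = feature_map.mean(axis=(0, 1))   # shape (1920,)

# One-dimensional auxiliary data per the abstract: roll, pitch, yaw,
# capture height, growth-stage code, illumination (placeholder values).
aux = np.array([1.5, -0.8, 92.0, 30.0, 3.0, 45000.0])

# Multi-source fusion: the concatenated row is what a LightGBM-style
# regressor would consume as one training sample.
x = np.concatenate([image_features, aux])        # shape (1926,)

# Evaluation metric used in the study: coefficient of determination (R2)
# between measured and predicted LAI.
def r2_score(measured, predicted):
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

measured = np.array([2.1, 3.4, 4.0, 5.2, 3.8])   # hypothetical LAI values
predicted = np.array([2.3, 3.2, 4.1, 5.0, 3.9])
print(x.shape)                                   # (1926,)
print(round(r2_score(measured, predicted), 3))   # 0.972
```

In the actual models the image branch is trained end to end and LightGBM replaces the toy regression step, but the fusion of image features with one-dimensional data follows this pattern.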
HAO Zi-yuan, YANG Wei, LI Hao, YU Hao, LI Min-zan. Study on Prediction Models for Leaf Area Index of Multiple Crops Based on Multi-Source Information and Deep Learning. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2023, 43(12): 3862-3870.