Segmentation Method for Crop Leaf Spot Based on Semantic Segmentation and Visible Spectral Images
LI Kai-yu1, ZHANG Hui2, MA Jun-cheng3, ZHANG Ling-xian1*
1. College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2. College of Information and Management Science, Henan Agricultural University, Zhengzhou 450018, China
3. Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China
Abstract: Diseases seriously degrade crop quality and cause economic losses. Lesion segmentation is a key step in disease identification and severity estimation, as its results provide an effective basis for both subsequent tasks. Because lesions are irregular and complex, and visible-spectrum images of lesions captured in the natural environment are susceptible to illumination changes, traditional image-processing methods segment lesions with low accuracy, universality, and robustness. To address this, this paper proposed a crop leaf lesion segmentation method based on semantic segmentation and visible-spectrum images. Firstly, taking peanut brown spot and tobacco brown spot as the research objects, 165 visible-spectrum images were collected with a Nikon D300s SLR camera. The disease images were pixel-wise labeled with the MATLAB Image Labeler app, marking the brown spot lesions and the background regions separately. Secondly, the labeled dataset was augmented by horizontal flipping, vertical flipping, brightness changes, and similar operations, yielding 1 850 samples, which were randomly divided into training, validation, and test sets at a ratio of 8∶1∶1. At the same time, to save computational cost, the pixel resolution of the dataset was adjusted to 300×300. Finally, four lesion segmentation models were constructed based on three semantic segmentation networks: FCN, SegNet, and U-Net. The effects of data augmentation and disease type on the segmentation models were explored, and four segmentation indicators were used to evaluate segmentation performance. The test results showed that, for lesion-only segmentation, data augmentation improved the segmentation accuracy of the model.
The model achieved a Mean Precision (MP) of 95.71% and a Mean Intersection over Union (MIoU) of 93.36%, and all four semantic segmentation models significantly outperformed the Support Vector Machine (SVM). Compared with the U-Net, SegNet-2, and SegNet-4 segmentation models, FCN effectively avoided the influence of illumination changes, reaching a lesion-segmentation Precision (P) of 99.25% and an Intersection over Union (IoU) of 97.55%. In the joint lesion classification and segmentation experiment, the Precision of FCN for the two diseases reached 90.41% and 97.54%, and the IoU for the two diseases reached 95.61% and 70.30%, respectively, outperforming the other three segmentation models. FCN can distinguish disease types well while segmenting lesions, shows good generalization and robustness, realizes the identification and segmentation of lesions in natural scenes, and provides a technical reference for severity estimation of mixed diseases.
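The data-preparation pipeline described in the abstract (horizontal/vertical flips, brightness changes, resizing to 300×300, and a random 8∶1∶1 split) can be sketched roughly as below. This is an illustrative sketch only: the function names, the fixed brightness factors, and the nearest-neighbour resize are assumptions, not details from the paper.

```python
import numpy as np

def augment(image):
    """Return the original image plus flipped and brightness-shifted copies."""
    return [image,
            image[:, ::-1],                                      # horizontal flip
            image[::-1, :],                                      # vertical flip
            np.clip(image * 1.2, 0, 255).astype(image.dtype),    # brighter (assumed factor)
            np.clip(image * 0.8, 0, 255).astype(image.dtype)]    # darker (assumed factor)

def resize_nearest(image, size=(300, 300)):
    """Nearest-neighbour resize; a stand-in for a library resize call."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return image[rows][:, cols]

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split samples into train/validation/test sets at 8:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

With 1 850 samples, this split yields 1 480 training, 185 validation, and 185 test samples.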
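The four indicators reported above (per-class Precision and IoU, and their class-wise means MP and MIoU) follow the standard definitions P = TP/(TP+FP) and IoU = TP/(TP+FP+FN). A minimal sketch of computing them from integer label masks follows; the function name and the two-class (lesion vs. background) setup are assumptions for illustration.

```python
import numpy as np

def per_class_metrics(pred, gt, num_classes=2):
    """Per-class Precision and IoU from label masks, plus their means (MP, MIoU)."""
    precisions, ious = [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))   # true positives for class c
        fp = np.sum((pred == c) & (gt != c))   # false positives
        fn = np.sum((pred != c) & (gt == c))   # false negatives
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        ious.append(tp / (tp + fp + fn) if tp + fp + fn else 0.0)
    return {"P": precisions, "IoU": ious,
            "MP": float(np.mean(precisions)), "MIoU": float(np.mean(ious))}
```

MIoU averages IoU over classes, so a model that segments lesions well but mislabels background (or vice versa) is penalized, which is why the paper reports both per-class and mean indicators.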