Segmentation of Cucumber Target Leaf Spot Based on U-Net and Visible Spectral Images
WANG Xiang-yu1, LI Hai-sheng1, LÜ Li-jun1, HAN Dan-feng1, WANG Zi-qiang2*
1. Department of Electronic Information and Physics, Changzhi University, Changzhi 046011, China
2. Industrial Technology Center, Chengde Petroleum College, Chengde 067000, China
Abstract: Target leaf spot is one of the main fungal diseases of cucumber. Under suitable conditions, especially when the temperature difference between day and night is large or the humidity is saturated, the disease develops rapidly, reducing cucumber yield and causing economic losses. Segmentation of cucumber target leaf spot can provide an effective basis for the identification and diagnosis of cucumber diseases, and is therefore of great significance. In this study, cucumber visible spectral images were taken as the research object, and the U-net deep learning network was used to construct a semantic segmentation model for cucumber target leaf spot. First, regions with prominent lesions in the visible spectrum images were selected for training and testing: 135 regions of 200×200 pixels each were cropped from 40 images as samples. The MATLAB Image Labeler tool was used to label the samples, marking the diseased and healthy areas. Then, the U-net network was constructed, containing 46 layers and 48 connections. Feature extraction of the cucumber target leaf spots is performed by convolution layers, ReLU layers and max-pooling; upsampling is performed by depth concatenation layers, up-convolution layers and up-ReLU layers; and the copy-and-crop operation and feature fusion are performed by skip connections. The U-net was then trained to obtain the semantic segmentation model. Of the 135 samples, 96 were randomly selected for training and the remaining 39 for testing. The number of iterations was set to 240, the L2 regularization coefficient to 0.0001, the initial learning rate to 0.05, the momentum parameter to 0.9, and the gradient threshold to 0.05, after which the samples were used for training and testing. After 10 rounds of training and testing, the results showed that the average execution time of the semantic segmentation model based on U-net and visible spectrum images was 46.4 s and the average memory occupation was 6665.8 MB, indicating that the model has high execution efficiency. The pixel accuracy of the model was 96.23%~97.98%, the mean pixel accuracy 97.28%~97.87%, the mean intersection over union 86.10%~91.59%, and the frequency-weighted intersection over union 93.33%~96.19%, indicating that the model has good stability and strong generalization ability. This research obtained a segmentation model with high accuracy from few training samples, which provides a reference for small-sample machine learning and a methodological basis for lesion segmentation, disease identification and diagnosis of other vegetables.
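As a hedged illustration of the two resampling operations described in the abstract, the pure-Python sketch below implements 2×2 max-pooling (the contracting path) and a nearest-neighbour 2× upsampling as a simplified stand-in for the learned up-convolution of the expanding path; the function names are illustrative, not from the paper.

```python
def max_pool_2x2(img):
    """2x2 max-pooling with stride 2; img is a list of equal-length rows."""
    h, w = len(img), len(img[0])
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def upsample_2x2(img):
    """Nearest-neighbour 2x upsampling: a simplified stand-in for the
    learned up-convolution used in the expanding path of U-net."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

# A 200x200 input halves to 100x100 after one pooling step and is
# restored to 200x200 by one upsampling step, which is why the
# 200x200 sample size divides evenly through the network.
feature = [[0] * 200 for _ in range(200)]
pooled = max_pool_2x2(feature)
restored = upsample_2x2(pooled)
```

In the actual network, the skip connections concatenate each contracting-path feature map with the upsampled map of the same spatial size before the next convolution, which is what the depth concatenation layers provide.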
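The four evaluation metrics reported above (pixel accuracy, mean pixel accuracy, mean intersection over union, frequency-weighted intersection over union) can all be derived from a per-class confusion matrix. A minimal sketch in Python, with function and variable names that are illustrative rather than the paper's:

```python
def segmentation_metrics(confusion):
    """Compute PA, mPA, mIoU and FWIoU from a square confusion matrix,
    where confusion[i][j] counts pixels of true class i predicted as class j."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    diag = [confusion[i][i] for i in range(n)]
    rows = [sum(confusion[i]) for i in range(n)]                  # true-class pixel counts
    cols = [sum(confusion[i][j] for i in range(n)) for j in range(n)]

    pa = sum(diag) / total                                        # pixel accuracy
    mpa = sum(d / r for d, r in zip(diag, rows)) / n              # mean pixel accuracy
    iou = [diag[i] / (rows[i] + cols[i] - diag[i]) for i in range(n)]
    miou = sum(iou) / n                                           # mean IoU
    fwiou = sum(rows[i] / total * iou[i] for i in range(n))       # frequency-weighted IoU
    return pa, mpa, miou, fwiou
```

For the two-class segmentation considered here (healthy vs. diseased pixels), the confusion matrix is 2×2, and FWIoU weights each class's IoU by its share of ground-truth pixels.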
WANG Xiang-yu, LI Hai-sheng, LÜ Li-jun, HAN Dan-feng, WANG Zi-qiang. Segmentation of Cucumber Target Leaf Spot Based on U-Net and Visible Spectral Images. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2021, 41(05): 1499-1504.