Fine-grained classification of grape leaves via a pyramid residual convolution neural network

Hanghao Li, Yana Wei, Hongming Zhang, Huan Chen, Jiangfei Meng

Abstract


The value of grape cultivars varies. Mixtures of cultivars can negate the benefits of improved cultivars and hamper both the protection of genetic resources and the identification of new hybrid cultivars, so classifying cultivars from their leaves is highly practical. Transplanted grape seedlings take years to bear fruit, whereas leaves mature within months, and foliar morphology differs among cultivars, making leaf-based cultivar identification feasible. Different cultivars, however, can be bred from the same parents, so the leaves of some cultivars have similar morphologies. In this work, a pyramid residual convolution neural network was developed to classify images of eleven grape cultivars. The model extracts multi-scale feature maps of the leaf images through convolution layers and feeds them into three residual convolution neural networks. Features are fused by adding the values of the convolution feature matrices, which strengthens attention on the edge and center regions of the leaves before classification. The model achieved an average accuracy of 92.26% on the proposed leaf dataset, outperforming previous models and providing a reliable method for the fine-grained classification and identification of plant cultivars.
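The fusion scheme summarized above — residual branches applied at several spatial scales, with the resulting feature maps combined by element-wise addition — can be sketched in plain NumPy. This is a hypothetical single-channel toy, not the authors' implementation; all function names and the choice of three 2× pyramid levels are illustrative assumptions:

```python
import numpy as np

def conv3x3(x, k):
    # 'Same'-padded 3x3 convolution of one single-channel map (toy loop version).
    h, w = x.shape
    p = np.pad(x, 1)
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def residual_block(x, k):
    # y = x + ReLU(conv(x)): the identity skip connection of He et al. (2016).
    return x + np.maximum(conv3x3(x, k), 0.0)

def downsample(x):
    # 2x2 average pooling builds the next level of the image pyramid.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_fuse(img, kernels):
    # One residual branch per scale; coarse maps are upsampled (nearest
    # neighbour) and fused with the finest map by element-wise addition.
    levels = [img, downsample(img), downsample(downsample(img))]
    feats = [residual_block(lv, k) for lv, k in zip(levels, kernels)]
    fused = feats[0].copy()
    for s, f in zip((2, 4), feats[1:]):
        fused += np.kron(f, np.ones((s, s)))
    return fused

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in leaf image
kernels = [np.full((3, 3), 1 / 9)] * 3          # one blur kernel per branch
fused = pyramid_fuse(img, kernels)
print(fused.shape)  # (8, 8): fusion preserves the finest resolution
```

In a real network each branch would hold many learned kernels per channel; the point of the sketch is only that additive fusion lets coarse, context-heavy responses reinforce fine, edge-level responses at every pixel.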
Keywords: fine-grained classification, grape cultivars identification, pyramid residual network, convolution neural network
DOI: 10.25165/j.ijabe.20221502.6894

Citation: Li H H, Wei Y N, Zhang H M, Chen H, Meng J F. Fine-grained classification of grape leaves via a pyramid residual convolution neural network. Int J Agric & Biol Eng, 2022; 15(2): 197–203.


References


FAO. FAOSTAT statistical database. 2019. Available: https://www.fao.org/faostat/. Accessed on [2021-04-25].

MacLeod N, Benfield M, Culverhouse P. Time to automate identification. Nature, 2010; 467(7312): 154–155.

Yousefi E, Baleghi Y, Sakhaei S M. Rotation invariant wavelet descriptors, a new set of features to enhance plant leaves classification. Computers and Electronics in Agriculture, 2017; 140: 70–76.

Wu S G, Bao F S, Xu E Y, Wang Y X, Chang Y F, Xiang Q L. A leaf recognition algorithm for plant classification using probabilistic neural network. In: Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, Giza, Egypt: IEEE, 2007; pp.11–16. doi: 10.1109/ISSPIT.2007.4458016.

Saleem G, Akhtar M, Ahmed N, Qureshi W S. Automated analysis of visual leaf shape features for plant classification. Computers and Electronics in Agriculture, 2019; 157: 270–280.

Xue J R, Fuentes S, Poblete-Echeverría C, Viljo C G, Tongson E, Du H J, et al. Automated Chinese medicinal plants classification based on machine learning using leaf morpho-colorimetry, fractal dimension and visible/near infrared spectroscopy. Int J Agric & Biol Eng, 2019; 12(2): 123–131.

Wang B, Gao Y S, Yuan X H, Xiong S W, Feng X Z. From species to cultivar: Soybean cultivar recognition using joint leaf image patterns by multiscale sliding chord matching. Biosystems Engineering, 2020; 194: 99–111.

Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2017; 60(6): 84–90. doi: 10.1145/3065386.

He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the Computer Vision and Pattern Recognition (CVPR), Las Vegas: IEEE, 2016; pp.770–778. doi: 10.1109/CVPR.2016.90.

Hall D, Mccool C, Dayoub F, Sunderhauf N, Upcroft B. Evaluation of features for leaf classification in challenging conditions. In: Proceedings of the Workshop on Applications of Computer Vision, Waikoloa, USA: IEEE, 2015; pp.797–804. doi: 10.1109/WACV.2015.111.

Pereira C, Morais R, Reis M. Deep learning techniques for grape plant species identification in natural images. Sensors, 2019; 19(22): 4850. doi: 10.3390/s19224850.

Yang H-W, Hsu H-C, Yang C-K, Tsai M-J, Kuo Y-F. Differentiating between morphologically similar species in genus Cinnamomum (Lauraceae) using deep convolutional neural networks. Computers and Electronics in Agriculture, 2019; 162: 739–748.

Kaya A, Keceli A S, Catal C, Yalic H Y, Temucin H, Tekinerdogan B. Analysis of transfer learning for deep neural network based plant classification models. Computers and Electronics in Agriculture, 2019; 158: 20–29.

Tavakoli H, Alirezazadeh P, Hedayatipour A, Banijamali Nasib A H, Landwehr N. Leaf image-based classification of some common bean cultivars using discriminative convolutional neural networks. Computers and Electronics in Agriculture, 2021; 181: 105935. doi: 10.1016/j.compag.2020.105935.

Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556, 2014.

Wang F, Xiang X, Cheng J, Yuille A L. NormFace: L2 hypersphere embedding for face verification. In: Proceedings of the 25th ACM International Conference on Multimedia, 2017; pp.1041–1049. doi: 10.1145/3123266.3123359.

Wang H, Wang Y T, Zhou Z, Ji X, Gong D H, Zhou J C, et al. CosFace: Large margin cosine loss for deep face recognition. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA: IEEE, 2018; pp.5265–5274. doi: 10.1109/CVPR.2018.00552.

Deng J K, Guo J, Xue N N, Zafeiriou S. ArcFace: Additive angular margin loss for deep face recognition. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA: IEEE, 2019; pp.4685–4694. doi: 10.1109/CVPR.2019.00482.

Lin T-Y, Dollar P, Girshick R, He K M, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proceedings of the Computer Vision and Pattern Recognition, Honolulu, USA: IEEE, 2017; pp. 936–944. doi: 10.1109/CVPR.2017.106.

Meng Y, Lin C-C, Panda R, Sattigeri P, Karlinsky L, Oliva A, et al. AR-Net: Adaptive frame resolution for efficient action recognition. In: Proceedings of the European Conference on Computer Vision (ECCV 2020), Springer, 2020; pp.86–104. doi: 10.1007/978-3-030-58571-6_6.

Hughes D P, Salathé M. An open access repository of images on plant health to enable the development of mobile disease diagnostics through machine learning and crowdsourcing. 2015. arXiv: 1511.08060.

International Plant Genetic Resources Institute (IPGRI), International Union for the Protection of New Varieties of Plants (UPOV), Office International de la Vigne et du Vin (OIV). Descriptors for grapevine (Vitis spp.). IPGRI, UPOV, OIV, 1997; 62p.

Xu G Q, Li C, Wang Q. Unified multi-scale method for fast leaf classification and retrieval using geometric information. IET Image Processing, 2019; 13(12): 2328–2334.

Selvaraju R R, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice: IEEE, 2017; pp.618–626. doi: 10.1109/ICCV.2017.74.

Hu J, Shen L, Albanie S, Sun G, Wu E H. Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018; 42(8): 2011–2023.

Abadi M, Barham P, Chen J M, Chen Z F, Davis A, Dean J, et al. TensorFlow: A system for large-scale machine learning. In: Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016; pp.265–283.

Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015; 115: 211–252.

Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Journal of Machine Learning Research-Proceedings Track, 2010; 9: 249–256.




Copyright (c) 2022 International Journal of Agricultural and Biological Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.
