Convolutional neural network-based automatic image recognition for agricultural machinery

Kun Yang, Hui Liu, Pei Wang, Zhijun Meng, Jingping Chen

Abstract


An internet of things-based subsoiling operation monitoring system for agricultural machinery can identify the type and operating state of a machine by collecting and recognizing its images; however, manual spot-checking of the large volume of image data is labor-intensive and inefficient, and therefore fails to meet regulatory requirements. In this study, a dataset was established containing images of more than 100 machines, covering subsoilers, rotary cultivators, reversible plows, combined subsoiling and soil-preparation machines, and seeders, together with non-machinery images. The images were annotated on TensorFlow, Google's deep learning platform. A convolutional neural network (CNN) was then designed to match the actual regulatory requirements and image characteristics, and was optimized to reduce overfitting and improve training efficiency. Training results showed that the recognition rate of the proposed machinery recognition network on the demonstration dataset reached 98.5%, compared with 81% for LeNet and 98.8% for AlexNet under the same conditions. In terms of recognition efficiency, AlexNet required 60 h of training and 0.3 s to recognize one image, whereas the proposed machinery recognition network completed training in half that time and recognized one image in 0.1 s. To further verify the model's practicability, 200 images of each of the six types were randomly selected for testing; the average recall rate across machinery image types was 98.8%. In addition, the model was robust to changes in illumination and environment and to small-area occlusion, and is therefore suitable for intelligent image recognition in subsoiling operation monitoring systems.
Keywords: agricultural machinery, monitoring system, automatic image recognition, convolutional neural network
DOI: 10.25165/j.ijabe.20181104.3454

Citation: Yang K, Liu H, Wang P, Meng Z J, Chen J P. Convolutional neural network-based automatic image recognition for agricultural machinery. Int J Agric & Biol Eng, 2018; 11(4): 200-206.
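The abstract does not reproduce the network layer by layer, so the following is only a minimal TensorFlow/Keras sketch of a six-class machinery image classifier of the kind described. The layer widths, 128x128 input resolution, dropout rate, and optimizer are illustrative assumptions, not the authors' published architecture; the dropout layer simply mirrors the stated goal of reducing overfitting.

import tensorflow as tf

NUM_CLASSES = 6              # subsoiler, rotary cultivator, reversible plow, combined machine, seeder, non-machinery
INPUT_SHAPE = (128, 128, 3)  # assumed input resolution (not stated in the abstract)

def build_machinery_cnn():
    """Build a small CNN classifier; all hyperparameters are illustrative."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),  # regularization to reduce overfitting
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Print the layer structure; training would use the annotated machinery image dataset.
    build_machinery_cnn().summary()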




