Title: High-throughput image analysis framework for fruit detection, localization and measurement from video streams
Authors: Lin, Ta-Te; Huang, Yi Hsuan
Keywords: Automation; Computer vision; Object detection; Object tracking; Phenotyping
Issue Date: 1-Jan-2019
Source: 2019 ASABE Annual International Meeting
Abstract:
© 2019 ASABE Annual International Meeting. All rights reserved. Food crises and security issues are worsening due to climate change and population growth. One solution being sought is efficient breeding systems, which require accurate and detailed phenotyping of fruits and plants. However, traditional phenotyping methods are time-consuming, labor-intensive, and prone to human error, so automated measurement of the morphological and physiological parameters of fruits is highly desirable. In this work, a high-throughput technique for fruit detection, localization, and measurement from video streams using computer vision and deep neural networks is proposed. In contrast with other works developed for a single fruit type, the proposed method is versatile and can be applied to different types of fruits, using a vision system that scans through plants row by row in a greenhouse. A real-time object detection algorithm based on YOLOv2, a deep neural network detector, performs fruit detection and localization on video frames with a hit rate of 84.98%. An individual fruit tracking algorithm is applied throughout the video stream to track multiple fruits. The online tracking algorithm combines feature matching, optical flow, and projective transformation, optimized by occlusion handling techniques such as threshold indices and denoising. The offline tracking algorithm, in turn, uses a voting method to reduce the false alarms produced by the object detector. Finally, phenotyping information such as fruit counts, ripening stage, fruit size, and 2D spatial distribution maps is obtained. The proposed framework has demonstrated its efficacy in producing satisfactory phenotyping information useful for production management, as well as its potential utilization in robotic operations.
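The abstract does not give implementation details of the offline voting method, but the idea it describes — suppressing detector false alarms by requiring a tracked fruit to be confirmed across multiple frames — can be sketched as follows. The function name, vote threshold, and data layout here are illustrative assumptions, not the authors' actual code.

```python
# Hypothetical sketch of an offline voting filter: a tracked fruit is kept
# only if the detector confirmed it in at least `min_votes` frames, which
# suppresses spurious single-frame detections (false alarms).
from collections import Counter

def vote_filter(track_ids_per_frame, min_votes=3):
    """track_ids_per_frame: a list with one entry per video frame, each
    entry being the list of track IDs the detector confirmed in that
    frame. Returns the set of IDs receiving at least `min_votes` votes."""
    votes = Counter()
    for frame_ids in track_ids_per_frame:
        votes.update(set(frame_ids))  # at most one vote per track per frame
    return {tid for tid, count in votes.items() if count >= min_votes}

# Example: track 7 is confirmed in four frames, track 9 only once,
# so only track 7 survives the vote.
frames = [[7, 9], [7], [7], [7]]
print(sorted(vote_filter(frames)))  # [7]
```

A per-frame vote like this trades a slight delay (decisions are made after the pass over the video) for robustness, which matches the abstract's online/offline split: the online tracker maintains identities in real time, while the offline stage cleans up the final counts.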
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/430857
DOI: 10.13031/aim.201900487
SDG/Keyword: Automation; Climate change; Computer hardware description languages; Computer vision; Deep neural networks; Errors; Object detection; Object recognition; Physiological models; Population statistics; Tracking (position); Video streaming; Detection and localization; High-throughput technique; Object detection algorithms; Object tracking; Phenotyping; Physiological parameters; Projective transformation; Spatial distribution map; Fruits
Appears in Collections: 生物機電工程學系 (Department of Bio-Industrial Mechatronics Engineering)