The study combined computer vision (You Only Look Once version 8, YOLOv8) with statistical models to predict harvest success probability and support selection of the best robotic approach direction, enabling more intelligent, environment-aware decision-making.
Automated harvesting robots are crucial for addressing agricultural labor shortages, with research advancing since the 1980s. Previous work has developed various robotic end-effectors, vision systems for fruit detection, and integrated autonomous platforms. However, a significant gap remains in achieving consistent, reproducible harvest success in real-world environments, which hinders commercial adoption.
This study filled that gap by establishing a data-driven, quantitative framework. Through real-world experiments with a tomato-harvesting robot, it statistically identified key structural factors (like peduncle placement) affecting success. It then combined this model with computer vision (YOLOv8-based detection and segmentation) to enable robots to estimate harvest probability and inform the choice of approach direction, thereby improving reliability and intelligence.
Experimental Setup and Methods
This study conducted harvesting experiments in a solar-based plant factory cultivating medium-sized tomatoes, where plants were arranged in rows with dedicated rail aisles for robot access. The custom-developed tomato-harvesting robot consisted of a rail-mounted vehicle, a hybrid manipulator (combining a two-axis orthogonal system and a four-degree-of-freedom articulated arm), a three-fingered gripper end-effector, and a red-green-blue-depth (RGB-D) camera.
Its software ran on a distributed Robot Operating System (ROS) architecture, employing YOLOv8 for fruit detection and semantic segmentation, and a three-state (search, analysis, harvest) semi-autonomous operation cycle. The robot attempted to harvest 100 tomatoes from three predefined approach directions (front, left, and right).
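The paper's control code is not reproduced here, but the three-state cycle can be illustrated with a minimal Python sketch. The robot object and its methods below are hypothetical placeholders standing in for the actual ROS node interfaces:

```python
from enum import Enum, auto

class RobotState(Enum):
    SEARCH = auto()    # move along the rail and look for ripe fruit
    ANALYSIS = auto()  # detect/segment the target, pick an approach direction
    HARVEST = auto()   # execute the grasp-and-detach motion

def run_cycle(robot):
    """Minimal sketch of the search -> analysis -> harvest loop.
    `robot` and its methods are hypothetical placeholders, not the
    paper's actual ROS interfaces."""
    state = RobotState.SEARCH
    while robot.has_targets_remaining():
        if state is RobotState.SEARCH:
            if robot.find_candidate_fruit():
                state = RobotState.ANALYSIS
        elif state is RobotState.ANALYSIS:
            direction = robot.choose_approach_direction()  # front / left / right
            state = RobotState.HARVEST if direction else RobotState.SEARCH
        elif state is RobotState.HARVEST:
            robot.attempt_harvest()
            state = RobotState.SEARCH
```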
The core methodology involved a two-step analysis of "ease of harvesting." First, statistical analyses were performed. Chi-square tests screened 26 binary explanatory variables describing peduncle characteristics (such as position relative to the fruit) and the positions of surrounding fruits for their association with harvest success and approach direction.
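As a rough illustration of this screening step (not the author's actual code), each binary variable can be cross-tabulated against the harvest outcome and tested with SciPy; the 0.05 significance level and array layout below are assumptions:

```python
import numpy as np
from scipy.stats import chi2_contingency

def screen_variables(X, y, alpha=0.05):
    """Chi-square screening of binary explanatory variables against a
    binary harvest-success outcome. X: (n_samples, 26) 0/1 array,
    y: (n_samples,) 0/1 array. Returns indices of significant variables."""
    significant = []
    for j in range(X.shape[1]):
        # 2x2 contingency table: variable present/absent vs. success/failure
        table = np.array([
            [np.sum((X[:, j] == 1) & (y == 1)), np.sum((X[:, j] == 1) & (y == 0))],
            [np.sum((X[:, j] == 0) & (y == 1)), np.sum((X[:, j] == 0) & (y == 0))],
        ])
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha:
            significant.append(j)
    return significant
```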
Significant variables were then fed into a logistic regression model to quantify their impact on success probability. Second, an image-based analysis pipeline was developed. Using a YOLOv8 model for fruit detection and stem/peduncle segmentation, the system automatically extracted the relevant spatial features for a target tomato.
These features were input into the pre-trained logistic regression model to estimate real-time harvest success probability and evaluate the most suitable approach direction, thereby enabling environment-aware robotic decision-making.
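A sketch of how such a pipeline might be wired together is shown below, assuming an Ultralytics YOLOv8 segmentation model and a pre-fitted scikit-learn logistic regression. The weight-file name and the feature-extraction helper are hypothetical; the paper's actual 26-variable encoding is not reproduced:

```python
from ultralytics import YOLO

# Hypothetical weight file; the paper's trained model is not distributed here.
detector = YOLO("tomato_yolov8_seg.pt")  # fruit detection + stem/peduncle segmentation

def extract_spatial_features(box, mask):
    """Placeholder for the paper's binary spatial variables, e.g.
    peduncle-in-front-of-fruit, peduncle-above-fruit, and neighboring-fruit
    positions. The real geometric rules are not reproduced here."""
    raise NotImplementedError

def estimate_harvest_probabilities(image, logreg):
    """Detect fruits, derive binary features from the segmentation masks,
    and score each target with the pre-fitted logistic regression model."""
    results = detector(image)[0]
    scored = []
    for box, mask in zip(results.boxes, results.masks):
        features = extract_spatial_features(box, mask)
        p = logreg.predict_proba([features])[0, 1]  # probability of success
        scored.append((box, p))
    return scored
```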
System Performance Evaluation and Discussion
The experimental results from harvesting 100 tomatoes with the developed robot showed an 81 % success rate.
A key finding from the statistical analysis was that the spatial arrangement of the peduncle (the short stem that attaches the fruit to the main stem) was the dominant factor influencing success. Specifically, a peduncle positioned in front of the fruit (in the robot's path) significantly reduced the success probability, while a peduncle located above the fruit substantially increased it.
These relationships were quantified using a logistic regression model built from the experimental data. The model successfully identified high-probability targets, achieving an area under the curve (AUC) of 0.85. Furthermore, the analysis of approach direction (front, left, right) confirmed that the robot effectively selected paths to avoid spatial obstacles, such as choosing a left approach when fruits were on the right.
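For readers who want to reproduce this kind of evaluation, a minimal scikit-learn sketch is given below; it fits a logistic regression on the chi-square-screened binary features and computes an AUC. Scoring on the training data is a simplification for brevity, and the paper's exact validation protocol may differ:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_and_evaluate(X_sig, y):
    """Fit a logistic regression on the screened binary features and
    report the AUC of its predicted success probabilities."""
    model = LogisticRegression()
    model.fit(X_sig, y)
    auc = roc_auc_score(y, model.predict_proba(X_sig)[:, 1])
    return model, auc
```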
To apply these findings autonomously, an image-based method was developed using the YOLOv8 model to detect fruits and segment peduncles in real time. The extracted spatial features were fed into the statistical model to estimate a harvest success probability for each target, and evaluation of this system demonstrated its practical value.
By having the robot selectively attempt harvesting only where the predicted success probability exceeded 60 %, the operational success rate on those selected fruits rose to 92 %, compared with the 56 % baseline achieved when all detected fruits were attempted indiscriminately.
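The selection rule itself is simple; a one-function sketch, assuming the scored list produced by the pipeline above, could look like this:

```python
def select_targets(scored_fruits, threshold=0.60):
    """Attempt harvesting only on fruits whose predicted success
    probability exceeds the threshold (0.60 in the paper's evaluation)."""
    return [fruit for fruit, p in scored_fruits if p > threshold]
```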
The authors discussed limitations, such as variability in fruit detachment and the current need for some manual input. They suggested that future work should focus on virtual simulations (digital twins) for more robust testing and algorithm refinement.
A Data-Backed Step Toward Reliable Agricultural Robots
This study successfully developed a data-driven framework to enhance the reliability of autonomous tomato harvesting. By statistically linking key structural factors (especially peduncle position) to harvest success and integrating this model with real-time YOLOv8-based vision, the robot can estimate the success probability and choose the optimal approach direction.
This intelligence boosted the operational success rate to 92 % for selected fruits, a significant leap from a 56 % baseline. The work demonstrates a crucial step toward practical, environment-aware agricultural robots, with future refinement anticipated through digital twin simulations.
Journal Reference
Fujinaga, T. (2025). Realizing an intelligent agricultural robot: An analysis of the ease of tomato harvesting. Smart Agricultural Technology, 12, 101538. DOI: 10.1016/j.atech.2025.101538
https://www.sciencedirect.com/science/article/pii/S2772375525007695?via%3Dihub