Estimating Rice Yield with a Convolutional Neural Network Model

Global demand for rice is projected to increase considerably by 2050, which will require sustainable intensification of existing croplands.

Estimating rice yield with a convolutional neural network (CNN) model, using ground-based digital images. Image Credit: Okayama University.

Now, scientists from Japan have made a major advance by developing deep-learning algorithms that can rapidly estimate rice yield from the analysis of thousands of photographs.

The model showed high accuracy across diverse conditions and cultivars, outperforming earlier methods, and efficiently detected yield differences between cultivars and under different water management practices.

Global demand for staple crops is projected to rise considerably by 2050 as a result of population growth, increasing per capita income, and the growing use of biofuels. Meeting this demand will require sustainable intensification of existing croplands.

However, the yield-estimation methods currently used in the Global South remain inadequate: conventional approaches such as farmer self-reporting and crop cutting have clear limitations, and remote-sensing technologies are not yet fully exploited in this context.

Recent progress in artificial intelligence and machine learning, especially deep learning with convolutional neural networks (CNNs), offers promising solutions. To explore the scope of this technology, scientists from Japan conducted a study focusing on rice.

They used ground-based digital images captured at the crop's harvesting stage, combined with CNNs, to estimate rice yield. Their study appeared online on 29th June 2023 and was published on 28th July 2023 in Volume 5 of Plant Phenomics.

We started by conducting an extensive field campaign. We gathered rice canopy images and rough grain yield data from 20 locations in seven countries to create a comprehensive multinational database.

Dr. Yu Tanaka, Study Lead Author and Associate Professor, Graduate School of Environmental, Life, Natural Science and Technology, Okayama University

The images were captured with digital cameras positioned 0.8–0.9 m above the rice canopy and pointed vertically downwards.

With the help of Dr. Kazuki Saito of the International Rice Research Institute (formerly of the Africa Rice Center) and other collaborators, the research group compiled a database of 4,820 harvest-plot yield records and 22,067 images covering a range of production systems, rice cultivars, and crop management practices.
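The article does not describe how the images and yield records were paired for model training. The sketch below shows one way such a database could be organized in PyTorch; the CSV columns ("image_file", "yield_t_ha"), directory layout, and 512 x 512 image size are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of pairing canopy images with plot-level yields (PyTorch).
# The CSV layout, directory structure, and image size are assumptions.
import csv
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class RiceCanopyDataset(Dataset):
    """Yields (image_tensor, yield_t_ha) pairs, one per harvest plot image."""

    def __init__(self, csv_path: str, image_dir: str):
        self.image_dir = Path(image_dir)
        with open(csv_path, newline="") as f:
            self.records = list(csv.DictReader(f))  # hypothetical metadata file
        self.transform = transforms.Compose([
            transforms.Resize((512, 512)),
            transforms.ToTensor(),
        ])

    def __len__(self) -> int:
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = Image.open(self.image_dir / rec["image_file"]).convert("RGB")
        target = torch.tensor(float(rec["yield_t_ha"]))
        return self.transform(image), target
```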

A CNN model was then developed to estimate the grain yield from each of the collected images.
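The article does not specify the network architecture, so the following minimal sketch uses an off-the-shelf ResNet-18 backbone with a single regression output purely as a stand-in for the authors' CNN.

```python
# Minimal sketch of a CNN yield regressor. The ResNet-18 backbone is a
# stand-in; the article does not state which architecture the authors used.
import torch
import torch.nn as nn
from torchvision import models


def build_yield_regressor() -> nn.Module:
    """Map an RGB canopy image to a single yield estimate (e.g. t/ha)."""
    backbone = models.resnet18(weights=None)  # pretrained weights could be used
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regression head
    return backbone


if __name__ == "__main__":
    model = build_yield_regressor()
    dummy = torch.randn(2, 3, 512, 512)  # a batch of two canopy images
    print(model(dummy).shape)            # torch.Size([2, 1])
```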

To visualize how different regions of the rice canopy images contributed to these estimates, the research group used an occlusion-based technique: specific parts of each image were masked, and the change in the model's yield estimate was recorded.

The insights gained from this technique helped the scientists understand how the CNN model interpreted different features in the rice canopy images, and how this affected its accuracy and its ability to distinguish yield-contributing components from non-contributing elements in the canopy.
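As a rough illustration of this kind of occlusion analysis (not the authors' exact procedure), one can slide a grey patch across the image and record how much the predicted yield changes when each region is hidden; the patch size, stride, and fill value below are assumptions.

```python
# Illustrative occlusion analysis: hide one patch at a time and record how the
# predicted yield changes. Patch size, stride, and fill value are assumptions.
import torch


@torch.no_grad()
def occlusion_map(model, image, patch=64, stride=64, fill=0.5):
    """Return a coarse grid of yield changes when each region is masked.

    image: tensor of shape (3, H, W) with values in [0, 1].
    """
    model.eval()
    _, h, w = image.shape
    baseline = model(image.unsqueeze(0)).item()
    ys, xs = list(range(0, h, stride)), list(range(0, w, stride))
    heat = torch.zeros(len(ys), len(xs))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            masked = image.clone()
            masked[:, y:y + patch, x:x + patch] = fill  # occlude one patch
            heat[i, j] = baseline - model(masked.unsqueeze(0)).item()
    return heat  # large values: regions (e.g. panicles) the estimate relies on
```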

The model performed well, explaining roughly 68%–69% of the yield variation in the validation and test datasets. The occlusion-based visualization also highlighted the importance of panicles (the loose, branching flower clusters) in yield estimation.
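"Explaining 68%–69% of yield variation" corresponds to a coefficient of determination (R²) of about 0.68–0.69. The snippet below computes R² on small, purely hypothetical observed and predicted yields, simply to show what the metric measures.

```python
# What "explains ~68-69% of yield variation" means: the coefficient of
# determination (R^2). Both arrays below are purely hypothetical yields (t/ha).
import numpy as np

observed = np.array([3.1, 4.8, 5.5, 2.9, 6.2, 4.1])
predicted = np.array([3.4, 4.4, 5.9, 3.3, 5.6, 4.5])

ss_res = np.sum((observed - predicted) ** 2)        # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
r2 = 1.0 - ss_res / ss_tot                          # fraction of variation explained
print(f"R^2 = {r2:.2f}")
```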

The model could estimate yield accurately at the ripening stage by identifying mature panicles, and it detected cultivar- and water-management-related differences in yield in the prediction dataset. Its accuracy declined, however, as image resolution decreased.

Even so, the model proved robust, maintaining good accuracy across different shooting angles and times of day.
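The article does not say how this robustness was assessed; one plausible check, sketched below under assumed inputs and sizes, is to downsample the images to progressively lower resolutions and compare the model's mean absolute yield error at each.

```python
# One plausible robustness check: downsample the images and compare the mean
# absolute yield error at each resolution. Inputs and sizes are assumptions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def mae_at_resolution(model, images, targets, size):
    """Mean absolute error after resizing a batch (N, 3, H, W) to size x size."""
    model.eval()
    resized = F.interpolate(images, size=(size, size),
                            mode="bilinear", align_corners=False)
    preds = model(resized).squeeze(1)
    return (preds - targets).abs().mean().item()


# Hypothetical usage:
# for s in (512, 256, 128):
#     print(s, mae_at_resolution(model, images, targets, s))
```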

Overall, the developed CNN model demonstrated promising capabilities in estimating rough grain yield from rice canopy images across diverse environments and cultivars. Another appealing aspect is that it is highly cost effective and does not require labor-intensive crop cuts or complex remote-sensing technologies.

Dr. Yu Tanaka, Study Lead Author and Associate Professor, Graduate School of Environmental, Life, Natural Science and Technology, Okayama University

The study highlights the potential of CNN-based models to monitor rice productivity at regional scales. However, the model's accuracy may vary under different conditions, and further research should focus on adapting it to low-yielding and rainy environments.

The AI-based method has also been made available to researchers and farmers through a simple smartphone application, greatly improving the technology's accessibility and its real-world applicability.

The application, named “HOJO,” is already available on Android and iOS. The scientists believe their work will lead to better management of rice fields and help accelerate breeding programs, contributing to global food production and sustainability initiatives.

Journal Reference:

Tanaka, Y., et al. (2023) Deep Learning Enables Instant and Versatile Estimation of Rice Yield Using Ground-Based RGB Images. Plant Phenomics, 5. https://doi.org/10.34133/plantphenomics.0073

Source: http://www.okayama-u.ac.jp/index_e.html
