Novel Approach to Train Deep Learning Models for Smart Agricultural Applications

Recent advances in artificial intelligence (AI), together with drones and digital cameras, have greatly extended the frontiers of smart agriculture.

Precision agriculture is one attractive use case for such technologies. In this modern approach to farming, the idea is to improve crop production by collecting precise data about the plants and the state of the field and then acting on that information.

For instance, by examining aerial images of crops, AI models can identify which parts of a field require more attention, as well as the plants' current stage of development.

Among all the crop-monitoring tasks AI can perform, crop head counting remains one of the most difficult to implement. Images of crops consist of densely packed, repeating patterns that are generally irregular and overlapping, making it hard for deep learning models to detect particular plant organs automatically.

Such models would normally be trained on thousands of manually annotated images in which the pixels belonging to crop heads are pre-specified. In practice, however, annotating crop images is extremely laborious and time-consuming.

To tackle the issue, a research group including Assistant Professor Lingling Jin from the University of Saskatchewan, Canada, developed a groundbreaking method that could streamline the training and development of deep learning models.

Their method, described in a recent study, could encourage broader adoption of AI in agriculture.

The study was published in Plant Phenomics on February 24th, 2023.

To demonstrate their concept, the team focused on the identification (or “segmentation”) of wheat heads in crop images as an example use case. Their strategy revolves around producing a synthetic annotated dataset.

Rather than marking the pixels belonging to wheat heads in hundreds of images, they developed a convenient way to produce artificial images in which the wheat heads are marked automatically.

Initially, the scientists recorded short videos of a wheat field and of other locations without wheat plants (so-called “background” videos). From the footage of the wheat field, they extracted a small number of still frames and annotated them manually, marking all the wheat heads.

Using frames from the background videos as a canvas, they produced synthetic wheat images by pasting “cutouts” of the manually segmented wheat heads onto them. This approach allowed the group to produce thousands of training images for a deep learning model with minimal effort.
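The paper's exact compositing pipeline is not reproduced here, but a minimal sketch of the general idea, pasting segmented cutouts onto background frames so that the segmentation mask is generated automatically, might look like the following. The folder layout, file formats, and parameters are assumptions for illustration, not the authors' setup:

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical layout (not from the paper): background frames as JPEGs and
# wheat-head cutouts as RGBA PNGs whose alpha channel marks head pixels.
BACKGROUNDS = list(Path("backgrounds").glob("*.jpg"))
CUTOUTS = list(Path("cutouts").glob("*.png"))


def make_synthetic_sample(n_heads=30, size=(512, 512)):
    """Paste random wheat-head cutouts onto a background frame and return
    the composited image plus its automatically generated binary mask."""
    canvas = Image.open(random.choice(BACKGROUNDS)).convert("RGB").resize(size)
    mask = np.zeros(size[::-1], dtype=np.uint8)  # H x W, 1 where a head lies

    for _ in range(n_heads):
        cutout = Image.open(random.choice(CUTOUTS)).convert("RGBA")
        cutout = cutout.rotate(random.uniform(0, 360), expand=True)
        if cutout.width >= size[0] or cutout.height >= size[1]:
            continue  # skip cutouts that no longer fit after rotation

        x = random.randint(0, size[0] - cutout.width)
        y = random.randint(0, size[1] - cutout.height)

        # The cutout's alpha channel doubles as its segmentation mask.
        alpha = np.array(cutout.split()[-1]) > 0
        canvas.paste(cutout, (x, y), cutout)
        mask[y:y + cutout.height, x:x + cutout.width] |= alpha.astype(np.uint8)

    return canvas, Image.fromarray(mask * 255)
```

Each call yields a training image and a pixel-level label for free, which is what lets the approach sidestep manual annotation at scale.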

The scientists also used several domain adaptation techniques to improve the model, which was based on a tailored U-Net architecture. These techniques fine-tuned the algorithm to perform better on images from several real-world wheat fields, even though it was trained primarily on synthetic images.
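The study's tailored U-Net and its specific domain adaptation procedures are detailed in the paper itself. Purely as a rough illustration of what a U-Net-style encoder-decoder for binary wheat-head masks looks like, here is a minimal PyTorch sketch; the layer sizes and depth are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Minimal U-Net-style encoder-decoder producing per-pixel head logits."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # one logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)                                   # full resolution
        e2 = self.enc2(self.pool(e1))                       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


model = TinyUNet()
logits = model(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256) mask logits
```

A network like this would be trained on the synthetic image-mask pairs and then adapted to real field imagery; the specific adaptation methods the authors used are described in the published study.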

Several tests on an open-access dataset revealed impressive gains in accuracy.

Our approach established—and by a wide margin in performance—a new state-of-the-art model for wheat head segmentation.

Lingling Jin, Assistant Professor, University of Saskatchewan

The methods demonstrated in this work are not limited to identifying wheat heads.

While we showed the utility of the proposed method for wheat head segmentation, it could be applied to other applications that have similar dense repeating patterns of objects, such as segmenting plant organs in other crop species or segmenting molecular components in microscopy images.

Lingling Jin, Assistant Professor, University of Saskatchewan

This study points to a bright future for gaining deeper insights into agriculture and other fields.

Journal Reference

Najafian, K., et al. (2023) Semi-Self-Supervised Learning for Semantic Segmentation in Images with Dense Patterns. Plant Phenomics. https://doi.org/10.34133/plantphenomics.0025.
