
AI-Powered Marker Decoding for Enhanced Machine Vision

According to a study published in Image and Vision Computing, researchers at the University of Córdoba have developed a model that uses neural networks to improve the detection and decoding of the markers that machines rely on to recognize and locate objects.

The researchers Manuel J. Marín, Rafael Berral and Rafael Muñoz. Image Credit: University of Córdoba

Fiducial markers are used in robot design to help robots move, recognize objects, and pinpoint their own precise location. One example is Boston Dynamics' humanoid robot Atlas, which appears to be sorting boxes: fiducial markers are a machine vision tool for estimating the positions of objects.

These flat, high-contrast, black-and-white square codes resemble QR codes in appearance, but have the advantage that they can be detected at considerably greater distances.

In logistics, for example, a camera mounted on the ceiling can use these markers to locate packages automatically, saving time and money. Until now, the main weakness of such systems has been lighting: traditional machine vision methods that precisely locate and decode the markers perform poorly in low light.
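For context, the conventional, threshold-based approach the study improves on is available in OpenCV's aruco module. The snippet below is a minimal sketch of that classical pipeline, assuming OpenCV 4.7 or newer and a hypothetical image file named warehouse.jpg; it is not the Córdoba group's code.

```python
import cv2

# Hypothetical scene that may contain square fiducial (ArUco) markers.
image = cv2.imread("warehouse.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Classical detector: adaptive thresholding plus contour analysis.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# corners: the four image-plane corners of each marker; ids: decoded identities.
corners, ids, rejected = detector.detectMarkers(gray)
print(f"Detected {0 if ids is None else len(ids)} markers")
```

Because this classical pipeline relies on thresholding the image, markers sitting in deep shadow or dim light are often missed, which is precisely the gap the neural approach targets.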

For the first time, researchers Rafael Berral, Rafael Muñoz, Rafael Medina, and Manuel J. Marín of the University of Córdoba's Machine Vision Applications research group have used neural networks to create a system that can detect and decode fiducial markers in challenging lighting conditions.

The use of neural networks in the model allows us to detect this type of marker in a more flexible way, solving the problem of lighting for all phases of the detection and decoding process.

Rafael Berral, PhD Student, University of Córdoba

The entire procedure is divided into three steps: marker detection, corner refinement, and marker decoding, each handled by a separate neural network.
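To make that three-stage structure concrete, here is a minimal sketch in PyTorch that chains three placeholder networks. The architectures, layer sizes, and names (DetectorNet, CornerRefineNet, DecoderNet) are illustrative assumptions for exposition only and do not reproduce the models described in the paper.

```python
import torch
import torch.nn as nn

class DetectorNet(nn.Module):
    """Stage 1 (illustrative): propose rough locations of markers in the image."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 5, 1)  # objectness score + box offsets per cell

    def forward(self, img):
        return self.head(self.backbone(img))

class CornerRefineNet(nn.Module):
    """Stage 2 (illustrative): refine the four corner positions of a marker crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 8)
        )

    def forward(self, crop):
        return self.net(crop)  # (x, y) for each of the 4 corners

class DecoderNet(nn.Module):
    """Stage 3 (illustrative): read the marker's binary code from a rectified crop."""
    def __init__(self, bits=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(1 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, bits)
        )

    def forward(self, rectified):
        return torch.sigmoid(self.net(rectified))  # probability of each bit being 1

# Wire the stages together on dummy tensors standing in for crop/rectify steps.
img = torch.rand(1, 3, 256, 256)
detections = DetectorNet()(img)
corners = CornerRefineNet()(torch.rand(1, 3, 64, 64))   # crop around one detection
bits = DecoderNet()(torch.rand(1, 1, 32, 32))            # perspective-rectified marker
print(detections.shape, corners.shape, bits.shape)
```

Here the stages are simply chained on random tensors to show how detections feed corner refinement, which in turn feeds decoding; in the published system each stage is trained on the dataset described below.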

This is the first time a complete solution has been provided for this problem.

There have been many attempts to improve the process, for example by increasing speed under optimal lighting conditions, but the problem of low light, or of heavy shadows, had not been fully addressed.

Manuel J. Marín, Full Professor, University of Córdoba

How to Train the Machine Vision Model

To train this model, which provides an end-to-end solution, the researchers built a synthetic dataset that realistically reproduces the kinds of lighting conditions that can occur when working with a marker system in less-than-ideal environments.
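As an illustration of how such synthetic lighting variation could be produced, the sketch below renders a clean ArUco marker and then darkens, shades, and adds noise to it. The degradation parameters and the use of OpenCV's marker generator (version 4.7+) are assumptions for demonstration, not the authors' actual rendering pipeline.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Render a clean 6x6 ArUco marker (ID 23) at 128x128 pixels.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
marker = cv2.aruco.generateImageMarker(dictionary, 23, 128)

def degrade_lighting(img, rng):
    """Apply an illustrative mix of low light, a shadow gradient, and sensor noise."""
    img = img.astype(np.float32) / 255.0
    img *= rng.uniform(0.05, 0.4)                        # global dimming
    ramp = np.linspace(rng.uniform(0.3, 1.0), 1.0, img.shape[1])
    img *= ramp[None, :]                                  # horizontal shadow gradient
    img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)  # noise
    return (img * 255).astype(np.uint8)

# Generate a handful of darkened, shadowed training samples.
samples = [degrade_lighting(marker, rng) for _ in range(5)]
cv2.imwrite("synthetic_dark_marker_0.png", samples[0])
```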

Once trained, “the model was tested with real-world data, some produced here internally and others as references from other previous works,” the researchers indicated.

Both the synthetic data used to train the model and the real-world data captured under adverse lighting conditions have been made available.

The system could therefore be used today, “since the code has been released and it has been made possible to test the code with any image in which fiducial markers appear,” noted Rafael Muñoz.

Thanks to this study, machine vision applications have surmounted a new challenge: moving in the dark.

Journal Reference:

Berral-Soler, R., et al. (2024) DeepArUco++: Improved detection of square fiducial markers in challenging lighting conditions. Image and Vision Computing. doi.org/10.1016/j.imavis.2024.105313
