AI Masters Complex Forestry Machine in Groundbreaking Trial

Scientists at Umeå University have developed an artificial intelligence that can drive a 16-ton forest machine without human assistance. The study was conducted in cooperation with Algoryx Simulation and Skogforsk and published in the journals Robotics and Autonomous Systems and Robotics and Automation Letters.

Viktor Wiberg, researcher at Algoryx Simulation and former doctoral student at Umeå University. Image Credit: Viktor Wiberg

AI robot control requires large volumes of training data, which is expensive and dangerous to collect with large, heavy machinery. Pre-training in a simulated environment can resolve this, but some disparity with reality always remains.

A research study conducted at Umeå University now shows that this challenge can be overcome even for large and complex systems. The first successful trials were conducted at Skogforsk's test site in Jälla, outside Uppsala. An AI was tested on its ability to steer a heavy forest machine, avoid obstacles, and stick to a predetermined path. The supercomputer at Umeå University had been used to train the AI in advance through several million training steps.

The results show that it is possible to transfer AI control to a physical forest machine after first training it in a simulated environment.

Viktor Wiberg, Researcher, Algoryx Simulation, Umeå University

Wiberg’s doctoral thesis at Umeå University forms the basis of the work.

This is the first instance of autonomous control of a complex machine like a forestry machine using artificial intelligence.

The AI Needs to be Trained in a Virtual Environment

The "deep reinforcement learning" AI technique has shown remarkable prowess in managing intricate systems beyond human capability. However, its achievements have been confined to digital systems or compact robots. Heavy machinery used in forestry, mining, and construction poses a greater challenge: its intricate mechanical structure, often integrating hydraulics, complicates control.
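The trial-and-error principle behind reinforcement learning can be illustrated with a heavily simplified sketch: a derivative-free policy search on an invented one-dimensional steering task. The dynamics, reward, and search method here are illustrative stand-ins for the deep neural networks and full machine physics used in the actual study, not the researchers' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta, steps=50):
    """Total reward for one simulated episode under steering gain `theta`."""
    x = 1.0                       # start one unit off the planned path
    total = 0.0
    for _ in range(steps):
        a = theta * x             # linear steering policy
        x = x + 0.1 * a           # toy dynamics: steering shifts the offset
        total -= x * x            # reward: penalize deviation from the path
    return total

# Derivative-free policy search: perturb the gain, keep improvements.
# Deep RL replaces the single gain with a neural network and uses
# gradient-based updates, but the learn-from-reward loop is the same.
theta, best = 0.0, episode_return(0.0)
for _ in range(300):
    cand = theta + rng.normal(0, 0.2)   # try a nearby policy
    r = episode_return(cand)
    if r > best:                        # keep it only if reward improves
        theta, best = cand, r
```

After the loop, `theta` has drifted toward a negative gain that steers the machine back onto the path, purely from reward feedback.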

In addition, it is costly and dangerous to experimentally produce the amount of training data required to train AI models that can handle all conceivable situations.

Martin Servin, Associate Professor, Department of Physics, Umeå University

These factors lead to a large amount of research and development occurring in virtual training environments, which are similar to the simulators that have long been used to train human machine operators. The virtual environment is based on a physics simulation that accurately calculates machine dynamics and their interactions with logs and terrain.
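Such training environments typically expose a reset/step interface in the style of the Gym toolkit. The class below is a hedged sketch of that interface only; the state variables, dynamics, and reward terms are invented for illustration and do not reflect the study's actual simulator, which computes full machine dynamics and terrain contacts.

```python
import numpy as np

class ForwarderEnv:
    """Toy stand-in for a simulated forest machine following a path."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.offset = 0.0      # lateral distance from the planned path
        self.heading = 0.0     # heading error, radians

    def reset(self):
        """Start a new episode with a randomized initial pose."""
        self.offset = self.rng.normal(0, 0.5)
        self.heading = self.rng.normal(0, 0.1)
        return np.array([self.offset, self.heading])

    def step(self, steer):
        """Apply one steering action; return observation, reward, done."""
        # Simplified kinematics: steering changes the heading, and the
        # heading error accumulates into lateral offset.
        self.heading += 0.1 * float(steer)
        self.offset += 0.2 * self.heading
        obs = np.array([self.offset, self.heading])
        reward = -(self.offset ** 2 + 0.1 * float(steer) ** 2)
        done = abs(self.offset) > 3.0   # drifted off the path: episode ends
        return obs, reward, done

env = ForwarderEnv()
obs = env.reset()
for _ in range(20):
    obs, r, done = env.step(-obs[0])   # naive proportional steering
    if done:
        break
```

A learning algorithm only ever interacts with `reset` and `step`, which is what makes it cheap to run millions of training steps against the simulator before ever touching hardware.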

The "Reality Gap" Can Be Bridged

An AI model can quickly investigate a wide range of causal relationships between scenarios, actions, and results in a digital simulation.

Martin Servin said, “In a virtual environment, the training takes place without risk of injury and without fuel consumption.”

However high the level of realism in the physics models that power the simulations, reality always deviates from them to some degree. This so-called "reality gap" is one of the biggest challenges in transferring a pre-trained model to a physical machine: the AI might take unanticipated and undesirable actions.
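One common technique for narrowing a sim-to-real gap (though the article does not state which techniques this study used) is domain randomization: physical parameters are varied between training episodes so the policy cannot overfit to one simulator configuration. A minimal sketch, with illustrative parameter names and ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sim_params():
    """Draw randomized physics parameters for one training episode.
    The quantities and ranges are illustrative; a real setup might
    randomize terrain friction, payload mass, actuator delays, etc."""
    return {
        "mass": rng.uniform(14_000, 18_000),     # kg, around the 16-t machine
        "friction": rng.uniform(0.3, 0.9),       # ground-contact coefficient
        "actuator_delay": rng.integers(0, 4),    # delay in control steps
        "sensor_noise": rng.uniform(0.0, 0.05),  # observation noise std
    }

# Each episode is trained against a differently configured simulator,
# so the learned policy must be robust across the spread of parameters
# rather than exploiting the quirks of a single instance.
episodes = [sample_sim_params() for _ in range(5)]
```

A policy that performs well across the whole randomized family is more likely to tolerate the one configuration that was never simulated: the real machine.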

Until now, it has been unclear how much of a barrier the reality gap poses for large, intricate machines. The Umeå University research study, however, demonstrates that the divide is surmountable.

It is impressive that it actually worked. It was clear how the AI performed better and better with each trial.

Tobias Semberg, Engineer, Skogforsk Troëdsson Forestry Teleoperation Lab

The study will be presented at the International Congress for Forest Research (IUFRO) in Stockholm.

Journal References:

Wiberg, V., et al. (2024) Sim-to-real transfer of active suspension control using deep reinforcement learning. Robotics and Autonomous Systems.

Wiberg, V., et al. (2022) Control of rough terrain vehicles using deep reinforcement learning. Robotics and Automation Letters. doi: 10.1109/LRA.2021.3126904
