In the Industry 5.0 era, ensuring collision safety in human–robot collaboration has become essential, as cobots are increasingly sharing workspaces with humans.
While productivity was the main motivation for robot adoption, the growing physical proximity between humans and machines has increased the focus on safety.
Surveys show that 70% of manufacturers prioritize safety when adopting cobots, and 78% of workers feel more comfortable when robots include power- and force-limiting (PFL) features. Together, these findings highlight safety as a central factor in both technology adoption and broader organizational decision-making.
To address these concerns, international standards like ISO 10218 and ISO/TS 15066:2016 define safety requirements, including limits on permissible impact energy and force during human–robot contact.
Many cobots incorporate passive safety mechanisms, such as PFL, which reduce torque and speed during collisions. While these features help mitigate impact severity, they cannot fully prevent collisions.
As a result, active safety measures, such as real-time collision detection and immediate response, are critical.
A structured safety approach is therefore needed, encompassing collision prediction, pre-collision avoidance strategies, collision event handling, and post-collision actions to support effective human–robot collaboration.1,2
Collision Detection
In collaborative robotics, collision safety is enhanced through effective detection methods that identify contact at various points on the robot. Model-based, sensorless techniques detect collisions by monitoring deviations between expected robot dynamics and actual joint torques or motor currents. These approaches offer real-time responsiveness without requiring additional hardware, although their sensitivity can decrease due to payload variations or friction.
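The residual-monitoring idea behind model-based, sensorless detection can be sketched in a few lines. This is an illustrative example, not code from the cited studies; the joint values and the 5 Nm threshold are arbitrary placeholders, and a real controller would compute the expected torques from a full dynamic model.

```python
# Illustrative sketch: model-based, sensorless collision detection by
# monitoring the residual between the torque a dynamic model predicts
# and the torque each joint actually reports.

def detect_collision(expected_torques, measured_torques, threshold=5.0):
    """Return per-joint collision flags based on torque residuals (Nm)."""
    return [abs(m - e) > threshold
            for e, m in zip(expected_torques, measured_torques)]

# Nominal motion: measured torque tracks the model closely, no flags raised.
print(detect_collision([12.0, 8.5, 3.1], [12.4, 8.2, 3.0]))
# Contact at joint 2: its residual jumps well above the threshold.
print(detect_collision([12.0, 8.5, 3.1], [12.4, 16.9, 3.0]))
```

In practice, the threshold must be tuned per joint, since payload variations and friction (as noted above) inflate the residual even without contact.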
PFL strategies further improve safety by capping torque and speed in accordance with ISO/TS 15066; however, they do not actively detect impacts. Meanwhile, intelligent, learning-based techniques leverage machine learning and sensor data to detect, classify, and interpret collisions with a high degree of accuracy.1
Advancing Collision Detection with Intelligent and Hybrid Approaches
A paper recently published in IEEE Access proposed an intelligent deep learning framework for robot–human collision detection and identification of the specific human body part involved in a collision, using only internal robot sensors. The proposed system relied on joint torque sensors and tool center point data, eliminating the need for external sensing.
To handle class imbalance, researchers introduced a hybrid time-series augmentation method combining a furthest-neighbor algorithm-based synthetic minority oversampling technique and a time-series generative adversarial network. The framework integrated a multi-task neural network with temporal convolution, bidirectional long short-term memory (LSTM), and attention mechanisms to perform collision detection and collision location classification.
Experimental validation with an industrial robot and a humanoid dummy showed high performance, achieving 99.5% detection accuracy and strong classification results. The approach demonstrated an effective, real-time, sensorless safety solution aligned with Industry 5.0 goals.1
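The oversampling step can be illustrated with a minimal, interpolation-based sketch. Note that this is only the classic SMOTE-style idea; the paper's actual method uses a furthest-neighbor variant combined with a time-series GAN, neither of which is reproduced here.

```python
import random

def oversample_by_interpolation(minority_sequences, n_new, seed=0):
    """Toy SMOTE-style augmentation: synthesize new minority-class
    sequences by linearly interpolating between pairs of real sequences.
    (The cited paper additionally uses furthest-neighbour selection and
    a time-series GAN, which this sketch omits.)"""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority_sequences, 2)  # pick two real parents
        lam = rng.random()                        # interpolation factor
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic
```

Because each synthetic sequence lies between two real ones, the augmented set stays inside the minority class's region of the feature space, which is what makes this family of techniques useful for rebalancing rare collision events.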
Another paper published in Robotics and Computer-Integrated Manufacturing introduced a collision detection approach for collaborative assembly operations involving high-payload robots. Researchers proposed using an end-effector force/torque sensor combined with motor current measurements to create a reliable and redundant detection system compliant with ISO/TS 15066:2016. A linear model was analyzed to guide the design of the detection algorithm and to understand the influence of higher payloads on system behavior. This work addressed safety challenges in human–robot collaboration, particularly where large payloads increase inertia and introduce complexities such as internal oscillations and low-pass filtering effects on external forces, all of which make accurate and timely collision detection difficult.3
Building on this, researchers introduced a set of system requirements, a contact model, and a collision detector. The contact model was used to examine how high-payload conditions influence detection performance. Experimental validation on a high-payload robot demonstrated the method’s effectiveness across a range of motions and payload scenarios.
The results highlighted how detection strategy, robot pose, and stopping mechanisms affect both peak and steady-state collision forces. Researchers also found that PFL approaches offered clear advantages over speed and separation monitoring, particularly in terms of cycle efficiency and operator comfort.
Overall, the study concluded that fine-tuning detection parameters to suit specific applications can further enhance both safety and performance in industrial human–robot collaboration.3
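A redundant two-channel detector of the kind described above can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the filter constant and the force/current limits are placeholder values, and OR logic is used so that either channel alone can trigger a stop if the other fails.

```python
def lowpass(samples, alpha=0.3):
    """First-order exponential low-pass filter, a stand-in for the
    smoothing needed to suppress internal oscillations of a high-payload arm."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def redundant_detect(ft_force, motor_current, f_limit=60.0, i_limit=4.0):
    """Flag a collision if EITHER the filtered end-effector force (N) or
    the filtered motor current (A) exceeds its limit; OR logic keeps the
    system sensitive even if one sensing channel degrades."""
    f = lowpass(ft_force)
    i = lowpass(motor_current)
    return any(v > f_limit for v in f) or any(v > i_limit for v in i)
```

The filtering illustrates the trade-off the study discusses: smoothing rejects oscillation noise but delays the moment the filtered signal crosses its limit, so the filter constant directly shapes detection latency.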
Collision Avoidance
Collision avoidance, a crucial aspect of cobot safety, focuses on preventing contact before it occurs. Various methods detect obstacles and human presence in the workspace. These include predicting human motion and assessing collision risks, along with manipulator-based strategies that depend on robot positioning.
Contact-based approaches use capacitive and torque sensors to detect forces acting on the robot, while non-contact methods rely on vision systems. Cameras and RGB-D sensors track human motion by analyzing skeletal models and using machine learning to predict movement. Object detection can be improved further through multiple cameras and color-based filtering. For greater accuracy, wearable devices such as inertial measurement units (IMUs) can track limb positions after calibration. Regardless of the method used, high data acquisition rates of around 100 Hz are critical for real-time responsiveness.4
A paper published in Computers & Industrial Engineering presented a three-dimensional (3D) collision avoidance strategy for collaborative robots, which was validated experimentally. The approach was based on defining tangent planes to the sphere-swept volumes (SSVs) of the robot and human operator, enabling the cobot to plan safe trajectories in real time. A key novelty was the integration of a marker-less motion capture system to track the operator's position within the collaborative environment, allowing accurate, dynamic adaptation of robot movements without external markers. Researchers also investigated the impact of the collaboration area on system performance, particularly the interference between the robot and human operator. By varying the collaboration area, they analyzed its effects on the collaboration parameter and on the overall makespan.5
Results showed that increasing the collaboration area could reduce performance due to higher interference. However, implementing the collision avoidance strategy mitigated these effects by decreasing the required number of emergency stops. Researchers validated the strategy through simulation and experimental tests in a real collaborative cell, including the human operator, the cobot, and the motion capture camera. They concluded that the proposed 3D collision avoidance method effectively improved safety while maintaining productivity, indicating the advantages of combining real-time tracking with advanced trajectory planning in collaborative robotic environments.5
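The core geometric test behind SSV-based avoidance can be sketched in a few lines: model a robot link as a segment with a safety radius and part of the operator as a bounding sphere, then check the clearance between them. This is an illustrative simplification of the cited method, with made-up dimensions; the actual strategy additionally builds tangent planes and replans trajectories.

```python
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on the 3D segment from a to b (points as tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        sum(ap[i] * ab[i] for i in range(3)) / denom))
    return tuple(a[i] + t * ab[i] for i in range(3))

def ssv_clearance(link_a, link_b, link_radius, human_center, human_radius):
    """Clearance between a sphere-swept link (segment + radius) and a
    sphere bounding part of the operator; a negative value means overlap."""
    q = closest_point_on_segment(human_center, link_a, link_b)
    return math.dist(q, human_center) - link_radius - human_radius
```

When the clearance for any link drops below a safety margin, the planner would deflect the trajectory away from the operator rather than stop, which is what allows avoidance to reduce emergency stops in the experiments above.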
Another paper published in IFAC-PapersOnLine presented a collision avoidance strategy for robots, validated in an experimental setup. The method relied on geometric conditions, allowing the robot to maintain speed while altering its motion along tangent lines between bounding volumes to efficiently avoid obstacles. Researchers aimed to provide a fast, real-time collision avoidance approach, investigate the effect of the collaboration area on system performance, and compare passive versus active avoidance strategies. Results showed that increasing the collaboration area led to more interference between resources, reducing the collaboration parameter (c%). Implementing an active collision avoidance strategy decreased idle time and mitigated the c% reduction. The work also confirmed that lower collaboration levels increased the overall makespan (T), indicating the trade-off between safety, collaboration, and productivity.6
Why We Need Both
In cobot safety, collision detection and collision avoidance together form a layered safety approach. Collision detection focuses on identifying contact after it occurs, using methods like sensorless torque monitoring, force/torque sensors, or intelligent learning-based frameworks. These approaches enable real-time responses that reduce impact severity, especially when payloads or robot dynamics complicate detection. Conversely, collision avoidance prevents contact entirely through predictive strategies, vision systems, motion capture, and trajectory planning. Real-time human movement tracking and adaptive path planning ensure that the cobot operates safely within shared workspaces. Relying on one method alone is insufficient: passive mechanisms like PFL mitigate impacts but cannot prevent collisions, while avoidance strategies alone may fail in unpredictable scenarios. Integrating detection and avoidance provides a redundant, complementary safety framework, improving human comfort and productivity in dynamic Industry 5.0 environments.1-6
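The layered logic can be summarized in a small decision sketch. This is a hypothetical illustration of how the two layers might be prioritized, with placeholder thresholds; real safety controllers are certified, hard real-time systems.

```python
def safety_supervisor(clearance, torque_residual,
                      clearance_margin=0.1, residual_limit=5.0):
    """Layered decision: react first to detected contact, then to
    predicted contact, otherwise continue the planned motion.
    clearance is in metres, torque_residual in Nm (placeholder units)."""
    if torque_residual > residual_limit:
        return "stop"      # reactive layer: a collision has been detected
    if clearance < clearance_margin:
        return "replan"    # proactive layer: avoid the predicted contact
    return "continue"
```

The ordering matters: detection outranks avoidance because a confirmed contact demands an immediate stop, while a shrinking clearance can still be handled by replanning without halting production.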
The Future of Cobot Safety
Ensuring cobot safety requires a holistic approach that balances both proactive and reactive strategies. By integrating advanced sensing technologies, intelligent algorithms, and adaptive planning, human–robot interactions can remain secure while maintaining efficiency. This balanced approach supports trust, sustains operational continuity, and enables seamless collaboration within modern industrial environments.
References and Further Reading
- Yoon, J., & Lee, Z. (2026). Collision Detection and Body-Part Classification in Collaborative Robots Using a Multi-Task Deep Learning Model With Hybrid Time-Series Augmentation. IEEE Access, 14, 9758-9777. DOI: 10.1109/ACCESS.2026.3654108, https://ieeexplore.ieee.org/abstract/document/11352872
- Czubenko, M., & Kowalczuk, Z. (2021). A simple neural network for collision detection of collaborative robots. Sensors, 21(12), 4235. DOI: 10.3390/s21124235, https://www.mdpi.com/1424-8220/21/12/4235
- Katsampiris-Salgado, K. et al. (2024). Collision detection for collaborative assembly operations on high-payload robots. Robotics and Computer-Integrated Manufacturing, 87, 102708. DOI: 10.1016/j.rcim.2023.102708, https://www.sciencedirect.com/science/article/pii/S0736584523001837
- Neri, F., Forlini, M., Scoccia, C., Palmieri, G., & Callegari, M. (2023). Experimental evaluation of collision avoidance techniques for collaborative robots. Applied Sciences, 13(5), 2944. DOI: 10.3390/app13052944, https://www.mdpi.com/2076-3417/13/5/2944
- Boschetti, G., Faccio, M., Granata, I., & Minto, R. (2023). 3D collision avoidance strategy and performance evaluation for human–robot collaborative systems. Computers & Industrial Engineering, 179, 109225. DOI: 10.1016/j.cie.2023.109225, https://www.sciencedirect.com/science/article/pii/S0360835223002498
- Boschetti, G., Bottin, M., Faccio, M., Maretto, L., & Minto, R. (2022). The influence of collision avoidance strategies on human-robot collaborative systems. IFAC-PapersOnLine, 55(2), 301-306. DOI: 10.1016/j.ifacol.2022.04.210, https://www.sciencedirect.com/science/article/pii/S2405896322002117