Using a geometric framework they call artificial spacetimes, the team showed that you can guide microrobots by shaping light intensity fields without the need for complex software or real-time feedback.
Borrowing ideas from general relativity and optics, this approach turns navigation into a physics problem. And it works. Real-world experiments confirmed that these static light fields could steer microrobots around obstacles and through tight spaces with precision - all without the robots needing to think.
Background
Microrobots are promising tools for everything from targeted drug delivery to microscale manufacturing. But there’s a catch: they’re tiny. That means they can’t carry much onboard computing power, if any at all.
So how do you get a swarm of these machines to go where you want them to go? One popular idea is reactive control, where robots simply respond to environmental cues like light or chemical gradients rather than following detailed commands. It’s simple and efficient, but up until now, it’s been too limited.
Most reactive systems only enable basic behaviors like moving toward a light source. And they come with problems such as a lack of predictability, difficulty avoiding obstacles, and no guarantees that the robot will actually reach its goal.
That’s where this new research is hoping to change things. Instead of trying to program intelligence into each robot, the team flipped the problem. What if the environment - not the robot - does all the work? What if the robot just follows light like a photon, and you design the “spacetime” around it to guide its path?
Turning Microrobots Into Light Rays
The key insight in this work is a surprisingly simple and elegant robot: with two motors, each driven by the amount of light its side receives, the robot behaves much like a beam of light traveling through curved space.
Formally, the researchers showed that the robot’s motion follows the same mathematical rules - geodesics - that describe how light moves in general relativity. They created a metric tensor, a kind of mathematical map that defines how space is curved, based on the robot’s speed and the light intensity. When they derived the path the robot would take using this setup, it matched exactly with how the robot actually moves.
In other words, control the light field and you control the robot - just as you'd steer a beam of light with a lens. Only in this case, the "lens" is a carefully calculated intensity map projected onto the robot's environment.
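To make the analogy concrete, here is a minimal simulation sketch - not the authors' exact dynamics - of a two-motor robot whose wheel speeds depend on the light sampled at each side of its body. With cross-coupled wiring, it bends toward bright regions the way a ray bends in a graded-index medium. The field shape, gains, and wiring below are all illustrative assumptions.

```python
import math

def intensity(x, y):
    # Hypothetical static light field, brightest along the line y = 0.
    return 1.0 / (1.0 + y * y)

def step(x, y, theta, base=1.0, gain=1.0, track=0.1, dt=0.01):
    # Sample the field at each side of the robot body.
    lx, ly = x - 0.5 * track * math.sin(theta), y + 0.5 * track * math.cos(theta)
    rx, ry = x + 0.5 * track * math.sin(theta), y - 0.5 * track * math.cos(theta)
    # Cross-coupled (Braitenberg-style) wiring: the motor opposite the
    # brighter side runs faster, so the robot turns toward the light.
    v_left = base + gain * intensity(rx, ry)
    v_right = base + gain * intensity(lx, ly)
    v = 0.5 * (v_left + v_right)        # forward speed
    omega = (v_right - v_left) / track  # turning rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Released off-axis, the robot bends toward the bright line and
# oscillates around it, like a ray confined in a graded-index fiber.
x, y, theta = 0.0, -1.0, 0.0
for _ in range(1500):
    x, y, theta = step(x, y, theta)
```

No onboard computation is involved: the turning behavior emerges entirely from how the static field varies across the robot's body.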
How They Did It
To design these control fields, the team used techniques from transformation optics (the same physics behind invisibility cloaks and high-end optical lenses). They reversed known lens profiles like GRIN and Eaton lenses to create light fields that would pull or steer the robots along desired paths.
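For context, the classic lens profiles named above have well-known closed-form index distributions, sketched below. How the authors convert an index profile into a projected brightness map is specific to their method, so treat these formulas only as the standard optics starting point.

```python
import math

def grin_index(r, n0=1.5, g=0.5):
    # Standard parabolic GRIN profile: n(r) = n0 * (1 - (g*r)^2 / 2).
    # Rays oscillate sinusoidally about the axis, as in a GRIN rod lens.
    return n0 * (1.0 - 0.5 * (g * r) ** 2)

def eaton_index(r, radius=1.0):
    # Eaton lens: n(r) = sqrt(2*R/r - 1) for 0 < r <= R.
    # Every ray entering the lens is turned back by 180 degrees.
    return math.sqrt(2.0 * radius / r - 1.0)
```

Evaluating either profile over a grid gives the kind of radially varying map that, in this scheme, becomes a static grayscale field steering the robot instead of bending light.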
For complex environments, like mazes or obstacle courses, they used conformal transformations. This involves mapping a difficult real-world space into a simple virtual one where control is easy, then mapping it back. The result is a custom-designed light field that naturally guides the robot through tight spaces and around hazards.
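The recipe can be sketched with the standard 2D transformation-optics rule: if a conformal map w = f(z) carries the physical plane into the simple virtual plane, the refractive index - and, in this scheme, the guiding field derived from it - picks up the factor |f'(z)|. The specific map f(z) = z² below is an illustrative choice, not one taken from the paper.

```python
def n_physical(z):
    # Virtual space: uniform index 1.0, where paths are straight lines.
    n_virtual = 1.0
    # Illustrative conformal map f(z) = z^2, so f'(z) = 2z.
    # 2D conformal transformation optics: n'(z) = n(f(z)) * |f'(z)|.
    return n_virtual * abs(2 * z)

# A straight path in the virtual plane pulls back to a curved path in the
# physical plane; the index profile above is what encodes that bending.
```

The practical payoff is that the hard part - routing around obstacles - is solved once in the easy virtual space, and the conformal map does the rest.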
They tested this setup using silicon microrobots, built using photovoltaic techniques and propelled by electrokinetic motors. The robots were placed in a lab setup where a projector beamed calculated grayscale intensity fields onto their workspace. These fields weren’t dynamic or interactive; instead, they were completely static. But they worked. The robots responded to the light just as predicted, navigating with high accuracy and no need for processing on their part.
They even extended the system to three dimensions, showing that the same physics-based control works when steering robots through full 3D environments by preserving heading direction using cross-product-based equations.
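The paper's 3D equations aren't reproduced here, but the role of a cross product in preserving heading can be sketched: if the unit heading h is rotated with angular velocity ω = h × ∇I (with ∇I the light gradient), the update dh = ω × h is always perpendicular to h, so the heading tips toward the gradient without changing length. All names and gains below are illustrative.

```python
def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def steer(h, grad, k=1.0, dt=0.1):
    # Rotation axis perpendicular to both heading and light gradient.
    omega = tuple(k * c for c in cross(h, grad))
    # dh = omega x h is perpendicular to h, so |h| is (to first order)
    # preserved; renormalizing removes the small integration drift.
    dh = cross(omega, h)
    h_new = tuple(hi + dt * di for hi, di in zip(h, dh))
    norm = sum(c * c for c in h_new) ** 0.5
    return tuple(c / norm for c in h_new)
```

For example, a robot heading along x in a field brightening along z tips its heading upward toward z while remaining a unit vector.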
What They Found
The experiments and simulations showed that this geometric framework works remarkably well. The researchers were able to:
- Navigate mazes using only static light patterns
- Achieve fine steering control. The robots could diverge based on heading changes as small as 0.06°
- Perform sharp turns, such as 90° rotations, with high precision
- Combine motion primitives, like chaining two 90° turns to make a 180° turn
The real-world microrobots closely followed the predicted paths from the geometric model. In one example, a projected light field successfully guided a robot to complete a sharp turn of about 82°, despite wide variations in how it started. This kind of robustness is exactly what previous reactive control methods lacked.
What’s more, the system supports composable behaviors. If you need a robot to do something more complex, you simply design a new field by combining simpler ones. The robot doesn’t need to change - it will just follow the light.
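A sketch of the composition idea, assuming primitives are just intensity functions stitched over regions of the workspace - the region split and the placeholder patterns below are invented for illustration, not taken from the paper:

```python
def compose(fields_with_regions):
    # Stitch primitive fields together: each primitive owns one region
    # of the workspace, and the robot crosses from one into the next.
    def field(x, y):
        for in_region, primitive in fields_with_regions:
            if in_region(x, y):
                return primitive(x, y)
        return 0.0  # dark background outside all regions
    return field

# Two hypothetical turn primitives chained across the workspace,
# analogous to chaining two 90-degree turns into a 180-degree turn:
turn_a = lambda x, y: 1.0 / (1.0 + x * x)  # placeholder intensity pattern
turn_b = lambda x, y: 1.0 / (1.0 + y * y)  # placeholder intensity pattern
field = compose([(lambda x, y: x < 0, turn_a),
                 (lambda x, y: x >= 0, turn_b)])
```

The composed field is itself just another static intensity map, so it can be projected exactly like the primitives it was built from.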
A New Way to Think About Control
This approach turns conventional robot control on its head. Rather than designing algorithms to run on the robot, you design a spacetime for it to move through. The robot itself stays simple; all the complexity is offloaded to physics and geometry.
That’s a big deal for resource-limited systems, where every byte of memory and every milliwatt of power counts. It also means you can control many robots at once, with the same field, without needing individual instructions or coordination.
The framework provides formal guarantees - you can prove that, under defined conditions, the robot will do what you want - and it scales naturally to complex environments. And because it’s grounded in well-understood physics, the tools to design and optimize these systems already exist.
Journal Reference
Reinhardt, W. H., & Miskin, M. Z. (2025). Artificial spacetimes for reactive control of resource-limited robots. npj Robotics, 3(1). DOI:10.1038/s44182-025-00058-9. https://www.nature.com/articles/s44182-025-00058-9