Princeton Scientists Use ‘Invisible’ Robots to Bring Virtual Objects Into the Real World

Princeton computer scientists have unveiled a system that makes virtual objects feel real by pairing mixed reality headsets with mobile robots—and then rendering those robots invisible. The result is an experience where users can point, gesture, and watch as digital objects seem to materialize in their physical space, whether it’s a drink placed on a desk or a playful virtual bee delivering a real snack.

Image: Woman wearing VR goggles against a light blue wall. Image Credit: Yuganov Konstantin/Shutterstock.com

Virtual and augmented reality are advancing quickly, but one stubborn gap remains: how to make interactions between the digital and physical worlds feel natural. Current systems often break immersion, reminding users that digital objects are just projections. Parastoo Abtahi and Mohamed Kari, researchers at Princeton, are tackling this head-on. Their work integrates robotics into mixed reality to erase that disconnect, aiming for a world where pixels and matter blend seamlessly.

How the Illusion Works

The heart of the research lies in a system architecture that separates the user’s immersive experience from the physical mechanics working in the background. Wearing a mixed reality headset, a user can make a simple gesture to select a virtual object—a drink, for instance—and place it on a real surface nearby.

What feels like a purely digital action is actually translated into a set of commands for a mobile robot, itself equipped with a headset. By aligning with the digital twin of the room, the robot knows its exact position and can navigate precisely to deliver the physical item to the location chosen by the user.
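The paper's actual implementation is not reproduced here, but the core idea of a shared frame of reference can be illustrated. In this minimal Python sketch (all poses and numbers are hypothetical), both the headset and the robot are localized against the same digital twin, so a point the user picks in the headset's frame can be re-expressed in the robot's frame and used as a navigation goal:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration: headset and robot each know their pose
# in the shared digital-twin frame.
T_twin_from_headset = make_pose(np.eye(3), np.array([1.0, 0.0, 1.5]))
T_twin_from_robot   = make_pose(np.eye(3), np.array([-2.0, 0.0, 0.0]))

# The user places a virtual drink 0.8 m in front of the headset.
p_headset = np.array([0.0, 0.0, 0.8, 1.0])   # homogeneous point

# Express the target in the twin frame, then in the robot's own frame.
p_twin  = T_twin_from_headset @ p_headset
p_robot = np.linalg.inv(T_twin_from_robot) @ p_twin

goal_xy = p_robot[[0, 2]]   # planar navigation goal on the ground plane
print(goal_xy)              # the robot drives here to deliver the real item
```

In practice the transforms would come from the headsets' tracking against the scanned room model rather than fixed values, but the chain of frame conversions is the same.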

What makes this illusion truly convincing is the robot’s complete disappearance from view. The researchers use a rendering technique known as 3D Gaussian splatting to generate a photorealistic, constantly updated digital copy of the space. This allows the system to “paint over” the robot in real time, erasing it from the user’s perspective.
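The compositing step behind that paint-over can be sketched independently of the rendering engine. In this toy Python example (arrays and values are invented for illustration), pixels segmented as belonging to the robot are replaced with the corresponding pixels of the empty room rendered from the Gaussian-splat twin:

```python
import numpy as np

def erase_robot(camera_frame, robot_mask, splat_render):
    """Where the robot mask is set, show the background rendered from
    the Gaussian-splat digital twin instead of the live pixels."""
    out = camera_frame.copy()
    out[robot_mask] = splat_render[robot_mask]
    return out

# Toy 4x4 grayscale "images": the live frame shows the robot (value 255)
# in the lower-right corner; the splat render shows the empty room (value 10).
frame = np.full((4, 4), 10, dtype=np.uint8)
frame[2:, 2:] = 255                      # robot pixels
mask = frame == 255                      # segmentation of the robot
render = np.full((4, 4), 10, dtype=np.uint8)

clean = erase_robot(frame, mask, render)
print(clean.max())   # 10 -> the robot is gone from the user's view
```

The hard parts the researchers solve, segmenting the robot from the user's viewpoint and keeping the splat render photorealistic and current, are abstracted away here; only the final composite is shown.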

As a result, only the outcome is visible: a drink appears on the desk, or a whimsical virtual bee drops off a real bag of chips. The robotic machinery remains unseen, preserving the seamless impression that virtual objects can materialize on demand.

Interaction Techniques and Future Challenges

For the system to feel truly intuitive, communication between the human and computer had to be effortless. Instead of relying on bulky controllers or complex commands, the researchers built an interaction model around simple hand gestures. A user can point at an object across the room and signal for it to move, and the system interprets the intent instantly. This gesture-based approach is key to making the technology feel like an extension of the user’s own will rather than a tool they need to consciously operate—a central part of the team’s vision for making the technology itself “disappear.”
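One common way to implement such pointing selection (not necessarily the team's method) is to cast a ray from the hand and pick the nearest object the ray passes close to. A minimal Python sketch, with hypothetical object positions and a made-up tolerance radius:

```python
import numpy as np

def pick_object(origin, direction, objects, radius=0.15):
    """Return the name of the nearest object whose center lies within
    `radius` of the pointing ray, or None if nothing is hit."""
    d = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, center in objects.items():
        t = np.dot(center - origin, d)           # closest approach along ray
        if t <= 0:
            continue                             # behind the user
        dist = np.linalg.norm(origin + t * d - center)
        if dist <= radius and t < best_t:
            best, best_t = name, t
    return best

objects = {"drink": np.array([0.0, 0.0, 2.0]),
           "chips": np.array([1.0, 0.0, 2.0])}
ray_origin = np.array([0.0, 0.0, 0.0])
ray_dir    = np.array([0.02, 0.0, 1.0])          # roughly straight ahead

print(pick_object(ray_origin, ray_dir, objects))  # -> "drink"
```

A real system would track the hand pose from the headset and test against full object geometry rather than center points, but the ray-casting intent is the same.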

Still, several hurdles stand in the way of broader adoption.

Building the high-fidelity digital twin that powers the illusion remains “somewhat tedious,” requiring every object and surface in the room to be carefully scanned. Automating this step—possibly by having the robot perform the scanning itself—is one of the researchers’ next priorities.

Other challenges include improving gesture recognition for more complex tasks and ensuring the system can operate reliably in messy, unpredictable environments outside the lab. Overcoming these obstacles will be essential to move the technology from an impressive demonstration to a practical tool in everyday settings.

Conclusion

This research marks an important step toward erasing the boundaries between the virtual and physical worlds. By pairing mixed reality with an “invisible” robot, Abtahi and Kari have shown how digital intentions can seamlessly take shape as physical reality, giving users a greater sense of agency and immersion. Challenges in automation and scalability still need to be addressed, but the work clearly demonstrates a promising new direction for human-computer interaction.

Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Nandi, Soham. (2025, September 04). Princeton Scientists Use ‘Invisible’ Robots to Bring Virtual Objects Into the Real World. AZoRobotics. Retrieved on September 04, 2025 from https://www.azorobotics.com/News.aspx?newsID=16166.

