
Smartphone App Allows Users to Program Robots to Execute Routine Tasks

If low-cost robots could be programmed by any factory worker, more factories might actually adopt robotics to boost their workers’ productivity.

A smartphone app allows a user to plan a task for a robot to perform. The robot carries out the task automatically once the phone is loaded onto its docking station. (Image credit: Purdue University/C Design Lab)

The reason is that workers would be freed to take on more varied, higher-level tasks, and factories could produce a greater range of products.

That is the concept behind a prototype smartphone app that was recently developed by researchers at Purdue University. With the help of this novel app, a user can effortlessly program any kind of robot to execute a routine task, like picking up items from one place and delivering them to another. In addition, such a setup could take care of household chores—plants would no longer die because individuals have forgotten to water them.

Researchers at Purdue University will present their findings on the app, known as VRa, on June 23rd at DIS 2019 in San Diego. The Purdue Research Foundation Office of Technology Commercialization has helped patent the platform, and there are plans to make it available for commercial applications.

Smaller companies can’t afford software programmers or expensive mobile robots. We’ve made it to where they can do the programming themselves, dramatically bringing down the costs of building and programming mobile robots.

Karthik Ramani, Donald W. Feddersen Professor, Mechanical Engineering, Purdue University

The new app uses augmented reality (AR) and lets the user either sketch a workflow directly in real space or physically walk the route the robot should take to carry out its tasks. It also provides options for how those tasks should be performed: under a specific time limit, on repeat, or once another machine has completed its job.
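The article does not describe VRa's internal data model, but a workflow of waypoint-based tasks with the trigger options listed above could be represented along these lines. This is a minimal, hypothetical sketch; the `Task`, `Trigger`, and `Workflow` names are assumptions, not Purdue's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional, Tuple

class Trigger(Enum):
    """Hypothetical trigger conditions mirroring the options described."""
    IMMEDIATE = auto()      # run as soon as the phone is docked
    ON_REPEAT = auto()      # repeat the workflow in a loop
    AFTER_MACHINE = auto()  # wait until another machine finishes its job

@dataclass
class Task:
    name: str
    waypoints: List[Tuple[float, float]]   # positions captured by walking the route in AR
    trigger: Trigger = Trigger.IMMEDIATE
    time_limit_s: Optional[float] = None   # optional deadline for the task

@dataclass
class Workflow:
    tasks: List[Task] = field(default_factory=list)

    def add(self, task: Task) -> None:
        self.tasks.append(task)

# A user-authored workflow: pick up items at one station, deliver to another.
wf = Workflow()
wf.add(Task("pick_up", waypoints=[(0.0, 0.0), (2.5, 1.0)]))
wf.add(Task("deliver", waypoints=[(2.5, 1.0), (5.0, 3.0)],
            trigger=Trigger.ON_REPEAT, time_limit_s=120.0))
print(len(wf.tasks))  # 2
```

Once such a workflow is authored on the phone, docking it would hand the task list to the robot's motor controls.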

Once the programming is done, the user drops the phone into a dock attached to the robot. The dock links wirelessly to the robot’s basic motors and controls, although the phone needs to know what type of robot it is “becoming” in order to execute the tasks.

The phone is, in fact, both the brain and eyes for the robot, managing its tasks and navigation.

As long as the phone is in the docking station, it is the robot. Whatever you move about and do is what the robot will do.

Karthik Ramani, Donald W. Feddersen Professor, Mechanical Engineering, Purdue University

To get the robot to perform an activity that involves wireless interaction with another machine or object, the user simply scans that machine’s or object’s QR code while programming. This effectively creates a network of connected devices, a so-called “Internet of Things.”

Once the phone is docked, it (acting as the robot) uses data from the QR code to interact with those objects. The researchers demonstrated this with robots vacuuming, transporting objects, and watering a plant. The user can also monitor the robot remotely via the app and make it start or stop a specific task, such as beginning a 3D-printing job or charging its battery.
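The QR-scanning step described above amounts to registering each scanned device so the docked phone can later command it, for example to start a 3D-print job. The sketch below illustrates that idea only; the QR payload format, `Device` class, and `scan_qr` function are hypothetical, not part of the published system.

```python
from typing import Dict

class Device:
    """A networked machine the robot can command, e.g. a 3D printer."""
    def __init__(self, device_id: str, kind: str):
        self.device_id = device_id
        self.kind = kind
        self.running = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

# Registry of devices the user has scanned while programming.
registry: Dict[str, Device] = {}

def scan_qr(payload: str) -> Device:
    """Parse an assumed QR payload like 'printer-01:3d_printer' and register it."""
    device_id, kind = payload.split(":")
    return registry.setdefault(device_id, Device(device_id, kind))

# During programming, the user scans the printer's QR code...
printer = scan_qr("printer-01:3d_printer")
# ...and once docked, the phone-as-robot can start the job remotely.
printer.start()
print(printer.running)  # True
```

In a real deployment the `start()` call would be a wireless command to the machine rather than a local flag.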

When the phone is docked, the app can also automatically record a video of the run, letting the user play it back to evaluate the workflow.

The app, developed in Ramani’s lab, navigates and interacts with its environment according to the user’s specifications by building on simultaneous localization and mapping (SLAM). Algorithms of this kind are also used in drones and self-driving cars.

We don’t undervalue the human. Our goal is for everyone to be able to program robots, and for humans and robots to collaborate with each other.

Karthik Ramani, Donald W. Feddersen Professor, Mechanical Engineering, Purdue University

Since developing the prototype, Ramani’s laboratory has been testing it in real factory settings to assess user-driven applications. Ultimately, the app is a step toward future “smart” factories, driven by augmented reality and artificial intelligence, that complement and boost workers’ productivity rather than replacing them, Ramani said.

The work aligns with Purdue University’s Giant Leaps celebration, which acknowledges the university’s global advancements in artificial intelligence as part of its 150th anniversary. This is one of the four themes of the year-long celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world problems.

The lab’s continued research is supported by a grant from the National Science Foundation’s Future of Work at the Human-Technology Frontier program, which aims to make it easier for humans to create, program, and collaborate with robots.

Anyone Can Train Their "Internet of Things"

(Video credit: Purdue University)

