When I first saw the video of a Boston Dynamics scientist pushing and kicking the company’s humanoid robot around, my first thought was that the footage would one day be used to justify the robot revolution that enslaves humanity. Then I remembered that Asimov’s laws of robotics would protect us from such an event ever happening… New research suggests that this type of tough love might actually make robots better through so-called “adversity training.”

Presented Nov. 4 at the International Conference on Intelligent Robots and Systems, a study by researchers from the USC Viterbi School of Engineering detailed how adding human adversity to a robot’s training can improve the way it grasps objects much faster than simple repetitive machine-learning training.

PhD student Jiali Duan (left) and Stefanos Nikolaidis (right), an assistant professor of computer science, use reinforcement learning, a technique in which artificial intelligence programs “learn” from repeated experimentation. Image Credit: Haotian Mai.

“This is the first robot learning effort using adversarial human users,” said study co-author Stefanos Nikolaidis, an assistant professor of computer science.  “Picture it like playing a sport: if you’re playing tennis with someone who always lets you win, you won’t get better. Same with robots. If we want them to learn a manipulation task, such as grasping, so they can help people, we need to challenge them.”

Normally, a robot is trained to perform a task by repeating it over and over with minor variations, forcing the system to learn through brute force. With the tough-love method, the system is presented with a different challenge each time, teaching it to adapt and adjust to the kind of unpredictable interference that humans can bring to training.

“The experiment went something like this: in a computer simulation, the robot attempts to grasp an object. The human, at the computer, observes the simulated robot’s grasp. If the grasp is successful, the human tries to snatch the object from the robot’s grasp, using the keyboard to signal direction,” the team wrote in a press release. “Adding this element of challenge helps the robot learn the difference between a weak grasp (say, holding a bottle at the top), versus a firm grasp (holding it in the middle), which makes it much harder for the human adversary to snatch away.”

“It was a bit of a crazy idea,” admits Nikolaidis, but it worked.

The study demonstrated this by showing that when the system was trained with a human adversary, the robot rejected unstable grasps more often and quickly learned more robust grasping techniques, ensuring a grip firm enough that the object could not be slapped away by the human. In experiments, the robot trained against a human adversary achieved a 52 percent success rate when grasping an object; when the human acted as an accomplice instead, the robot succeeded only about 26 percent of the time.

“The robot learned not only how to grasp objects more robustly, but also to succeed more often with new objects in a different orientation, because it has learned a more stable grasp,” said Nikolaidis. “The robot tries to pick up stuff and, if the human tries to disrupt, it leads to more stable grasps. And because it has learned a more stable grasp, it will succeed more often, even if the object is in a different position. In other words, it’s learned to generalize. That’s a big deal.”

Nikolaidis hopes to have a real-world example of this new training system up and running with a commercial robotic arm sometime in 2020, but lots of work will need to take place before that happens. He says that the friction and noise created by the robot’s joints cause issues during training and can skew the data used to refine the robot’s movements. Additionally, a balance between adversarial and traditional training will need to be found before robotic machine learning can reach its full potential.

How do you feel about beating robots into submission to make them perform their tasks more efficiently? I for one think it’s a great idea, and am going to rely fully on Asimov’s laws of robotics to save me from the robotic uprising that is likely to result from this type of training. In all seriousness though, there is massive potential here, and one day we all might train our personal assistant robots by kicking them when they need to perform better.

“I think we’ve just scratched the surface of potential applications of learning via adversarial human games,” said Nikolaidis. “We are excited to explore human-in-the-loop adversarial learning in other tasks as well, such as obstacle avoidance for robotic arms and mobile robots, such as self-driving cars.”

Source: USC.edu