Robots act like squirrels to trick other robots

GEORGIA TECH (US) — Robots have learned how to deceive each other by copying squirrels and birds.

Ronald Arkin, a professor at the Georgia Institute of Technology, and his team found in the biological literature that squirrels gather acorns and store them in specific locations. The animal then patrols the hidden caches, routinely going back and forth to check on them.

When another squirrel shows up, hoping to raid the hiding spots, the hoarding squirrel changes its behavior. Instead of checking on the true locations, it visits empty cache sites, trying to deceive the raider.

Arkin and his PhD student Jaeeun Shim implemented the same strategy into a robotic model and demonstration. The deceptive behaviors worked. The deceiving robot lured the “predator” robot to the false locations, delaying the discovery of the protected resources.
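The strategy described above can be sketched in a few lines of code. This is a hypothetical illustration, not the researchers' actual implementation; the names (`Cache`, `patrol_targets`) and the simple list-filtering logic are assumptions made for clarity.

```python
# Hypothetical sketch of the caching-deception strategy.
# All names and the decision logic are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class Cache:
    location: tuple
    has_resources: bool  # True for a real cache, False for an empty decoy site

def patrol_targets(caches, intruder_detected):
    """Return the cache sites the guarding robot should visit next.

    Normal behavior: patrol the real caches. When an intruder is
    detected, switch to visiting only empty decoy sites, luring the
    intruder away from the protected resources.
    """
    if intruder_detected:
        return [c for c in caches if not c.has_resources]
    return [c for c in caches if c.has_resources]

caches = [Cache((0, 0), True), Cache((5, 2), False), Cache((3, 7), True)]

# No intruder: check on the real caches.
print([c.location for c in patrol_targets(caches, intruder_detected=False)])
# Intruder present: visit only the empty decoy sites.
print([c.location for c in patrol_targets(caches, intruder_detected=True)])
```

The key idea is that the same patrolling routine runs in both modes; only the set of target sites changes, which is what makes the deceptive patrol look convincing to an observer.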

“This application could be used by robots guarding ammunition or supplies on the battlefield,” says Arkin. “If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive.”

Arkin and his student Justin Davis have also created a simulation and demo based on birds that might bluff their way to safety. In Israel, Arabian babblers in danger of being attacked will sometimes join other birds and harass their predator. This mobbing process causes such a commotion that the predator will eventually give up the attack and leave.

Arkin’s team investigated whether a simulated babbler is more likely to survive if it feigns strength it does not actually have. The team’s simulations, based on biological models of dishonesty and the handicap principle, show that deception is the best strategy when the addition of deceitful agents pushes the group to the minimum size required to frustrate the predator enough for it to flee.
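The group-size condition described above can be captured in a toy model. The function name, agent counts, and threshold value below are illustrative assumptions, not parameters from the actual study.

```python
# Hypothetical toy model of the mobbing scenario.
# The threshold and agent counts are illustrative, not from the study.

def predator_flees(honest_mobbers, deceitful_mobbers, threshold=5):
    """The predator gives up only when the apparent mob size reaches
    the minimum group size needed to frustrate it.

    Deceitful agents feign strength: they join the mob and add to its
    apparent size even though they could not actually fight.
    """
    return honest_mobbers + deceitful_mobbers >= threshold

# Three honest birds alone are not enough to drive the predator off...
print(predator_flees(3, 0))   # False
# ...but two bluffers push the group past the threshold.
print(predator_flees(3, 2))   # True
```

In this toy model the deceitful agents carry no real fighting ability, yet their presence changes the outcome for the whole group, which mirrors the finding that bluffing pays off precisely when it tips the group over the critical size.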

He says the reward for deceit in a few of the agents sometimes outweighs the risk of being caught.

“In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself,” says Arkin. “Being honest about the robot’s abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat.”

From the Trojan Horse to D-Day, deception has always played a role during wartime. In fact, there is an entire Army field manual on its use and value on the battlefield. But Arkin is the first to admit that there are serious ethical questions about robots deceiving humans.

“When these research ideas and results leak outside the military domain, significant ethical concerns can arise,” notes Arkin. “We strongly encourage further discussion regarding the pursuit and application of research on deception for robots and intelligent machines.”

The Office of Naval Research funded the research, which is highlighted in the November/December 2012 edition of IEEE Intelligent Systems.

Source: Georgia Tech

You are free to share this article under the Creative Commons Attribution-NoDerivs 3.0 Unported license.
