In 1942, science-fiction writer Isaac Asimov introduced "The Three Laws of Robotics" in his short story "Runaround."
Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law Two: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Law Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws come from the world of science fiction, but the real world is catching up. This month, a law firm gave Pittsburgh's Carnegie Mellon University $10 million to explore the ethics of artificial intelligence — or AI. The gift comes after industry leaders recently joined forces to form the Partnership on Artificial Intelligence to Benefit People and Society.
Peter Kalis is chairman of the law firm K&L Gates. He says technology is dashing ahead of the law, raising questions that were never taken seriously before — such as what happens when you make robots that are smart, independent thinkers, and then try to limit their autonomy.
"One expert said we'll be at a fulcrum point when you give an instruction to your robot to go to work in the morning and it turns around and says, 'I'd rather go to the beach.' Or, more perilously, if we were to launch a robot on the battlefield and all of a sudden it took a more partial liking to the enemy than it did to its human sponsor," Kalis says.
He says that one day we'll want laws to keep our free-thinking robots from running wild — but we'll also have to weigh such laws against the U.S. Constitution.
"It says that every person should benefit from equal protection under the law. Well, I don't think anyone contemplated that person would include an artificially intelligent robot," Kalis says. "Yet I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it's a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law."
With the law firm's gift, Carnegie Mellon President Subra Suresh says the university will be able to dig into issues now emerging within automated industries.
"Take driverless cars," he says. "If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?"
As it is, people can already ride in a driverless car in Pittsburgh, where Uber uses the city as a testing ground for its autonomous vehicles. Suresh says he's familiar with the program, but still has questions as a passenger.
"The mayor of Pittsburgh and I took the inaugural ride a couple of months ago," Suresh tells NPR's Audie Cornish. "We were talking about this, you know, if somebody came and hit us now, are we liable or is somebody else liable? The clarification is not there yet."
The issues go beyond self-driving cars and renegade robots. Inside the next generation of smartphones, in the chips embedded in home appliances, and in the ever-expanding collection of personal data stored in the "cloud," questions about what's right and wrong are open to study.
So are Asimov's three laws of robotics all there is to govern AI right now — and is it necessary to have a moral guideline that everyone can understand?
"I think putting all three laws into one, 'do no harm,' could be the very first one," Suresh says.
He says people today are at "an interesting point in the intersection of humans and technology" — one they don't have any prior experience with.