Ethical Killing Machines?
If you’ve been paying attention to the news lately, one of the things you hear about is machines on the battlefield – or above it. For the most part these machines are controlled remotely by people who make the actual decision to “fire” or not. But increasingly there is interest in machines, call them robots if you like, that will make the “fire or not” decision on their own. These machines will be controlled by software. But just how do you program a machine to act ethically?
In fiction we have long had Isaac Asimov’s “Three Laws of Robotics,” but in real life it’s not that easy. Ronald Arkin, a professor of computer science at Georgia Tech, is working on this problem. He’s not the only one, but you can read about him and some of the related issues in an article titled “Robot warriors will get a guide to ethics.” There are also some links on his web site at Georgia Tech. It’s a tough issue. The ethical questions involved in warfare are tough in and of themselves, but getting a computer to understand, or at least properly process, the inputs and make an “ethical” decision raises the level of complexity still further.
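To see why this is so hard, here is a deliberately naive sketch in Python. Everything in it – the names, the rules, the inputs – is hypothetical on my part, not anything from Arkin’s actual work. It just shows what encoding “fire or not” as hard-coded rules might look like, and where the difficulty really lives:

# A naive, hypothetical sketch of "fire or not" as hard-coded rules.
# This is NOT a real system's logic; it exists to show where the hard
# problems hide.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool      # can software ever be certain of this?
    near_civilians: bool    # perception problem: who counts, how near?
    is_surrendering: bool   # intent is even harder to read than identity

def may_fire(target: Target) -> bool:
    """Permit firing only if every hard-coded constraint is satisfied."""
    if not target.is_combatant:
        return False  # never fire on non-combatants
    if target.is_surrendering:
        return False  # never fire on someone who is surrendering
    if target.near_civilians:
        return False  # avoid collateral harm
    return True

# Example: a combatant standing near civilians is refused.
print(may_fire(Target(is_combatant=True, near_civilians=True,
                      is_surrendering=False)))  # False

The if-statements are trivial. The trouble is that each boolean hides an enormous perception and judgment problem, and there is no obvious point at which such a rule list is complete – which is exactly the complexity the researchers are wrestling with.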
I think this is part of the growing importance of discussing ethics in computer science programs. I know that many undergraduate programs have an ethics course requirement. The master’s program I was in had a required ethics course. But I think we need to start having these discussions in high school (or even earlier). Ethical behavior is something best learned young.
Follow-up: Chad Clites sent me a link to an article called “Plan to teach military robots the rules of war” that relates to this post.
Comments
Anonymous
May 19, 2009
This is very difficult indeed, given that ethical behavior, or at least the perception of what is ethical, is based on individual experience and interpretation. How can one hope to get two developers to agree on a common idea of what is ethical, much less an entire development team? Who then gets to decide? Do we take a utilitarian view and program these 'warriors' to do what is best for the greatest number of people? Who gets to determine that?
Anonymous
May 20, 2009
An advantage of war robots is that we can turn them off (hopefully) when not needed. Once people are programmed for war, they are a little resistant to being turned off. The war programming in people may fade with time, but it is still there.
Anonymous
May 23, 2009
@Alfred: true, soldiers are programmed to act in a specific manner, but humans are capable of breaking the rules when the situation warrants. How does one create rules to tell a robot when to break the rules? Is such a list of rules finite?