Wednesday, July 04, 2007

The ethics of robot soldiers?

Increased roboticization of US military operations is both inevitable and, on balance, a very good idea. (See this general article on robots on the battlefield.) Alongside the things on the immediate horizon, such as robot surveillance and remote sensing, there has been longer-term exploration of robot fighters. In that process, the more sci-fi inclined among us - I include myself - have been thinking about the ethics of robot fighters, if they were made to include independent decisionmaking in at least some circumstances.

There have been some discussions in the academy, and some references to those in the press. The most easily accessible is this short piece in the Economist, "Robot Wars," June 7, 2007, here:

But whereas UAVs and their ground-based equivalents, such as the machinegun-toting Sword robots, are usually controlled by distant human operators, the Pentagon would like to give these robots increasing amounts of autonomy, including the ability to decide when to use lethal force.
To achieve this, Ronald Arkin of the Georgia Institute of Technology, in Atlanta, is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics. In other words, he is trying to create an artificial conscience. Dr Arkin believes that there is another reason for putting robots into battle, which is that they have the potential to act more humanely than people. Stress does not affect a robot's judgment in the way it affects a soldier's.

His approach is to create what he calls a “multidimensional mathematical decision-space of possible behaviour actions”. Based on inputs ranging from radar data and current position to mission status and intelligence feeds, the system would divide the set of all possible actions into those that are ethical and those that are not. If, for example, the drone from which the fatal attack on Atef was launched had sensed that his car was overtaking a school bus, it might then have held fire.

There are comparisons to be drawn between Dr Arkin's work and the famous Three Laws of Robotics drawn up in the 1950s by Isaac Asimov, a science-fiction writer, to govern robot behaviour. But whereas Asimov's laws were intended to prevent robots from harming people in any circumstances, Dr Arkin's are supposed to ensure only that they are not unethically killed.
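The mechanism the Economist describes - dividing a set of candidate actions into ethical and unethical subsets before any use of force - can be made concrete with a toy sketch. To be clear, this is my own illustration, not Arkin's actual system: the action attributes and the constraint rules are invented for the example.

```python
# Toy sketch of an "ethical governor": candidate actions are filtered
# by constraints before any lethal action is permitted. The attributes
# and rules here are hypothetical illustrations, not Arkin's system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lethal: bool
    target_confirmed_combatant: bool
    civilians_in_blast_radius: int

def is_ethical(action: Action) -> bool:
    """Stand-in constraint set: lethal force only against a confirmed
    combatant, and never with civilians in the estimated blast radius."""
    if not action.lethal:
        return True
    return (action.target_confirmed_combatant
            and action.civilians_in_blast_radius == 0)

def permitted_actions(candidates: list[Action]) -> list[Action]:
    # Divide the set of possible actions into ethical and unethical,
    # and return only the ethical subset.
    return [a for a in candidates if is_ethical(a)]

candidates = [
    Action("hold fire", lethal=False,
           target_confirmed_combatant=False, civilians_in_blast_radius=0),
    Action("strike vehicle", lethal=True,
           target_confirmed_combatant=True, civilians_in_blast_radius=0),
    Action("strike vehicle near bus", lethal=True,
           target_confirmed_combatant=True, civilians_in_blast_radius=12),
]

for a in permitted_actions(candidates):
    print(a.name)  # "hold fire" and "strike vehicle"; the bus case is excluded
```

The school-bus example in the excerpt is exactly this kind of filtering: the strike near the bus is excluded from the permitted set, so the system holds fire.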

I have been working on preliminary notes for an essay on this topic, though it is all still very preliminary. The most striking part of the project is that the attempt to translate ethical decisionmaking into machine terms does not seem to me to involve genuinely novel questions of ethics as such. On the contrary, what we seek to do is not to establish novel ethical principles, but to create, or re-create, the hypothetically ideal or perfect ethical decisionmaking and conduct we would imagine for a hypothetically ideal or perfect human soldier - but to do so within a machine, a robot. The problems are problems of translation, not the creation of new problems or new solutions. In that sense, one could say that however interesting or important a task of ethical translation it is, it poses no new tasks in fundamental ethical theory.

And yet, accepting that, there nonetheless remains an area of grave difficulty - not because it represents a problem of ethical decisionmaking different from the human one, but because we do not have an adequately theorized approach to it. I refer to the question of proportionality jus in bello - the balancing of military advantage against damage to noncombatants that is the core judgment (or one of the core judgments) of military ethics and indeed of the laws of war. I can say with a fair amount of authority, having worked on this problem very quietly in my study for the last couple of years, that we have no method of weighing these two that is very defensible as a matter of ethical theory. It may be that the very idea of a "theory" to explain the weighing of what might well be incommensurables is itself the problem; and yet in practice we do it, and accept that we must. The problem, in other words, is not simply how one comes up with a theoretically defensible moral calculus for partly subjective judgments about things that have enough similar properties to count as weighing oranges against oranges. That would be a difficult enough calculus to adapt to a machine, but at least it would be a weighing of similar things.

The much more difficult problem occurs when the things being weighed are, arguably, apples and oranges - both values, in Isaiah Berlin's plurality-of-values sense, but values about very different things that seemingly cannot be weighed against each other, even though, as with many things of value in liberal theory, we must. One might think of Berlin's plurality of values as at once the glory of liberalism and its tragedy. Arguably, such incommensurability is what takes place in attempting to make moral judgments of proportionality jus in bello. Military advantage is shorthand not merely for winning in a narrow military sense, but for the values for which winning is morally, and not just prudentially, important - the moral value of a political community, its survival and its interior values, stability in the external and internal political order, the assertion of moral values such as counter-genocide, and so on. Damage to civilians, on the other hand, while referring in part to more remote and abstract values such as political community, is much more about immediate death and destruction. Although we immediately recognize, in cases where the disproportion is great enough, when one or the other trumps, it is not easy to elaborate a set of decisional rules for valuing them against each other. We can, to be sure, develop a certain practice, in a Wittgensteinian sense - or, for that matter, in a common law lawyer's precedential sense - but that is not really the same as a set of decision rules.
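The observation that we recognize the extreme cases but lack rules for the middle can itself be made explicit. The following is a toy sketch of my own - the scales and thresholds are entirely invented - showing a rule that delivers a verdict only under gross disproportion and otherwise refuses to decide, which is roughly where our theorizing currently stands.

```python
# Toy illustration of the incommensurability problem: rules can be
# stated for the extreme cases, where one value clearly trumps the
# other, but there is no defensible exchange rate for the middle
# ground. The 0-10 scales and thresholds are invented for illustration.

def proportionality_judgment(military_advantage: float,
                             expected_civilian_harm: float) -> str:
    """Both inputs on arbitrary 0-10 scales. Only gross disproportion
    yields a verdict; everything else is left to situated judgment."""
    if expected_civilian_harm == 0:
        return "permissible"
    if military_advantage >= 9 and expected_civilian_harm <= 1:
        return "permissible"      # advantage clearly trumps
    if expected_civilian_harm >= 9 and military_advantage <= 1:
        return "impermissible"    # harm clearly trumps
    return "no decision rule: apples against oranges"

print(proportionality_judgment(9.5, 0.5))  # clear case: permissible
print(proportionality_judgment(5.0, 5.0))  # the hard middle ground
```

The point of the sketch is precisely its gap: a practice, whether Wittgensteinian or precedential, fills in the middle case by situated judgment, and it is that judgment we have no theory for writing down.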

The point about robot soldiers is that this problem reduplicates itself in trying to reproduce a moral calculus at the machine level. It presents a problem, of course - but, in principle at least, exactly the same problem we as humans have in conceptualizing the process of weighing and deciding. It also presents, perhaps, an opportunity - a kind of thought experiment, sci-fi made real: an opportunity to think about how one would seek to operationalize, to make explicit and external, what are otherwise highly intuitive and internal moral evaluations. And it is in this that I find the ethical issue of robot soldiers particularly interesting.

(Notes from a slowly developing draft paper, "Robot Soldiers and the Ethics of Proportionality Jus in Bello." Forthcoming ... someday.)


