(Welcome, Instapunditeers, and thanks Glenn for the link!)
I blogged earlier, here, about important efforts to conceptualize the ethics of robot soldiers. For the first time, armed recon robots have been deployed in a warzone - Iraq. Hat tip Instapundit; from Danger Room/Wired Blogs, here.
This is not really about the topic of my ethics of robot soldiers post. That post was about machines capable of acting independently of human control, and it pointed to very important discussions trying to anticipate how and what the ethical decision making of such independent robots should be. The machines being deployed now to Iraq are not that, and are a long, long way from that - the Iraq-deployed machines are a new version of the already widely deployed SWORDS reconnaissance robot vehicle - new because they have a weapon, a machine gun, added to them. They are remote controlled in real time by humans; these are not machines making programmed decisions about weapons use. (Also, if you are thinking of robots in the I, Robot mode - well, these are more like mobile sleds with a machine gun on top - there is nothing anthropomorphic about them.) The ethical and legal questions raised are not about independent machine decisionmaking and action, but instead about the use of a human-controlled and operated but remote platform. Still, this is a step closer to what seems to me not just a natural but an inevitable step forward in warfare for the world's most highly technological army.
Robots as a response to asymmetric, law of war violating warfare:
It is important to understand that the inevitable move toward robots on the battlefield is not merely driven, as in past conflicts and wars, by material considerations of conservation of personnel, force-to-space ratios, and so on. It is driven as much or more today by moral, legal, and ideological considerations - part of an effort to limit the exposure of one's soldiers when dealing with enemies who will not follow the laws of war with respect to our soldiers. Part of that is obviously the attempt not to get your soldiers killed - but another important part of it is to avoid having your soldiers captured by an enemy that pays attention to the laws of war only when, by loudly appealing to them, it can benefit from them.
The US, for good moral reasons, has given up the possibility of reprisals against civilians or other people hors de combat, such as captured enemy fighters. It has also shown itself unwilling, for not such good reasons, however, to enforce certain important remaining laws of war with regard to abuses by the enemy (such as the US refusing, in its internal rules of engagement, to fire on a mosque being used as an enemy emplacement, despite being allowed to do so under the laws of war). The US therefore finds that it has few or no behavioral levers with respect to the conduct of an enemy fighting by illegal methods. In such a case, one response is to compensate through technology - by limiting the exposure of one's soldiers, in particular to death, injury, or capture, and replacing them with machines.
Will robot soldiers eventually lead to a more "policing" attitude on the battlefield? Might roboticized war be a factor leading, perhaps inadvertently, to fewer decisive engagements and more protracted warfare?
One question we might have is what happens over time if fewer American soldiers were to appear at all on asymmetric battlefields, and if, when they did fall into hostile hands, we gradually came to assume, on the basis of experience, that they would be held hostage under terms hardly meeting the Geneva Conventions, or else beheaded on internet video. It is unlikely that we would respond with war without quarter of our own. On the contrary, part of our technological drive to create and deploy remote fighting machines is precisely to get away from having to enforce a barbarous reciprocity that has otherwise always been thought necessary (the ICRC and HRW and the ICC and all the rest of the modern-day "heralds" of war notwithstanding) in order to deter such actions by the other side and so ensure adherence to the laws of war.
We might conceivably move, in such circumstances, to treat those we captured more as criminal detainees than as something closer to POWs - and to reconceptualize, over the long term, the general categories of detainees in asymmetric warfare. We would, after all, not incline to treat them like POWs, because we would have already long since determined that their behavior was that of an unprivileged belligerent. The category of actual legal POW might even conceivably wither (away?) from disuse. We would assume our people would be abused and/or killed, or else held as hostages or for ransom - much as Israel's soldiers held by Hizbollah, for example. It wouldn't make sense to us to treat unprivileged belligerents as POWs, especially given that our people - who would indeed be entitled to such treatment - would not be so treated. I would guess that we would evolve to treat them as some form of quasi-criminal detainee - I say 'quasi' because we would not typically be able to prove criminality except on the basis of participation in an armed enterprise that, as an enterprise, systematically violated the laws of war, and often not on an individual basis. And 'quasi' also because it would likely have important elements of administrative preventive detention. Of course, we face exactly such issues now, but we have not really resolved them; the widespread deployment of armed robots on the battlefield, however, might constitute one pressure in that direction.
Curiously, however, it wouldn't surprise me, on the current evolution of things, if "battle" turned gradually into some form of particularly violent and contested attempt at "arrest" after a demand for surrender. Warmaking might evolve, at least in the asymmetric urban setting, into battle as a form of "policing." If soldiers were less physically present on the actual battlefield, and if armed machines, manned remotely, dominated it, at least on one side, might there be greater pressure on your military to issue a call to surrender to the fighters on the other side, rather than simply attacking or undertaking ambush or surprise? Whether that would facilitate winning a conflict, as opposed to merely managing it over the long term, is not clear. It might inadvertently create conditions for systematically less decisive engagements - tactical engagements with possibly less collateral damage, but also no victory - which is, of course, the definition of victory for guerrillas in a guerrilla struggle: never win, but also never lose, and finally just outlast the enemy. Whether we would care, if such long-term "managed," never-decisive warfare cost us in treasure, but not especially in blood, is also not clear.
Can robot technology overcome behavioral shifts toward illegal warfare by irregular forces?
The development of remote and robot technologies is driven by a parallel consideration that also arises from moral, legal, and ideological considerations. It is the attempt to create machines that will follow determinate legal rules of engagement, particularly with respect to the combatant-noncombatant distinction - in the face of an enemy, however, that deliberately violates that distinction in its own combat operations. Again, the effort is to find a technological fix for our inability, through our battlefield behavior (such as the reprisals we deliberately and properly don't take), to affect deliberately planned, illegal enemy behavior.
The move to robots is all but inevitable and, in fact, particularly but not only under these circumstances, desirable. I have my doubts, however, that any technological fix can permanently compensate for behavior on the other side. If the nature of arms races is competitive - either a "counter" or a "defensive" move responding to changes in the conduct of war - then we are in a peculiar historical moment in which one side attempts to respond with equipment changes to changes in behavior on the other side. Is it possible for technological ingenuity to beat out determined and evolving bad behavior? I don't know.
Legal liability and robots on the battlefield:
Those deploying armed robots to Iraq for use in the field, remotely controlled, had probably better be prepared for a much greater willingness than currently exists on the part of outside monitors, human rights organizations, outside critics, and so on, to charge illegality, criminal behavior, war crimes, and violations of the law of war in any collateral damage created by these weapons - with charges and accusations against operators as well as commanders. And against the companies that design, build, and sell such weapons.
Why more than in the case of soldiers present on the battlefield? Well, it doesn't necessarily make much sense - the rules of engagement, after all, are presumably exactly the same - but I would bet with pretty high confidence that the deep and not necessarily articulated premise will be that you are more liable for damage caused if you caused it remotely and were not yourself at risk, not being present on the battlefield, operating the robot remotely.
The idea that you yourself are in some fashion at risk on the battlefield - even if not very much, as in the case of a standoff aircraft or tank or what have you - hence giving some compensatory justification to your collateral damage, makes a difference, or anyway will likely make a difference, I would bet, in how these weapons are seen by outside critics. It will seem weird to the military - it will seem very close to claiming that remote operators have an "unfair" advantage and hence are entitled to no otherwise legal collateral damage - and it will not, to the military, seem any different from any other standoff platform such as aircraft or remote artillery. Why should it be?
But I would be willing to bet that it will seem quite different to outside monitors and critics. The two core criticisms will be: (a) you are not putting yourself at risk and hence are not entitled to collateral damage because, notwithstanding that the criterion of collateral damage is "military necessity," not "did I risk myself?", it will somehow seem "unfair" - despite the fact that you are battling an enemy for whom asymmetric warfare via violations of the laws of war is de rigueur.
And (b), the fact that you risk only a robot but risk causing collateral damage in human life means that you should not do anything that risks collateral damage at all. Civilians and even civilian objects, in the lingo of Protocol I, trump any kind of claimed military necessity. This is especially so, it will likely be said, under the ICRC's interpretation of the language of Protocol I referring very narrowly to "concrete military advantage" in the immediate circumstances as the measure of military necessity. That the US has never accepted Protocol I as a treaty and has never accepted that particular interpretation of the customary law rule regarding military necessity - and that many other countries offered reservations and interpretations on that very point when they did join Protocol I - is not likely to count for anything with the critics.
If your definition of military advantage is sufficiently narrow, in other words - so narrow and immediate that it cannot include the necessity of winning this battle, or any particular battle, as part of a larger plan to win a war - then no collateral damage is justifiable if all you risk is some equipment, not lives, on the battlefield.
As I say, this will possibly seem puzzling and quite wrong to the military itself, which operates all kinds of remote platforms for launching weapons - as armies have done at least since the advent of the long-range bow, the catapult, and artillery. But I would urge it to prepare for precisely such criticisms. I would guess this is how the public argument will go, and it might even culminate in someone or some organization calling for indictments against US soldiers for civilian deaths resulting from the use of remotely controlled robots in combat. Or civil lawsuits via the Alien Tort Statute against the companies creating this equipment.
Yet this would be disastrous if it led to the curtailment of these weapons, their development and deployment - disastrous from the standpoint of the long-term integrity of the laws of war in a period in which asymmetric warfare is tending to undermine their very foundations, because reciprocity has been largely lost - and disastrous to the effort to find ways, through technology, of combating an enemy that does not fight by the rules. Unfortunately, that has never been a concern of those who propose to make the rules of war but do not have anything at stake in actually having to fight using them.
(Note on the first two comments. I emphatically do not think that the JAG and those formulating the US position on the laws of war would take the view that I have here attributed to likely outside critics in the human rights or perhaps academic communities - or to countries that, not having any pressing wars to fight, are overly willing to opine on the content of laws in whose outcome they have no stake. On the contrary, I think that the JAG and the US military laws of war lawyers would see this more or less as I suggest above: that these armed battlefield robots are remote platforms like any other, and that in any case military necessity is, at the end of the day, about winning wars. Military necessity does not justify anything and everything, of course, and it rules out many, many things; but it does not mean that a military has any obligation to risk itself or its personnel as a condition of being able to risk otherwise legal collateral damage. But I would be interested in comments from JAG, from current or past serving laws of war lawyers, and others interested in commenting.)
(Update, 9 August 2007, check out this link HT Instapundit from Popular Mechanics. Here.)
Friday, August 03, 2007