Friday, August 03, 2007

Armed robots deployed in Iraq

(Welcome, Instapunditeers, and thanks Glenn for the link!)

I blogged earlier, here, about important efforts to conceptualize the ethics of robot soldiers. For the first time, armed recon robots have been deployed in a warzone - Iraq. Hat tip Instapundit; from Danger Room/Wired Blogs, here.

This is not really about the topic of my ethics of robot soldiers post. That post was about machines with the capability to act independently, independent of human control, and it pointed to very important discussions trying to anticipate how and what the ethical decision making of such independent robots should be. The machines being deployed now to Iraq are not that, and are a long, long way from that - the Iraq-deployed machines are a new version of the already widely deployed SWORDS reconnaissance robot vehicle - new because the new machines have a weapon, a machine gun, added to them. They are remote controlled in real time by humans, not machines making programmed decisions about weapons use. (Also, if you are thinking of robots in the I, Robot mode - well, these are more like mobile sleds with a machine gun on top - they aren't anthropomorphic.) The ethical and legal questions raised are not about independent machine decisionmaking and action, but instead about the use of a human controlled and operated but remote platform. Still, this is a step closer to what seems to me not just a natural, but an inevitable, step forward in warfare for the world's most highly technological army.

Robots as a response to asymmetric, law-of-war-violating warfare:

It is important to understand that the inevitable move toward robots on the battlefield is not merely driven, as in past conflicts and wars, by material considerations of conservation of personnel, force to space ratios, and so on. It is driven as much or more today by moral, legal, and ideological considerations - part of an effort to limit the exposure of one's soldiers when dealing with enemies who will not follow the laws of war with respect to our soldiers. Part of that is obviously the attempt not to get your soldiers killed - but another important part of it is to avoid having your soldiers captured by an enemy that does not pay attention to the laws of war except when, by loudly appealing to it, it can benefit from it.

The US, for good moral reasons, has given up the possibility of reprisals against civilians or other people hors de combat, such as captured enemy fighters. It has also shown itself unwilling, for not such good reasons, however, to enforce certain important remaining laws of war with regards to abuses by the enemy (such as the US refusing, in its internal rules of engagement, to fire on a mosque being used as an enemy emplacement, despite being allowed to do so under the laws of war). The US therefore finds that it has few or no behavioral levers with respect to the behavior of an enemy fighting using illegal methods. In such a case, one response is the attempt to compensate through technology - by limiting the exposure of one's soldiers in particular to death, injury, or capture and replacing them with machines.

Will robot soldiers eventually lead to a more "policing" attitude on the battlefield? Might roboticized war be a factor leading, perhaps inadvertently, to fewer decisive engagements and more protracted warfare?

One question we might have is what happens over time if fewer American soldiers were to appear at all on asymmetric battlefields, and when they did and fell into hostile hands, we gradually came to assume, on the basis of experience, that they would be held hostage under terms hardly meeting the Geneva Conventions or else beheaded on internet video. It is unlikely that we would respond by war without quarter of our own. On the contrary, part of our technological drive to create and deploy remote fighting machines is in order to get away from having to enforce a barbarous reciprocity that has always been thought otherwise necessary (the ICRC and HRW and the ICC and all the rest of the modern day "heralds" of war notwithstanding) in order to deter such actions by the other side and so ensure adherence to the laws of war.

We might conceivably move, in such circumstances, to treat those we captured more as criminal detainees than as something closer to POWs - and to reconceptualize, over the long term, the general categories of detainees in asymmetric warfare. We would after all not incline to treat them like POWs because we would have already long since determined that their behavior was that of an unprivileged belligerent. The category of actual legal POW might even conceivably wither (away?) from disuse. We would assume our people would be abused and/or killed, or else held as hostages or for ransom - much as with Israel's soldiers held by Hizbollah, for example. It wouldn't make sense to us to treat unprivileged belligerents as POWs, especially given that our people - who would indeed be entitled to such treatment - would not be so treated. I would guess that we would evolve to treat them as some form of quasi-criminal detainee - I say 'quasi' because we would not be able typically to prove criminality except on the basis of participation in an armed enterprise that as an enterprise systematically violated the laws of war, and often not on an individual basis. And 'quasi' also because it would likely have important elements of administrative preventive detention. Of course, we face exactly such issues now, but we have not really resolved them; the widespread deployment of armed robots on the battlefield, however, might constitute one pressure in that direction.

Curiously, however, it wouldn't surprise me, on the current evolution of things, if "battle" turned gradually into some form of particularly violent and contested attempt at "arrest" after a demand for surrender. Warmaking might evolve, at least in the asymmetric urban setting, to battle as a form of "policing." As soldiers were less physically present on the actual battlefield, and if you had armed machines dominating the battlefield, manned remotely, at least on one side, might there be greater pressure on your military to call for the fighters on the other side to surrender, for example - issuing a call to surrender, rather than simply attacking or undertaking ambush or surprise? Whether that would facilitate winning a conflict, as opposed to merely managing it over the long term, is not clear. It might inadvertently create conditions for systematically less decisive engagements - tactical engagements with possibly less collateral damage, but also no victory - which is, of course, the definition of victory for guerrillas in a guerrilla struggle: never win, but also never lose, and finally just outlast the enemy. Whether we would care, if such long-term "managed," never-decisive warfare cost us in treasure, but not especially in blood, is also not clear.

Can robot technology overcome behavioral shifts toward illegal warfare by irregular forces?

The development of remote and robot technologies is driven by a parallel consideration that also arises from moral, legal, and ideological considerations. It is the attempt to create machines that will follow determinate legal rules of engagement, particularly with respect to the combatant-noncombatant distinction - in consideration of an enemy, however, that deliberately violates that distinction in its own combat operations. Again, the effort is to find a way to overcome the inability through our battlefield behavior (such as the reprisals we deliberately and properly don't take) to affect deliberately planned, illegal enemy behavior - through a technological fix.

The move to robots is all but inevitable and, in fact, particularly but not just under these circumstances, desirable. I have my doubts, however, that any technological fix can permanently compensate for behavior on the other side. If the nature of arms races is competitive - either a "counter" or a "defensive" move to respond to changes in the conduct of war - then we are in a peculiar historical moment in which one side attempts to respond with equipment changes to changes in behavior on the other side. Is it possible for technological ingenuity to beat out determined and evolving bad behavior? I don't know.

Legal liability and robots on the battlefield:

Those deploying armed robots to Iraq for use in the field, remotely controlled, had probably better be prepared for a much greater willingness on the part of the outside monitors, the human rights organizations, outside critics, etc., to charge illegality, criminal behavior, war crimes, violations of the law of war, etc., in any collateral damage created by these weapons than currently exists - with charges and accusations against operators as well as commanders. And against the companies that design and build and sell such weapons.

Why more than in the case of soldiers present on the battlefield? Well, it doesn't necessarily make much sense - the rules of engagement, after all, are presumably exactly the same - but I would bet with pretty high confidence that the deep and not necessarily articulated premise will be that you are more liable for damage caused if you caused it remotely and were not yourself at risk, because you were not present on the battlefield but operating the robot from afar.

The idea that you yourself are in some fashion at risk - even if not very much, as in the case of a standoff aircraft or tank or what have you - on the battlefield, hence giving some compensatory justification to your collateral damage, makes a difference, or anyway will likely make a difference, I would bet, in how these weapons are seen by outside critics. It will seem weird to the military - it will seem very close to claiming that remote operators have an "unfair" advantage and hence are entitled to no otherwise legal collateral damage - and it will not, to the military, seem any different from any other standoff platform such as aircraft or remote artillery. Why should it be?

But I would be willing to bet that it will seem quite different to outside monitors and critics. The two core criticisms will be: (a) that you are not putting yourself at risk and hence are not entitled to cause collateral damage - because, notwithstanding that the criterion for collateral damage is "military necessity," not "did I risk myself?", it will somehow seem "unfair" - despite the fact that you are battling an enemy for whom asymmetric warfare via violations of the laws of war is de rigueur.

And (b), the fact that you risk only a robot but risk causing collateral damage in human life means that you should not do anything that risks collateral damage at all. Civilians and even civilian objects, in the lingo of Protocol I, trump any kind of claimed military necessity. This is especially so, it will likely be said, under the ICRC's interpretation of the language of Protocol I referring very narrowly to "concrete military advantage" in the immediate circumstances as the measure of military necessity. That the US has never accepted Protocol I as a treaty and has never accepted that particular interpretation of the customary law rule regarding military necessity - and that many other countries offered reservations and interpretations on that very point when they did join Protocol I - is not likely to be seen by the critics as of any account.

If your definition of military advantage is sufficiently narrow, in other words, then no collateral damage is justifiable if all you risk is some equipment, not lives, on the battlefield - if your definition of military advantage is so narrow and immediate that it cannot include the necessity of winning this battle, or any particular battle, as part of a larger plan to win a war.

As I say, this will possibly seem puzzling and quite wrong to the military itself, which operates all kinds of remote platforms for launching weapons - and as armies have done, at least since the advent of the long range bow, the catapult, and artillery. But I would urge it to prepare for precisely such criticisms. I would guess this is how the public argument will go, and it might even culminate in someone or some organization calling for indictments against US soldiers for civilian deaths resulting from the use of remotely controlled robots in combat. Or civil law suits via the Alien Tort Statute against the companies creating this equipment.

Yet this would be disastrous if it led to the curtailment of these weapons, their development and deployment - disastrous from the standpoint of the long term integrity of the laws of war in a period in which asymmetric warfare is tending to undermine their very foundations, because reciprocity has been largely lost - and disastrous to the effort to find ways through technology of combating an enemy that does not fight by the rules. Unfortunately, that has never been a concern of those who propose to make the rules of war, but do not have anything at stake in actually having to fight using them.

(Note on the first two comments. I emphatically do not think that the JAG and those formulating the US position on the laws of war would take the view that I have here attributed as being likely to come from outside critics in the human rights or perhaps academic communities. Or from countries that, not having any pressing wars to fight, are overly willing to opine on the content of laws in which they have no stake in the outcome. On the contrary, I think that the JAG and the US military laws of war lawyers would see this more or less as I suggest above: that these armed battlefield robots are remote platforms like any other, and that in any case military necessity is, at the end of the day, about winning wars. Military necessity does not justify anything and everything, of course, and it rules out many, many things; but it does not mean that a military has any obligation to risk itself or its personnel as a condition of being able to risk otherwise legal collateral damage. But I would be interested in comments from JAG, from current or past serving laws of war lawyers, and others interested in commenting.)

(Update, 9 August 2007, check out this link HT Instapundit from Popular Mechanics. Here.)


Anonymous said...

After reading this, is it any wonder why grunts hate JAGs and pretty much any other lawyer we have to come into contact with?

Dave Hardy said...

I should have gone JAG. I'd sit in a nice room waiting for calls, and when they came in, no matter what they reported, I'd say "as your attorney, I advise you to (kill everyone / blow it up)," then hang up. I'd be very popular with the troops and never have a stressful day.

Anonymous said...

This BS could only have been written by a lawyer

Anonymous said...

Very interesting, but I have a headache now. I'm sure that is just a coincidence.

SFC B said...

I find myself surprised that someone hasn't already filed a suit like this. We've been using armed UAVs in Iraq and Afghanistan for a little while now, and I don't see how they're any different from a ROV being driven down the streets.

Roy in Calif said...

If you had been referring to autonomous armed robots, they could be treated as mobile mines (which, of course, would still be subject to censure by most of the rest of the world).

One forms the opinion that regardless of what the U.S. does (except, perhaps, dying) it will be subject to criticism and obstructionism. Needless to say, if the U.S. did die, it would be blamed for the collapse of the world economy, etc.

clazy said...

sfc b beat me to it. These are not the first armed robots, only the first to operate on land. Your analysis would be the same for armed UAVs.

You do briefly acknowledge the analogy with other standoff platforms, after suggesting that some critics would imagine some sort of differentiation on the basis of what you call two "core criticisms", but I don't see how those core criticisms couldn't be applied to UAVs, cruise missiles, long-range artillery.... Could you explain?

Anonymous said...

I believe proximately, our boys operating the killing devices will see warfare as little different from Halo or some other video game. Thus, you'll see some rather aggressive tactics used by those who literally don't have any skin in the game. Most likely, a company commander will figure out where the toughest nuts to crack are, and where they want a breakout to start, and send the machines into the riskiest parts of the battlespace.

At least it will until the other side responds with lawfare and makes all the operators too timid to shoot at anyone who isn't wearing a neon sign saying, "I'm the enemy."

I think that just as tanks needed infantry to be effective, so too these devices will not empty the war zone of our troops.

Personally, I'd make sure the specs for the device are set up such that it's impossible for any lawyers to figure out who the operators are when the enemy claims that civilians were killed in an attack. We're seeing in Afghanistan that the closer we get to the Taliban & Al Qaeda leadership, the louder they scream about civilian casualties. That'll only increase.

Anonymous said...

Sci-fi writer Joe Haldeman dealt with just this issue in his novel Forever Peace. Remotely controlled "soldierboys" fought lightly armed humans in an asymmetrical war. When a soldierboy was "killed," it caused pain but not death to its operator. I think you've analyzed the situation accurately, and I agree that outside critics will translate the "unfairness" into a requirement that the robots make no mistakes.

Anonymous said...

I can't help but believe that this kind of mealy-mouthed approach to the "laws of war" does more to prolong the inevitable agony and human suffering on the battlefield by attenuating the likelihood of actual victory. Instead of going in, brutally killing the enemy and degrading them to the point of ineffectiveness, and thereby ending the war, the modern army is forced to tiptoe in gingerly, develop elaborate plans to ensure the safety and happiness of all groups, respond constantly to the "human rights" (what an Orwellian joke that name is) lobby for the enemy, and engage primarily with the minutiae of legalisms. Over time the war approaches a stalemate, the killing goes on, and the lawyers pat themselves on their morally worthless backs for 'humanizing' the battlefield.

Anonymous said...

"I, Robot"? No, think Keith Laumer's Bolo.

Anonymous said...

The dominant theme seems to be increasing the safety of combatants by removing them from contact while still presenting force, which is what led to the development of the spear.

Nomenklatura said...

"The two core criticisms will be:..."

True enough, in the terms of your post, but the true core criticism will be that we are not losing.

We can deduce this from the fact that, as a moment's reflection will confirm, any non-losing strategy would promptly elicit the manufacture of a broadly similar set of criticisms.

Only a surrender and descent into utter Hobbesian chaos could ever satisfy the 'human rights' zealots (who wouldn't have a clue what to do with such a victory, were they ever given it).

Chip Ahoy said...

It has also shown itself unwilling, for not such good reasons, however, to enforce certain important remaining laws of war with regards to abuses by the enemy (such as the US refusing, in its internal rules of engagement, to fire on a mosque being used as an enemy emplacement, despite being allowed to do so under the laws of war).

This doesn't change the lawyerly points you're making but this video shows US military blowing up a mosque used to store weapons.

Anonymous said...

I fail to see why from a legal perspective, a rifle being controlled remotely is any different than a rifle in hand.

The only difference is that a screen and communications link is inserted into the loop.

Anonymous said...

First: The so-called "Laws of War" were abandoned by the US when the Geneva Conventions' categories for combatants were set aside and the concept of "Enemy Combatants" was invented out of thin air. Appealing to them now is disingenuous.

Second: Maintaining "symmetric" warfare is solely for the benefit of those wanting to maintain the status quo. The American and French revolutions could never have happened if everyone scrupulously assured symmetric warfare was enforced. Why aren't you wearing your "Red Coats" in Iraq?

Third: Asymmetric warfare is precisely the only response to overwhelming symmetric capability. If your needs and grievances are met by the side holding this symmetric power, all is well. If that side takes liberties and unfair advantage of its power, then it is both a basic human right and predictable human behavior to resort to the only means available to fight non-suicidally - asymmetric warfare.

Fourth: The fatal flaw with having autonomous machines do your dirty work is that you disconnect moral and ethical decisions about the act from the execution of the act itself. Only humans can make moral or ethical decisions!! Machines can not. Letting machines autonomously decide to kill is dangerous and a moral cop-out. "It wasn't I that killed that innocent family, it was the machine - go put the machine on trial!" Machines are neither good nor bad - they simply are. To ascribe morality to an inanimate object - be it killer robot or atomic bomb - is truly the worst form of cowardice.

BTW I've been directly involved in the creation of most of the smart weapons fielded by the US over the last 25 years. Yes, some already cross this line. That doesn't make it morally right.

Fifth: There is mathematics to support the following. Direct force can never eliminate an enemy operating asymmetrically short of committing systematic and total genocide. As a strategy this is often called a logical extension of the "El Salvador" strategy.

There will always be a remnant that keeps the flame alive. Even a rudimentary knowledge of Middle Eastern history show how this can be true. It is the underlying misuse of symmetric power that is the root of the problem.

The major risk of killer robots is the possibility that their human masters can achieve such an aim with frightening efficiency and with the potential to claim the argument that they hold no moral responsibility for committing genocide.

Sixth: Even if you could somehow create a framework to assure moral use of killer robots, Bayes theory prevents you from ever being able to feed the framework with intelligence accurate enough to get it right in practice. This is especially true for asymmetric warfare because the nature of it guarantees the side practicing conventional warfare will mis-target the enemy and innocents most of the time. This is why asymmetric warfare has the ability to level the playing field against a symmetrically strong opponent.

Apply Bayes theory to any current intelligence system fielded in Iraq and you can prove the Lancet casualty figures are as good as a mathematical certainty to be the most correct estimate of civilian casualties.
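The commenter's base-rate argument can be sketched numerically. The following is a hedged illustration with hypothetical numbers (the prior, sensitivity, and false-positive rate are invented for the example, not drawn from any actual intelligence system): by Bayes' theorem, even a fairly accurate identification process mis-targets most of the time when genuine combatants are a small fraction of the population.

```python
# Bayes' theorem illustration of the base-rate problem in target identification.
# All numbers are hypothetical: suppose 1% of a population are combatants,
# intelligence flags a real combatant 90% of the time (sensitivity),
# and wrongly flags a civilian 10% of the time (false-positive rate).

def posterior_combatant(prior, sensitivity, false_positive_rate):
    """P(combatant | flagged), by Bayes' theorem."""
    p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_flagged

p = posterior_combatant(prior=0.01, sensitivity=0.90, false_positive_rate=0.10)
print(f"P(combatant | flagged) = {p:.3f}")  # 0.083: roughly 11 of every 12 flagged people are civilians
```

The point is not the particular numbers but the structure: when the prior is low, the false positives among the many civilians swamp the true positives among the few combatants, which is the mathematical core of the commenter's claim about mis-targeting.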

Anonymous said...

"'Enemy Combatants' was invented out of thin air. Appealing to them now is disingenuous"

What, exactly are you talking about?

Laika's Last Woof said...

Human rights groups complain about everything, all the time. Nobody takes them seriously anymore.
If the military wants to use robots as remote weapon platforms they'll do it, the Human Rights community will complain, and the American people will stifle another yawn, just like we always do.
Have a little faith.

clazy said...

Anonymous 1:21, "Overwhelming symmetric capability"? You would do better to simply say overwhelming power. Don't you mean "conventional"? You seem to be tangled up in terminology. One side is strong, the other is weak: that is an initial asymmetry. The weak side looks for a way to become strong; it does not have the resources of the dominant power so it has to be imaginative. They conduct warfare differently, so it is asymmetrical. But it's no big deal. They're only looking for their own advantage, and they would whether they were weak or not. Anyone who wants to win tries to create asymmetry, hence the atom bomb. "Maintaining symmetric warfare?" Who's doing that? Not Darpa. I'm finding it pretty difficult to believe you have anything meaningful to do with the development of "most of the smart weapons fielded by the US over the last 25 years," as you claim.

By the way, I have no idea what you're trying to say in your last paragraph.





