
Ethics:

Question: Is “a ban on offensive autonomous weapons beyond meaningful human control” going to work?

Open letter calling for a ban on autonomous weapons

Key quotes from the letter:

Our Response

While we believe that a ban on autonomous weapons is an ethical goal worth striving for, there are several reasons why a universal ban will not work. Many of the concerns about autonomous weapons raised in the letter are valid, but many of the same critiques apply to technology already in use by militaries around the world, and those concerns did not stop such technologies from becoming ingrained in modern warfare.

On a global scale, it will be difficult to convince nations to give up the possibility of developing autonomous weapons. With defense spending making up a significant part of some of the largest economies (in the US, $590 billion, or 3.1% of GDP, in 2017), those economies will have an interest in capturing the economic benefits of an AI warfare revolution. Limiting the development of autonomous war technologies might not only be seen as a lost opportunity for economic growth, but may also be viewed as a handicap in global military conflicts where enemies are employing such weapons. Just as with nuclear weapons during the Cold War, nations will likely compete in an arms race to develop autonomous weapons as a means of defense. As long as these superpowers are competing over military dominance, there will be a large effort to support these technologies.

Yet even if a ban on autonomous weapons is agreed on, there is a question of whether such a ban is enforceable. As AI research develops, autonomous weapons, whether developed legally or illegally, may surpass the capabilities of non-autonomous weapons, much as guns replaced more primitive weapons like swords and bows over time. In that case, any means of countering autonomous weapons with non-autonomous ones would become ineffective, and the enforcers of the ban would be left with no choice but to develop their own autonomous weapons to counteract the “illegal” ones, nullifying the intent of the ban in the first place. Moreover, those who would obey such a ban are probably the most responsible users of the technology (insofar as it can be used responsibly, which is up for debate). Picking sides here might be as simple as realizing that if autonomous weapons cannot be eliminated entirely, then ensuring that the best versions belong to peaceful nations might be the only way to keep the balance.

While the letter states that AI researchers are concerned that applying AI to warfare will “tarnish their field,” it should be acknowledged that military research has sometimes benefited civilian industries. Military backing led to the development of early computers, GPS satellites, and the internet, all of which have become quintessential parts of modern technological society. While many AI researchers want to avoid being associated with war, as the letter describes, the increase in demand and funding for AI research driven by weapons manufacturers will likely be a more influential force in AI development than the ethical arguments for banning these weapons.

In short, many obstacles stand in the way of a feasible ban on autonomous weapons. While a full ban may not be possible, restrictions on the use of autonomous weapons still are, and growing interest in autonomous weapons will likely bolster AI research for non-military purposes as well. Since a ban won’t work, we have to move forward carefully with this technology, unless some revelation about its limitations produces a more or less unanimous agreement on non-proliferation.