Terminator 3

Skynet is coming

Autonomous weapon systems are increasingly being used in modern warfare, notably in the recent Kosovo campaign. Advances in electronics are producing weapon systems with ever greater computing power. Systems once developed solely for surveillance are now being replaced by combat-capable vehicles in which the decision to attack rests with an intelligent computer rather than a human operator. It is envisaged that this will increasingly become the norm in the digitised battlespace of the future, because the human operator will simply be too slow to take effective decisions except at the highest levels.

However, the use of such technology raises serious concerns about whether the decision to take human life should rest with an artificially intelligent machine. These concerns range from the implications for proportionality and discrimination to the fundamental question of whether it is morally right to permit computers to take such decisions at all.

This paper reviews the potential use of autonomous weapons in future warfare and examines the ethical issues they raise, drawing on utilitarian and rights-based philosophical arguments. From these arguments we judge that, if such weapons are permitted to operate in an unbridled fashion, they are morally less acceptable than conventional technology. Moreover, at a deeper level we argue that their use in life-and-death situations gives rise to profound objections that are emotional rather than logical in nature.