THE WARSAW INSTITUTE REVIEW

Date: 24 September 2018    Author: Maciej Zając

The Robotic Revolution in Military Affairs and Poland’s National Security Strategy

Experts from China, Russia, and the United States agree that robotic technologies and artificial intelligence (AI) will dramatically transform the battlefield. The three powers appear increasingly drawn into a global arms race that is driving the gradual robotization and algorithmization of military operations.

Geneva, Switzerland, November 13, 2006. Francois Rivasseau (R) from France, president of the Third Review Conference on Disarmament of the States Parties to the Convention on Certain Conventional Weapons, and Nobuaki Tanaka (L) of Japan, UN Under-Secretary-General for Disarmament Affairs. Press conference about the Cluster Munition Coalition. © Laurent Gillieron (PAP/EPA)

At the same time, both the United Nations and the European Union call for a ban on the use of autonomous weapons (including robot soldiers) on the grounds that they are inherently unethical. This dispute, which could polarize quickly, forces Poland to make a choice, and to make it both quickly and consciously. Warsaw may back the drafted ban, even while realizing that Russia is likely to treat it as non-binding. Alternatively, the Polish authorities could join the similarly situated Baltic States in creating a next-generation defense system that could give Europe a prospect of strategic autonomy and make warfare far more humane.

On September 1, during the meeting of the Group of Governmental Experts to the Convention on Certain Conventional Weapons held in Geneva, Switzerland, a group of 26 states, mostly African and Latin American countries with limited military and political weight, applied to formally begin work on a treaty that would prohibit the research, production and use of autonomous weapons, also referred to as robot soldiers (though some activists and industry experts prefer to call them killer robots)[1]. Their application was nonetheless vetoed by the world’s most powerful militaries, including those of the United States, Russia, Israel, South Korea, and Australia, all of which view autonomous weapons as a legal – and probably also morally superior[2] – type of armament. The demand was, however, supported by China, which led some pundits to describe such behavior as blatant hypocrisy, since Beijing continues to develop similar systems at a rapid pace[3]. Europe’s giants, including Germany, France, and Great Britain, did not adopt any radical standpoint in the dispute, preferring instead to call for the standardization of definitions and regulations. Even though these events extinguished the hopes of some factions for a global ban on robotic weapons, the future of such a ban in Europe remains open for debate. All parties to this dispute can, however, agree on one thing: the future of war and strategy is at stake.

But what are these weapons and what ramifications could they have in the future? The reputation of this not-yet-developed form of weaponry suffers from a series of misunderstandings, out-of-context speculations, and ideologically charged aversion. The following analysis therefore first describes the current state of work on autonomous weapons, as well as the pace and goals that engineers and strategists hope to achieve in this field within the next two decades. These findings provide the necessary background for examining the transformative impact of the new weaponry and the ethical, legal and diplomatic controversies surrounding it. Lastly, I discuss three strategies that Poland and other Central European countries could adopt in the face of yet another military technological revolution and the geopolitical shifts that will follow it.

Brave New Army

The recent battlefield debut of Russian remote-controlled tanks could hardly be described as spectacular. In fact, they were reported to have performed poorly in Syria, which echoes the early troubles of previous so-called game-changing technologies such as submarines, tanks or aircraft. The Russian Uran-9, a vehicle the size of an SUV, heavily armed with a 30 mm cannon and guided missile launchers, was a Potemkin-like creation whose essential aim was to demonstrate the technological supremacy of Mother Russia. In practice, the ground vehicle had a faulty chassis and its cannon proved highly ineffective. To make matters worse, the robot tank repeatedly lost connection with its operators, drawing sharp criticism from Russia’s own experts[4]. Yet the vast majority of these defects can be eliminated by improving design quality and production processes. The one remaining serious problem is communication disruption, which tends to occur in urban and undulating terrain and which validates some of the bolder concepts in the field of military robotics. Half-measures such as remote-controlled machines are therefore dead ends. A combat robot should be granted real operational autonomy if it is to be effective on modern battlefields, environments that are becoming increasingly lethal.

An unmanned tank, controlled by autonomous software, would have numerous advantages over human-crewed vehicles. A significant, if not the largest, part of a tank’s weight consists of armor whose main task is to protect the crew – the vehicle’s most fragile element, whose loss effectively shuts the machine down. Facing this problem, Russia’s new-generation T-14 Armata main battle tank automates a large share of its onboard systems, while its three-man crew sits in a single, heavily armored capsule separated from the rest of the vehicle, where it cannot be reached by the majority of existing anti-tank weapons. An unmanned tank is, by definition, immune to the threats the T-14 engineers had to design against: its computer systems can be distributed across different parts of the vehicle, so that components that survive multiple hits can take over the functions of those that are destroyed, allowing the machine to degrade gracefully rather than fail outright. Shedding the armor no longer needed to protect a crew would increase speed and maneuverability, reduce the vehicle’s size, and decrease its detectability and fuel consumption. It would also allow such a tank to be carried to distant battlefields by transports and to traverse relatively narrow roads and bridges. All of these changes would significantly cut costs and ease the burden on logistics, long considered the „Achilles heel” of every army in the world. And yet these benefits are only some of the possibilities of the robotic revolution in military affairs, which is considered the biggest breakthrough since the advent of nuclear ballistic missiles.

Thanks to advanced robotics and AI, military planners can go beyond improving existing solutions and achieve previously unattainable goals, both in minor clashes and at the level of grand strategy. Consider first the micro scale. One of the most prestigious and demanding positions in the military is that of a fighter pilot. Training a pilot may cost several million dollars, and all of that accumulated knowledge is irrevocably lost when the pilot retires or dies in combat or in an accident. By contrast, a computer program that learns to fly a fighter jet, much as self-driving car software learns California’s roads, neither ages nor dies unless it falls victim to an exceptionally thorough cyber-attack. Such a program improves with every flight, whether real or simulated (and it may run hundreds of simulations within a single hour); it never tires and is always ready for the next task. Even if the unmanned aircraft carrying it is shot down, it can still transmit the data from the incident back to its operators, ensuring that the software will not commit the same error twice. The ability to retain and accumulate experience, and to find, adapt, and rehearse solutions to every conceivable scenario, combined with consistent performance under all conditions, gives AI a decisive advantage over human operators. An algorithm employed in war could reach a level of excellence comparable to all flying aces combined, or to a sniper pooling the aggregate skill of every human sniper. Thanks to its superhuman reaction time, AI is able to stay two steps ahead of any human[5].
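
To make the mechanism concrete, the following is a deliberately minimal sketch in Python, written for this article rather than drawn from any real flight-control system: a toy agent accumulates experience across thousands of cheap simulated engagements, and the resulting policy, unlike a human pilot’s skill, can be saved, copied, and restored after the airframe carrying it is lost. All names and numbers here are illustrative assumptions.

import json
import random

def simulate_engagement(policy):
    """One toy engagement: the agent picks a maneuver for a random situation
    and wins if it picks the maneuver that situation rewards."""
    situation = random.randrange(10)        # stand-in for sensor state
    best_maneuver = situation % 3           # unknown to the agent at first
    chosen = policy.get(situation, random.randrange(3))
    return situation, chosen, chosen == best_maneuver

def train(policy, episodes=10_000):
    """Run many cheap simulated episodes, retaining whatever worked."""
    for _ in range(episodes):
        situation, chosen, won = simulate_engagement(policy)
        if won:
            policy[situation] = chosen      # experience is never forgotten
    return policy

if __name__ == "__main__":
    policy = train({})                      # a "freshly trained" pilot
    wins = sum(simulate_engagement(policy)[2] for _ in range(1_000))
    print(f"win rate after training: {wins / 1000:.0%}")
    # The learned behavior survives the loss of any individual vehicle:
    with open("policy.json", "w") as f:
        json.dump({str(k): v for k, v in policy.items()}, f)

The design point is the last two lines: once the learned policy exists as data, it can be archived, duplicated across a fleet, and restored, which no amount of pilot training allows.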

Because these robots are limited neither by human reflexes, nor by the human capacity to process only so much information at once, nor by the need to be protected or to work around the biological limits of the human body, entirely new machines with unprecedented applications become possible. The most interesting and best described of them is the swarm: hundreds or thousands of small vehicles communicating with one another at superhuman speed and capable of feats unattainable for any single machine. Its simplest forms have already been used by anti-government rebels to attack Russian military bases, destroying aircraft and helicopters worth tens of millions of dollars with only a few hundred grams of explosives and a handful of drones bought in a supermarket[6]. The same tactic, better coordinated and deployed on a large scale, could have disastrous consequences for military and industrial facilities as well as for civilian infrastructure[7]. In conventional combat operations, thousands of relatively cheap robots, whose loss no one would mourn, could carry out truly suicidal missions, wearing down both the strength and the morale of the enemy and enabling a whole range of strategic maneuvers that were until now possible only on the chessboard or in computer games[8]. The ability to sacrifice thousands of robots in order to gain a tactical advantage would give human commanders possibilities they have never had before.
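
A minimal sketch of the coordination principle behind such swarms, again purely illustrative and not based on any fielded system, is a simple consensus rule: each unit repeatedly moves toward the average position of its nearest neighbours, so hundreds of independent agents converge on a rendezvous point with no central controller and no human in the communication loop.

import random

NUM_AGENTS, NEIGHBOURS, STEPS = 200, 5, 50

# random starting positions on a 2-D plane
positions = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_AGENTS)]

def nearest(idx, k):
    """Indices of the k agents closest to agent idx (its local 'radio' view)."""
    x, y = positions[idx]
    others = sorted((i for i in range(NUM_AGENTS) if i != idx),
                    key=lambda i: (positions[i][0] - x) ** 2 + (positions[i][1] - y) ** 2)
    return others[:k]

for _ in range(STEPS):
    updated = []
    for i in range(NUM_AGENTS):
        group = [i] + nearest(i, NEIGHBOURS)
        # each agent steers toward the average of its local neighbourhood
        updated.append((sum(positions[j][0] for j in group) / len(group),
                        sum(positions[j][1] for j in group) / len(group)))
    positions = updated                     # all agents update simultaneously

spread = max(max(p[d] for p in positions) - min(p[d] for p in positions) for d in (0, 1))
print(f"swarm spread after {STEPS} steps: {spread:.2f}")

Each agent here sees only a handful of neighbours, yet the group behaves coherently; that locality is what makes a swarm cheap, scalable, and hard to decapitate.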

At a higher, strategic level, the full autonomy of combat robots would allow commanders, diplomats, and politicians to make promises (and threats) of unquestionable and unyielding credibility. If, for instance, the United Nations announced that blue-helmet robots would protect a refugee camp from any act of aggression, machines programmed to perform this task would carry out the order without exception; threats, negotiations, attempted bribery, and hostage-taking would all be rendered useless. Any entity offering military guarantees could additionally make them inviolable, even at the cost of giving up the ability to change orders once issued. Naturally, nobody proposes applying this to nuclear forces or to other arsenals capable of inflicting mass destruction at long range – although such a vision would make the dreams of some Cold War theoreticians come true. For ethical reasons, moreover, such solutions should be restricted to purely defensive tasks. Even so, pre-programmed robots offer a new quality in terms of strategic stability, the protection of civilians, and peacekeeping and nation-building missions[9].
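
The credibility argument can be illustrated with a toy sketch, hypothetical rather than any fielded doctrine: the protective mandate is fixed when the unit is deployed, and any later instruction that contradicts it is simply refused, so threats, bribery, or a change of political will cannot undo the guarantee.

from dataclasses import dataclass

@dataclass(frozen=True)        # frozen: the mandate cannot be altered after deployment
class Mandate:
    protected_zone: str

class PeacekeeperUnit:
    def __init__(self, mandate):
        self._mandate = mandate

    def receive_order(self, order):
        """Accept only orders consistent with the original mandate."""
        if order == f"defend {self._mandate.protected_zone}":
            return "order accepted"
        return "order refused: outside deployment mandate"

unit = PeacekeeperUnit(Mandate(protected_zone="refugee camp A"))
print(unit.receive_order("defend refugee camp A"))        # order accepted
print(unit.receive_order("withdraw from refugee camp A")) # order refused: outside deployment mandate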

Moscow, Russia, May 9, 2018. Russia’s newest robotic complex Uran-9 takes part in the Victory Day military parade on Red Square. © Sergei Ilnitsky (PAP/EPA)

Robotic Revolution – The Train That Has Already Left

It comes as no surprise that the world’s major powers, including the United States, China, and Russia, are increasingly deploying autonomous unmanned[10] vehicles; the U.S., for example, estimates that by 2035 as much as one-third of its army’s ground combat vehicles will operate without human crews. This share may be even higher in aviation, in logistics, and in the underwater fleet. It is true that many potentially ground-breaking military technologies, even in advanced stages of development, never entered service; the U.S. Future Combat Systems program, which was supposed to transform ground combat, was canceled in 2009 after billions of dollars had been spent on research and development. However, the expert consensus on the imminent arrival of the era of autonomous weapons rests on firm premises, not on crude technological determinism.

First, the creation of such a weapon does not depend on the success of any single project or engineering solution. According to Jürgen Altmann and Frank Sauer, „autonomous weapon systems need not necessarily take the shape of a specific weapon system akin to, for instance, a drone or a missile. They also do not require a specific military-technology development path. The set of possible forms, which can be assumed by technologies that meet the definition of autonomous weapons, seems undoubtedly very large and it would be difficult to state that all of them may ultimately lead to a failure”[11].

Furthermore, the rapid growth of autonomous weapons stems from the fact that most of their component technologies are developed as dual-use technologies by the civilian sector, for non-military goals and motivations. Altmann and Sauer continue: „As AI, autonomous systems and robot technologies mature and begin to pervade the civilian sphere, militaries will increasingly be able to make use of them. Weapons development will profit from the implementation or mirroring of a variety of civilian technologies (or derivatives thereof) and their adoption for military purposes, technologies which are currently either already available or on the cusp of becoming ready for series production in the private sector”[12]. Militaries are therefore able to reap the fruits of technological convergence and to use robotic technologies for a wide range of tasks, a phenomenon that Gill Pratt, then a program manager at DARPA (the U.S. Defense Advanced Research Projects Agency), described as a coming „Cambrian Explosion” in robotics[13]. In his paper, Pratt listed eight drivers of the rapid advance of robotic technologies, including exponential growth in computing performance, improvements in battery performance, and the ever-increasing density and capacity of local wireless networks. A similar point was made by Arati Prabhakar, an American engineer and former head of DARPA, as quoted in the recently published official U.S. plans for the development of military technologies over the next 25 years[14]. In that document, the U.S. Department of Defense forecasts that most of the relevant technological progress will be made in the private sector.

Because the R&D activity crucial for the emergence of autonomous weapons is dispersed across many fields of science and engineering, it would be practically impossible to stop their creation, whether by cutting funding or through a direct ban under international law backed by sanctions or armed intervention. Research funds come mostly from the civilian market; once the civilian sector disseminates mature robotic platforms, adapting them to combat purposes will require neither rare and easily detectable materials (as in the case of nuclear weapons) nor long and costly processes that leave behind clearly identifiable infrastructure (as in the case of nuclear and chemical weapons). No highly qualified specialists need to be recruited either: the difference between existing remote-controlled combat machines and their autonomous versions is merely a matter of additional software. As cyber-crime and cyber war demonstrate, no set of international regulations can prevent a programmer hired by a government from carrying out his or her tasks. Moreover, prohibiting the whole range of civilian technologies that will greatly contribute to the future of the global economy would be not only extremely costly and unpopular but also immoral, since such technologies could help victims of accidents, illness, disasters, and extreme poverty; autonomous cars alone hold great potential to significantly reduce the number of road fatalities.

Given all of the above, the idea of prohibiting the development and production of autonomous weapons by international convention seems about as realistic as prohibiting the use of computer technologies for military purposes. This is clear even before geopolitical and historical factors are taken into account, as evidenced by the open and covert violations of similar conventions by Russia, Iran, North Korea, Syria, and other rogue states. Experience shows that such countries acquiesce to demands to abandon their armament programs only when threatened with armed intervention, as was the case with Syria’s chemical weapons and Iran’s nuclear plans. And just as inspections, the basic tool of disarmament negotiations, cannot be imposed on nuclear powers such as China, Russia, or even North Korea, threatening them with intervention is simply not an option.

Under such conditions, every significant state will find it beneficial to develop its own autonomous weapons regardless of other countries’ intentions: if others pursue such research, it must keep pace; if they do not, it will take the lead in a technological race that can be won easily.

The idea of a global ban on autonomous weapons rests on the assumption that the Western powers and their allies, as well as various autocracies and criminal regimes, would unanimously agree to sacrifice their own interests and to forgo solutions that the civilian sector will deliver spontaneously and inevitably. It was easy to predict that none of the major geopolitical players would base its actions on such an assumption, and indeed none has, as evidenced below. The question remains: why should robotic weapons be banned at all?

The Myth of the Robot, the Myth of the Human

An opposing standpoint is concisely presented by Professor Bonnie Docherty, a lecturer at Harvard Law School and an activist at Human Rights Watch, one of several prominent NGOs advocating a total ban on the use of autonomous weapons. In her view,

„Fully autonomous weapons (…) would be unable to feel compassion, an emotion that inspires people to minimize suffering and death. The weapons would also lack the legal and ethical judgment necessary to ensure that they protect civilians in complex and unpredictable conflict situations.

In addition, as inanimate machines, these weapons could not truly understand the value of an individual life or the significance of its loss. Their algorithms would translate human lives into numerical values. By making lethal decisions based on such algorithms, they would reduce their human targets – whether civilians or soldiers – to objects, undermining their human dignity.

Fully autonomous weapons (…) would likely violate other key rules of international law. Their use would create a gap in accountability because no one could be held individually liable for the unforeseeable actions of an autonomous robot.

(…) Furthermore, the existence of killer robots would spark widespread proliferation and an arms race – dangerous developments made worse by the fact that fully autonomous weapons would be vulnerable to hacking or technological failures”[15].

Although each of these arguments deserves separate treatment, I would first like to point out two of her assumptions[16]. Professor Docherty seems to think that autonomous weapons will not be subjected to the same rigorous process of verifying compliance with international humanitarian law that is routinely applied to new types of combat equipment. She also seems to believe that their development will not be guided even by elementary considerations of self-interest and combat utility; how else could she expect armies to be willing to field machines whose behavior can be described as „unpredictable”[17]? While this assumption depicts military engineers and decision-makers as both deeply immoral and incompetent, it simultaneously conjures up an „idyllic battlefield”: a place full of compassion and respect for human dignity, where war crimes are effectively and regularly punished, and where commanders as well as top-level politicians are morally immaculate. The myth of the soulless robot, the personification of primitive evil in fairy tales and bad science fiction, thus intermingles with a Rousseau-inspired vision of a humanity free of any faults.

USA, March 13, 2004. The autonomous robotic ground vehicle of Team CajunBot before the start of the DARPA Grand Challenge field test, from Barstow, California, to Primm, Nevada. © DARPA (PAP/EPA)

Such imagery is merely a rhetorical device; experts, including Professor Docherty, are perfectly aware of the realities of modern war and its numerous deficiencies, visible even in present-day Western armies, whose ethical standards have nonetheless improved considerably since Vietnam. Under current principles it is vital to limit casualties, suffering, and destruction; yet this cannot be achieved by having troops empathize with their victims in real time. Like other institutions built to act in extreme situations, such as the police or the healthcare system, the army puts humanitarian principles into practice through a system of ready-made procedures rather than through fleeting emotions or an individual’s momentary perception of the value of human life. That value is reflected in the laws and regulations soldiers are required to observe, and their observance is often achieved in a purely mechanistic manner. Military discipline aims to suppress feelings and replace them with cold professionalism, for two ethical reasons. First, war evokes emotions that can hardly be called „brotherly love”; anger, animal fear, frustration, and hatred are far more common and natural on the battlefield. Second, „translating human lives into numerical values” is a necessary part of an ethical soldier’s reflection and the basic tool for applying the principle of proportionality, which is fundamental to international humanitarian law. Rather than relying on emotions and falling prey to human cognitive errors, it is better to consider how common moral intuitions can be translated into a transparent algorithm, one that is also resistant to the user’s partiality in the given moment.
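
What a „transparent algorithm” might mean here can be shown with a deliberately toy sketch; the factors, weights, and threshold below are invented for the example and come from no real doctrine or system. The point is only that such a rule can be written down, audited, and applied identically in every engagement, unlike a soldier’s momentary emotional state.

from dataclasses import dataclass

@dataclass
class Engagement:
    expected_military_advantage: float   # assessed 0.0 (none) to 1.0 (decisive)
    expected_civilian_harm: float        # assessed 0.0 (none) to 1.0 (severe)

def proportionality_check(e, max_harm_per_advantage=0.5):
    """Permit an engagement only if expected civilian harm is not excessive
    relative to the anticipated military advantage (a fixed, auditable rule)."""
    if e.expected_military_advantage <= 0:
        return False                     # no advantage can justify any harm
    return e.expected_civilian_harm <= max_harm_per_advantage * e.expected_military_advantage

print(proportionality_check(Engagement(0.8, 0.1)))   # True: harm small relative to advantage
print(proportionality_check(Engagement(0.2, 0.4)))   # False: engagement refused as disproportionate

Because every input and the threshold itself are explicit, each decision can be reviewed after the fact, which is precisely what a commander’s in-the-moment emotional judgment does not allow.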

Autonomous weapons share some features, perceived by their critics as unique, with traditional types of weaponry. Vulnerability to hacking nowadays applies to all technological artifacts, from a Toyota Yaris to most refrigerators. The same holds for criminal liability and claims for compensation: these do not differ substantially from the issues of responsibility raised by self-guiding missiles or by military dogs, fully autonomous entities that have accompanied man in combat throughout history. Naturally, specific legal and procedural solutions will be needed to settle these matters clearly. Nor does the deployment of combat robots distance man from the reality of war; quite the opposite, it allows humanity, long displaced by artillery and cruise missiles, to be brought back into modern warfare. The old adage that „you do not put a lawyer in every trench” ceases to hold, since teams of lawyers and ethicists can now monitor the behavior of individual machines in real time through the same kind of cameras that already closely observe the conduct of servicemen. Such innovations amount to a degree of oversight that thinkers and human rights activists could not have imagined even a generation ago.

Fighting battles with remote-controlled machines, or with robots programmed in comfortable air-conditioned offices before a conflict begins, frees soldiers from the traditional dilemma that „in the event of war, you had to choose between your life and other peoples’ lives”. A robot can be sacrificed to give civilians greater security. This matters especially in an increasingly common type of modern battle that amounts to a vast hostage situation, as exemplified by the liberation of the Iraqi city of Mosul from the Islamic State, whose fighters effectively used two million civilians as human shields. Robots can come closer, risk more, and strike their targets with superhuman precision, thereby avoiding civilian and military casualties. They can target an opponent’s weaponry, or inflict injuries that take the enemy out of the fight without killing him or permanently damaging his body. In the long term there might be no human casualties at all. Given the helplessness of a human soldier confronted with fully developed autonomous weapons, wars could ultimately become a kind of technological match-up between armies of robots, a vision reminiscent of the anachronistic dream in which decision-makers settle their military disputes over a game of chess.

Are the critics’ fears therefore groundless? Quite the opposite. Although combat robots are perfect tools for waging war in the most just and humane way possible, they will act in accordance with the methods, goals, and values of their human masters. A state may use machines programmed to respect morality and international law to keep the peace or to conduct effective humanitarian interventions in defense of basic human rights, while other actors may deploy the very same instruments in a radically different manner. This point was made in the famous open letter by scientists and technologists (including Hawking, Musk, Wozniak, and Chomsky) who advocated a global ban, stating that „in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group”[18]. The spread of such weaponry to the armed forces of rogue states and terrorist groups could lead to a major catastrophe. In our joint article on arms proliferation, Dr. Wojciech Bober and I argue that a strategy of non-intervention in the worldwide distribution of and access to this unprecedented technology is equally bad and wrong, while efforts to draw up a treaty prohibiting its further development are simply politically naïve[19]. So what actions should be taken, and how do they fit into the overall strategy of advancing the rights and interests of Poland and of allied countries in a similar geopolitical position?

Combat Robots and the Polish Question

Such a strategy not only exists but is actively being implemented by Poland’s allies. States such as Latvia, Finland and, most importantly, Estonia are working intensively to robotize their armed forces; their efforts have already produced the first machines able to fight in human-in-the-loop mode[20]. Countries that face the same strategic opponent as Poland, and that cannot compete with it in terms of population, see robotization as a way to compensate for this disparity and to keep up with Russia’s military modernization. At the same time, they are pursuing a wise and prudent policy of promoting these solutions and recommending them to other European states. Officially, Estonia’s research does not involve combat autonomy; instead, it focuses on autonomous and uncontroversial perception and movement capabilities, which constitute the largest and most difficult step toward real combat autonomy. Such a measured path gives engineers, ethicists, and lawyers extra time to develop the regulations, doctrines, and procedures that would allow the partial or full deployment of autonomous weapons in accordance with moral and legal principles.

All the factors driving the Baltic States’ actions apply equally to Poland, whose greater industrial capacity and recognized international standing make it a natural candidate to take a leading position in innovative military technologies. Warsaw should therefore join these efforts as soon as possible, with the aim of becoming a leader in the field. Such initiatives need not be limited to the robotization of Polish and allied forces. The Robotic Revolution in Military Affairs offers a real chance to establish joint European armed forces of greater overall strength and efficiency. Forming fully robotized units from scratch, in which all combat tasks are carried out by machines, could remove a number of barriers that have prevented common European structures from being established. The natural reluctance of French, Spanish, or Italian politicians to deploy their troops to other countries would no longer come into play, and there would be greater certainty that allied forces would honor their commitments. Such a solution would also allow wealthy Western and Southern allies to reinforce the eastern flank with financial resources and technical expertise alone, greatly improving the political prospects of a project crucial to Poland’s security. This is why the issue should become the object of intense and sustained effort by Polish diplomacy.

Georgia, USA, July 17, 2013. Cigdem Cakici of the Turkish company Aselsan uses a common gaming controller to operate a Kaplan unmanned ground vehicle during a training demonstration at Fort Benning. © Erik S. Lesser (PAP/EPA)

The Robotic Revolution in Military Affairs is happening before our very eyes. Russia is taking decisive steps to gain an advantage in conventional warfare and to secure the ability to conduct military operations unburdened by casualties. Efforts to ban autonomous weapons, doomed to fail from the very beginning, suffered a definitive defeat in Geneva. Poland should understand and accept the emerging military reality and place itself at the forefront of countries developing safe and ethical robotic military technologies. The potential of these technologies to change the face of war and to increase security on NATO’s eastern flank makes the robotization of our armed forces both a moral and a political duty, and an opportunity to build a politically acceptable and truly effective common European defense force. Such a force would combine the best of human and machine judgment, setting a new moral and military standard on the battlefield, and would help Europe face existing and emerging challenges in the coming era of rapid technological progress.

On September 12, after this article was filed, the European Parliament passed a resolution calling for a universal ban on autonomous weapons, with 82% voting in favor. This clearly demonstrates the danger of Europe freely electing to damage its own strategic standing with self-imposed and ultimately ineffective restrictions, leaving the most vulnerable member states even more exposed. Poland needs to move swiftly against this troubling wave of well-meaning political naivety[21].

 


[1] J. Delcker, US, Russia Block Formal Talks on Whether to Ban ‘Killer Robots’, January 9, 2018, „Politico.eu”, www.politico.eu/article/killer-robots-us-russia-block-formal-talks-on-whether-to-ban/, (accessed: September 1, 2018).

[2] United States of America, Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems, Statement before Group of Governmental Experts to the CCW, Geneva, 9–13 April 2018, CCW/GGE.1/2018/HR.4, (accessed: September 1, 2018).

[3] E. B. Kania, China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapon Systems, April 17, 2018, „Lawfare Blog”, www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems, (accessed: September 1, 2018).

[4] K. Mizokami, Russia’s Tank Drone Performed Poorly in Syria, June 18, 2018, https://www.popularmechanics.com/military/weapons/a21602657/russias-tank-drone-performed-poorly-in-syria/, (accessed: September 1, 2018).

[5] The scenario in question is no longer part of the military science fiction genre. Self-learning algorithms are now capable of defeating experienced pilots on simulators used for air combat training – N. Ernest et al., Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions, „J Def Manag” 2016, 6 (1).

[6] Lt. J. Hanacek, The perfect can wait; good solutions to the ‘drone swarm’ problem, „WarOnTheRocks.com”, August 14, 2018, https://warontherocks.com/2018/08/the-perfect-can-wait-good-solutions-to-the-drone-swarm-problem/, (accessed: September 1, 2018).

[7] Lt. C. D. Pinion, The Navy and Marine Corps Need to Prepare for the Swarm of the Future, „WarOnTheRocks.com”, March 28, 2018, https://warontherocks.com/2018/03/the-navy-and-marine-corps-must-plan-for-the-swarm-of-the-future/, (accessed: September 1, 2018); T. X. Hammes, Cheap Technology Will Challenge US Tactical Dominance, March 29, 2016, https://ndupress.ndu.edu/JFQ/Joint-Force-Quarterly-81/Article/702039/cheap-technology-will-challenge-us-tactical-dominance/, „Joint Force Quarterly” 81, 2nd Quarter 2016, (accessed: September 1, 2018).

[8] Major J. Hurst, Robotic Swarm in Offensive Maneuver, October 2017, https://www.researchgate.net/publication/320273069_Joint_Force_Quarterly, „Joint Force Quarterly” 87, 4th Quarter 2017, (accessed: September 1, 2018).

[9] Mandate forces, managed by a specialized corps of specialists subordinate to international agencies, could offer a way out of the trap into which the poorest and least technologically and institutionally developed countries tend to fall: they would provide citizens and state institutions with substantial protection without making them dependent on the whims of a local military clique. This mechanism, one of the main factors behind the perpetuation of world poverty, is identified by P. Collier in his book The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done About It (Oxford University Press, New York 2007).

[10] U.S. Department of Defense, Unmanned Systems Integrated Roadmap: 2017–2042 (2018), www.documentcloud.org/documents/4801652-UAS-2018-Roadmap-1.html#document/p1; E. B. Kania, Battlefield Singularity: Artificial Intelligence, Military Revolution and China’s Future Military Power, November 28, 2017, https://www.cnas.org/publications/reports/battlefield-singularity-artificial-intelligence-military-revolution-and-chinas-future-military-power, Center for a New American Security, (accessed: September 1, 2018).

[11] J. Altmann, F. Sauer, Autonomous Weapon Systems and Strategic Stability, „Survival” 59 (5), p. 124.

[12] Ibidem, pp. 124-125.

[13] G. A. Pratt, Is a Cambrian Explosion Coming for Robotics?, „Journal of Economic Perspectives”, vol. 29 no. 3, Summer 2015, pp. 51-60.

[14] U.S. Department of Defense, Unmanned Systems Integrated Roadmap…, p. 25.

[15] B. Docherty, Ban ‘Killer Robots’ to Protect Fundamental Moral and Legal Principles, August 21, 2018, https://theconversation.com/ban-killer-robots-to-protect-fundamental-moral-and-legal-principles-101427, „The Conversation”, (accessed: September 1, 2018).

[16] In the following section I will base my conclusions on the works of ethicists dealing with topics of war and new technologies: K. Anderson, M. C. Waxman, Law and Ethics For Autonomous Weapon Systems. Why a Ban Won’t Work and How the Laws of War Can, Stanford University, The Hoover Institution Jean Perkins Task Force on National Security and Law Essay Series, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250126; S. Kershnar, Autonomous Weapons Pose No Moral Problem. Killing by Remote Control. The Ethics of an Unmanned Military, (eds.) B. Strawser, J. McMahan, Oxford, Oxford University Press 2013, pp. 229-245; P. Lin, G. Bekey, K. Abney, Autonomous Military Robotics: Risk, Ethics, and Design, „Philosophy” 2008; G. Lucas Jr., Engineering, Ethics and Industry: The Moral Challenges of Lethal Autonomy. Killing by Remote Control. The Ethics of an Unmanned Military, (eds.) B. Strawser, J. McMahan, Oxford, Oxford University Press 2013, pp. 211-228.

[17] The discourse on autonomous weapons distinguishes three levels of human control: human-in-the-loop (a human must pull the trigger), human-on-the-loop (a human can at any moment interrupt fire initiated by the robot’s autonomous decision), and human-out-of-the-loop (a human cannot stop the fire, for example because contact with the robot is lost for the duration of the fight). The last two solutions, and the out-of-the-loop option in particular, arouse the greatest ethical controversy. Nonetheless, the authors of alarmist editorials tend to forget that armies seek to control the actions of their agents as closely as possible and have no need of robots as tools of blind, indiscriminate violence: militaries have had such an instrument at their disposal for a hundred years in the form of dumb munitions (artillery and aviation) and have been actively trying to rid themselves of it.

[18] Future of Life Institute (2015), Autonomous Weapons: An Open Letter from AI & Robotics Researchers, https://futureoflife.org/open-letter-autonomous-weapons/, (accessed: September 8, 2018).

[19] W. Bober, M. Zając, Autonomous Military Robots – critical topography of possible reactions to upcoming technological revolution, „Zeszyty Naukowe Politechniki Śląskiej – Organizacja i Zarządzanie” [„Research Bulletins of the Silesian University of Technology– Organization and Management”], no. 110, pp. 201–216.

[20] Estonian Defence Ministry, Estonia looking to develop unmanned land systems within the framework of European defense cooperation, www.kaitseministeerium.ee/en/news/estonia-looking-develop-unmanned-land-systems-within-framework-european-defence-cooperation, (accessed: August 2, 2018).
[21] European Parliament Passes Resolution Supporting a Ban on Killer Robots, September 14, 2018, https://futureoflife.org/2018/09/14/european-parliament-passes-resolution-supporting-a-ban-on-killer-robots/, Future of Life Institute, (accessed: September 24, 2018).

All texts published by the Warsaw Institute Foundation may be disseminated on the condition that their origin is credited. Images may not be used without permission.
