U.S. WEEKLY offers an in-depth analysis of various geopolitical processes that have a direct effect on the US' domestic and foreign policies. This analytical column is possible thanks to cooperation with Polish media abroad: Dziennik Związkowy – Polish Daily News, Polishexpress of the United Kingdom, and WIrlandii.pl of Ireland.
Date: 2 August 2021
Artificial Intelligence on the Battlefield
The use of artificial intelligence (AI) in military decision-making systems and conventional solutions is no longer surprising. What is eye-opening, however, is the possibility of using it without human supervision. When the Deep Blue computer won a chess game against world champion Garry Kasparov in New York City in 1997, most experts perceived the event as a sign that AI was catching up with human intelligence.
Wall-Defender is a computer game that only older readers or die-hard fans of retro games may remember. Created in 1983 for the Atari 2600 platform, the game allowed the player to take on the role of a fortress defense commander. An 8-bit processor, 128 bytes of RAM, and technical limitations related to graphics display left a lot of room for the player's imagination. Today's computers, equipped with incomparably more powerful processors and graphics chips, can do much more. A "Wall-Defender" in 2021 does not display a bunch of pixels simulating an enemy, leaving the battle with a virtual opponent to the player. The "Wall-Defender" in 2021 fights a real enemy in real-time.
Operation Guardian of the Walls began after unrest in Jerusalem led to rockets being fired from Gaza into Israeli cities. The first salvo of 38 rockets was fired on April 23 and 24. Between May 10 and 18, more than 3,440 rockets were launched toward the cities of Sderot, Ashkelon, Ashdod, and Jerusalem. The Iron Dome defense system, designed to detect and destroy rockets and 155 mm artillery shells fired from up to 43 miles away, intercepted more than 90% of the rockets heading toward inhabited areas. Using AI algorithms, the system analyzes whether a detected series of short-range rockets and missiles threatens the population or critical infrastructure of the country. If the AI system identifies such a threat, it automatically sends a command to launch an interceptor missile that neutralizes the threat at high altitude or over an uninhabited safe area. The system has also been operational in the US since 2021 – the US Army recently activated two Iron Dome defense batteries at Fort Bliss, TX. They are planned to be used in military exercises later this year and to become fully operational in 2023.
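The core decision described above – intercept only those projectiles whose predicted impact point threatens a populated or critical area – can be sketched in a few lines. This is purely illustrative: the real Iron Dome classification logic is classified, and every name, coordinate convention, and threshold below is an assumption.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class ProtectedZone:
    """A city or critical facility, modeled as a circle (hypothetical model)."""
    x_km: float
    y_km: float
    radius_km: float

def should_intercept(impact_x_km: float, impact_y_km: float,
                     zones: list[ProtectedZone]) -> bool:
    """Return True if the predicted impact point falls inside any
    protected zone; otherwise the projectile is left to land harmlessly.
    A real system would also weigh trajectory uncertainty and interceptor
    inventory, which this sketch ignores."""
    return any(hypot(impact_x_km - z.x_km, impact_y_km - z.y_km) <= z.radius_km
               for z in zones)
```

The key economic point of such filtering is that interceptor missiles are far more expensive than the rockets they destroy, so firing only at genuine threats matters as much as hit accuracy.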
Operation Guardian of the Walls was not only about using the existing defense system – during the conflict, the Israeli army used numerous innovative solutions. Consequently, many experts claim that the fight turned into the world's first AI-led war. Israel planned its attacks on Gaza with the use of AI. Huge amounts of data collected from open-source intelligence (OSINT), geospatial intelligence (GEOINT), and human intelligence (HUMINT) were analyzed in order to carry out so-called precision attacks. On the basis of satellite images, information from sensors, radars, UAVs, and other sources, the Israeli army was able to obtain accurate 3D images of the Gaza Strip as well as identify the locations of rocket launchers installed by Hamas. Moreover, the AI identified the types of weapons at the enemy's disposal and, on the basis of the acquired data, determined, in real-time, the safest routes for ground troops operating near the front line. "For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy," a senior officer in the Israel Defense Forces (IDF) Intelligence Corps said in a commentary quoted in a Jerusalem Post article.
Additionally, this report highlighted the importance of the work of soldiers in Unit 8200, who authored the algorithms and code for advanced programs called "Alchemist," "Gospel," and "Depth of Wisdom," which were developed and used during the combat. "Gospel" helped the IDF military intelligence division to improve intelligence recommendations and identify key targets, which were then passed on to the air force to carry out attacks. The Israeli military says that AI helped to shorten the duration of battles as well as increase the effectiveness and speed of detecting attack targets, thanks to so-called super-cognition technology (target recognition carried out by AI). According to the military, "Israel had managed to achieve more in 50 hours of fighting than in the 50 days of the war in 2014."
Indeed, the 2021 Israel-Palestine crisis can be called the first AI-led war, but it is worth noting that AI has long been present on the battlefield. UAVs, fire-control systems (FCSs), drone swarms, and means of reconnaissance, data exchange, and aggregation – these and multiple other elements are effectively used not only on training grounds but also during actual clashes. The conflicts in Syria and Ukraine serve as examples.
As noted above, what makes the growing role of AI in military systems not merely eye-opening but intimidating is the possibility of using it without human supervision. Subsequent scientific analyses played down the significance of Deep Blue's 1997 victory over Kasparov, trying to prove that Kasparov lost because he played the game "uncharacteristically badly." What is more, the intellectual value of chess as a game was questioned, with critics arguing that a computer can win it through brute-force search (systematically checking enormous numbers of possible continuations). Another clash, in 2020, showed that humans are no longer a worthy opponent for well-trained AI.
In a simulated F-16 dogfight between a US Air Force pilot, a graduate of the F-16 instructor course at the Air Force Weapons School with over 2,000 hours of flight experience, and an AI developed by Heron Systems, the human pilot lost 5-0 in five different maneuver scenarios.
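The brute-force strategy attributed to chess computers above – checking every possible continuation rather than "understanding" the game – is easiest to see on a toy game. The sketch below exhaustively solves a simplified version of Nim; it is not Deep Blue's actual algorithm (which combined search with heuristic pruning and evaluation), just a minimal illustration of exhaustive game-tree search.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones: int) -> bool:
    """Toy Nim: players alternately take 1-3 stones; whoever takes the
    last stone wins. Exhaustively explores every continuation: the current
    player wins if any legal move leaves the opponent in a losing position.
    With zero stones left, the current player has already lost."""
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

Even this tiny example shows why brute force scales poorly: the tree grows exponentially with game length, which is why chess engines must prune it and why the "checking every possible solution" description of Deep Blue is a simplification.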
Everything indicates that AI will play an increasingly significant role during armed conflicts. There is no doubt about it. Therefore, the limitations that decision makers have to place on AI systems are extremely important. Their growing autonomy requires taking appropriate precautions and, above all, prudence from the designers and operators of such systems.
Author: Wiktor Sędkowski
Wiktor Sędkowski graduated in Teleinformatics at the Wrocław University of Science and Technology, specializing in the field of cybersecurity. He is an expert on cyber threats and a holder of CISSP, OSCP, and MCTS certificates. He has worked as an engineer and solution architect for leading IT companies.
If content prepared by Warsaw Institute team is useful for you, please support our actions. Donations from private persons are necessary for the continuation of our mission.
All texts published by the Warsaw Institute Foundation may be disseminated on the condition that their origin is credited. Images may not be used without permission.