Date: 15 February 2021

The Regulation of Artificial Intelligence – What Can America and the World Expect?

Efforts to develop artificial intelligence (AI) are not solely the domain of global corporations. AI-based technologies are also used in the security sector, which is why they are a subject of transatlantic cooperation within the framework of NATO.


In February 2019, President Trump signed Executive Order 13859 to maintain American leadership in the field of artificial intelligence. The initiative sets the tone for key areas such as R&D investment, guidelines and regulations, building competencies, and international collaboration supporting American research and innovation in AI. In late 2020, at an event organized within the framework of the Future Europe Initiative, NATO Deputy Secretary General Mircea Geoană rightly noted that “there are considerable benefits of setting up a transatlantic digital community cooperating on Artificial Intelligence (AI) and emerging and disruptive technologies, where NATO can play a key role as a facilitator for innovation and exchange.” It is hard to disagree with these words. While EU member states can rely on EU bodies for assistance, in transatlantic cooperation NATO seems to be the natural leader in an area that is already affecting security worldwide. Moreover, US leadership within NATO can significantly help to create international standards on AI: work in this field is far more advanced in the United States than in other countries, and the phrase “artificial intelligence” alone already appears in 117 American legislative acts.

The first thing that comes to mind when considering AI security is the danger posed by ominous machines that, commanded by an out-of-control artificial intelligence, threaten humanity. Nothing could be further from the truth. Current risks are far removed from those depicted in popular science-fiction movies, but that does not mean they should be ignored. As early as 2016, during a conference on “The Ethics of Artificial Intelligence,” the issue of Lethal Autonomous Weapons Systems (LAWS) was raised. The author of the presentation pointed out that such semi-autonomous weapon systems are already deployed in some hotspots, for instance in the demilitarized zone (DMZ) between North and South Korea. It is worth mentioning that the first prototypes of the Samsung SGR-A1 sentry gun, which can track a target without human intervention, were manufactured 15 years ago. Despite the passage of time, transferring the moral burden of pulling the trigger from a human to a machine continues to have its supporters and opponents.

Loss of life as a result of AI’s actions is, of course, the greatest risk to individuals, but unfortunately it is far from the only one. The most commonly discussed threat is the prospect of losing one’s job to AI-equipped robots. The lines between the digital, physical and biological worlds are already blurring: robots and digital programs are taking over responsibilities that can be automated with available AI solutions. The McKinsey Global Institute published a study concluding that 375 million people will have to change jobs by 2030 due to increasing automation, which corresponds to roughly 14% of the global workforce. The Oxford Economics report “How Robots Change the World” suggests that 20 million manufacturing jobs will be fully taken over by robots by 2030. It is no wonder that citizens fear being deprived of work. The argument that the market will change quite naturally along with the development of AI does not convince everyone, even though, as in every industrial revolution to date, existing jobs will change, some will no longer be needed and new ones will emerge.


Another potential risk relates to how our data is processed. First of all, most AI models need real data to learn to work properly. Algorithms that suggest which movie we should watch next do not necessarily base their picks on recently viewed productions; an AI can select a film we will find interesting surprisingly well by analyzing our social media activity, geolocation and the preferences revealed by our online purchases. The use of our data, even anonymized, is a societal concern in itself. A further issue, with far-reaching consequences, is that private data are processed by machines and decisions that affect citizens’ lives are made by algorithms. AI models taught with improper input can pose a considerable threat to those they analyze. There have been cases of AI designed to detect potential criminals by analyzing facial contours and expressions that produced highly skewed results. Numerous predictive models intended to assist the justice system in issuing sentences and setting bail have consistently produced worse outcomes for Black males, a direct consequence of the data they started from. Furthermore, according to a recent study from the University of California, Berkeley, an AI-based mortgage lending system charged Black and Hispanic borrowers higher rates than White borrowers for the same loans. When searching for the best employees on the basis of input data, Amazon’s recruiting tool selected mostly men; the reason was simple: the algorithm was trained on historical data dominated by male applicants. There are more such examples, but they all have one thing in common: bad input data teaches AI to make decisions that are far from error-free.
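The mechanism behind all of these failures can be demonstrated in a few lines of code. The Python sketch below is built entirely on synthetic data (the names skill, group and hired are hypothetical, not drawn from any of the systems mentioned above): a standard classifier is trained on historical labels that encode a past preference for one group, and it then reproduces that preference even though the “merit” feature is identically distributed in both groups.

```python
# Minimal sketch (synthetic data only) of how a model trained on biased
# historical labels reproduces the bias in its own decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 10_000

# A "merit" feature, identically distributed in both groups.
skill = rng.normal(size=n)
# A protected attribute (0 or 1) that should be irrelevant to the decision.
group = rng.integers(0, 2, size=n)

# Hypothetical historical labels that encode a past preference for group 0:
# the recorded outcome depends on group membership, not only on skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
predictions = model.predict(X)

for g in (0, 1):
    rate = predictions[group == g].mean()
    print(f"group {g}: share predicted 'hire' = {rate:.2f}")
# Despite identical skill distributions, group 0 is selected far more often:
# the model has faithfully learned the historical bias in its training data.
```

Running the sketch shows a markedly higher predicted “hire” rate for the favored group, and nothing in the model’s own metrics flags this: judged against its biased training labels, it appears to be working exactly as intended.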

AI can develop safely only if two conditions are met. The first is funding. In this respect, EU countries still lag behind the United States and are slowly catching up with China. In 2016, €3.2 billion was allocated to AI development in the EU, compared to about €12.1 billion in North America and about €6 billion in the Middle Kingdom. In 2019, for the first time in US history, the development of artificial intelligence and autonomous systems was identified as an R&D priority and specified in the administration’s budget. International Data Corporation, a market intelligence firm, predicts that global spending on artificial intelligence will more than double over the next four years, from €41 billion in 2020 to more than €90 billion in 2024.

The second condition is appropriate regulation that guarantees uniform technological development across allied countries and keeps algorithms anthropocentric, protecting our rights and values. In the USA, the first legal framework in the area of AI was introduced as early as 2011, when the state of Nevada passed a bill allowing the testing of autonomous vehicles. Other forerunners of legal requirements can be found in the guiding principles approved in December 2019 by a group of governmental experts on new technologies working under the auspices of the United Nations. The document, describing 11 principles, addresses Lethal Autonomous Weapons Systems (LAWS) but is detailed enough to be applied to civilian uses of AI as well. The Ethics Guidelines for Trustworthy AI, formulated by the European Commission, are yet another example of already available rules; they cover aspects such as human agency and oversight, technical robustness and safety, privacy and data governance, diversity, non-discrimination, fairness and accountability, among others. Technology companies that did not wait for state institutions to act, formulating their own best practices much earlier, should also be praised. Examples include the “Asilomar AI Principles,” published in 2017, IBM’s “Trusted AI” and Google’s “Responsible AI Practices.” Currently, Polish legislation does not specify liability for errors made by artificial intelligence in any particular way. This will change in the near future, as Polish law is already being adapted to the corresponding EU resolution (adopted in October 2020). Transatlantic cooperation on this issue is of great importance because properly developed and implemented regulations are likely to have a positive impact on the common market and on the security of AI-based systems.

Advanced AI algorithms are emerging before our eyes. In the coming years, they will significantly change the lives of the majority of people in the world. It is essential to prepare for these changes and to do so with the utmost responsibility. Cooperation at the state level and within organizations such as NATO seems to be one of the pillars of the upcoming transformation.

This article was originally published in “Polish Daily News” and “Polish Express”.

Author: Wiktor Sędkowski
Wiktor Sędkowski graduated in Teleinformatics from the Wrocław University of Science and Technology, specializing in cybersecurity. He is an expert on cyber threats and a holder of the CISSP, OSCP and MCTS certificates. He has worked as an engineer and solution architect for leading IT companies.

 _________________________________

All texts published by the Warsaw Institute Foundation may be disseminated on the condition that their origin is credited. Images may not be used without permission.
