18 May 2023

US Legislative Initiatives to Tackle AI Risks

In a landmark gathering at the White House, some of the most influential CEOs and leaders in artificial intelligence (AI) convened for a crucial discussion on responsible innovation and mitigating AI risks.  The meeting, organized to foster collaboration and cooperation between the government and the private sector, brought together renowned figures from technology giants and AI-focused companies.

In this series of articles, we will examine the US legislative initiatives behind the meeting described above, each of which addresses different risks of using AI.

Block Nuclear Launch by Autonomous AI Act

In recent years, the rapid development of AI has raised concerns about the potential for these systems to be used for malicious purposes, including the control of nuclear weapons.  To address these concerns, US lawmakers have introduced the Block Nuclear Launch by Autonomous AI Act of 2023 (the “Act”).

The proposed legislation seeks to ban the use of autonomous AI systems in the decision-making process for the launch of nuclear weapons.  This would require humans to be involved in any decision to launch a nuclear weapon rather than allowing AI to make that decision independently.

The key concern with using autonomous AI in the context of nuclear weapons is the potential for these systems to make decisions outside the bounds of what is considered acceptable or ethical.  For example, an AI system may interpret a situation in a way that leads it to conclude that a nuclear launch is necessary, even when it is not, resulting in immeasurable damage and loss of life.

By requiring human operators to be involved in the decision-making process regarding nuclear weapons, the Act seeks to prevent the scenario described above.  In addition to requiring that a human operator make every decision to launch a nuclear weapon, the Act would expressly forbid the use of autonomous AI systems in that decision-making process.

In addition to these requirements, the Act also includes provisions for ensuring that any AI systems used in the context of nuclear weapons are subject to rigorous testing and evaluation.  This would ensure that these systems are reliable and accurate and do not pose a risk of malfunction or unintended consequences.

Finally, the proposed Act requires AI developers and users to comply with guidelines designed to prevent AI from being used for military purposes, particularly in the context of nuclear weapons.  These guidelines cover various topics, including data security, transparency, and ethical considerations.

One of the challenges in regulating AI and nuclear weapons is the complexity of both fields.  AI systems are incredibly complex and often difficult to understand, while nuclear weapons technology is highly specialized and heavily regulated.  This makes it challenging to develop regulations that are both effective and practical; given the stakes involved, that difficulty is precisely what prompted this legislative initiative.

The bill has received support from various stakeholders, including lawmakers, AI researchers, and nuclear disarmament advocates.  Supporters argue that the Act is necessary to ensure that AI is developed and used responsibly, particularly in the context of nuclear weapons.  Of course, some oppose the Act, claiming that its adoption could stifle innovation in the field of AI and hinder the development of new technologies.  Opponents also argue that the Act is unnecessary since many existing regulations and international agreements already govern the development and use of nuclear weapons.

In conclusion, the Act represents a significant step forward in regulating AI and ensuring it is used responsibly.  As AI technology continues to advance, it is essential that lawmakers and stakeholders across the globe work together to ensure that AI is used only for peaceful purposes and does not contribute to the development or use of nuclear weapons.


Author: Luka Đurić