Take a burn victim, a shoplifter and a warmonger. Those who have been burned seek relief, those who shoplift seek wealth, but those who seek conflict on a global scale rejoice — with missiles held high and arsenals worldwide — at the recognized dawn of artificial intelligence.
The development and deployment of AI in decision-making for weapons systems pose significant legal challenges and opportunities. Without an impartial and data-driven supervisor, the margins of conflicts balloon out of proportion. As a result, numerous human lives, tons of critical infrastructure and enormous economic fluctuations are at stake.
The United States is a de facto participant — actively and passively — in major conflicts in the last century, and it begs a variable reality where the liability of collateral damage to military operators evaporates.
When a bombing run goes off, one of the legal challenges is the allocation of responsibility and liability for the actions and outcomes of AI-driven weapons systems. As AI technology advances, in the possible future, weapons systems may become more autonomous, leading toward reduced human liability, distress and cost.
Who is legally responsible for the actions and outcomes of autonomous weapons?
Get The Daily Illini in your inbox!
Depending on the degree of autonomy and human control over the weapon system, different positions could bear legal responsibility, such as the user, the commander or the developer or the state.
However, assigning responsibility and liability smooths a logistical bottleneck, with the ruthless predictability and operational simplicity of autonomous weapons systems and the abundance of transparency and accountability mechanisms.
AI can enable faster and more comprehensive processing of large amounts of data from various sources, such as sensors, cameras, drones and satellites, to name a few. This can provide incredibly valuable information and evidence for military decision-making, but it can also raise legal quandaries about the how to safeguard the privacy and the security of the data collected.
How can we ensure the admissibility and reliability of this data?
A third legal benefit is the preservation of human dignity and values in warfare. AI can potentially reduce human suffering and casualties by minimizing collateral damage, enhancing precision and avoiding unnecessary violence.
In addition, the presence of humans in warfare promotes dehumanizing practices, reduces human empathy and compassion and erodes moral responsibility.
We cannot allow nascent operators to maintain control of processes.
Therefore, it is essential to develop and implement a legal framework that regulates the development and deployment of AI-driven weapons systems in a way that respects legal principles and safeguards human rights.
Society rests on the clear establishment of standards for the design, testing, verification and validation of AI-driven weapons systems, while incorporating meaningful human involvement and oversight in the use of AI-driven weapons systems, not only in the United States but also abroad.
Harrison is a senior in LAS.