Zakariya Rahimi
What Are AI-Powered Lethal Autonomous Weapon Systems?
Lethal autonomous weapon systems (LAWS) are defined by the DoD as “A weapon system that, once activated, can select and engage targets without further intervention by an operator”.[5] The DoD has also described multiple tiers of autonomy for lethal autonomous weapon systems.[6] The lowest tier is defined by the DoD as “Semi-autonomous weapon systems, which require human operator selection and authorization to engage specific targets (e.g., human-in-the-loop control)”.[7]
A human-supervised lethal autonomous weapon system is defined by the DoD as “Human-supervised autonomous weapon systems, which allow human intervention and, if needed, termination of the engagement, with the exception of time-critical attacks on platforms or installations (e.g., human on-the-loop control)”.[8] A fully autonomous lethal weapon system, by contrast, operates without any human intervention or supervision. All lethal autonomous weapon systems are powered or controlled by artificial intelligence.
How Can AWS Fail?
AWS pose various risks, one of which is the chance of accidental escalation or conflict. AWS, like human beings, are not infallible: they are capable of failures, glitches, malfunctions, and mistakes. These can take many different forms, such as engaging targets without authorization, crossing a border during a tense period, killing civilians, or visibly targeting an adversary's forces.