Discussion Prompt:
Start this discussion by reading the article “Autonomous Weapon Systems and US Military Robotics”
As an experienced programmer, you’ve seen how difficult it is to fully debug complex programs, such as some of the AI applications that we worked on in this course. What is your opinion on the desirability of any military producing autonomous weapons of war? Explain why you’re for or against this particular application of AI.
Response:
The adoption of remote-controlled and automated robotic systems is undoubtedly one of the most significant advancements in modern warfare and will remain at the forefront for years to come. I believe the greatest benefit is the way unmanned drones keep people out of harm’s way, for example in scouting and ordnance disposal. UGVs that fulfill logistical functions, such as transporting equipment and the wounded, are also a positive result of these developments. Applying AI to platforms like these could further their life-saving ability. Concern does arise about accidents caused by programming bugs, but the same risk is present in devices that are not AI-driven. At the same time, I believe that giving AI-controlled machines the ability to kill people presents a moral problem.
To me, a significant difference exists between a missile-equipped drone controlled by an operator in a command center or in the field and one that, without human input, uses AI to identify and engage targets. Artificial intelligence coupled with technology designed for the express purpose of causing harm creates a twofold point of failure: failures can occur not only in the device carrying out the action but also in the AI’s decision-making process. We might argue that AI would be less prone to error than a human controller, and it is likely true that a computer could identify targets visually more reliably than a person. Even so, because computers are designed and programmed by people who make errors, there will never be a system with one hundred percent reliability. Furthermore, I believe it is much harder to “hack” a human than to hack a computer for nefarious purposes, but that is a complex and separate discussion, so I will not explore it here.
Attempting to absolve ourselves of responsibility for killing by delegating the act to something with no inherent moral agency makes us more culpable, not less, because of the disregard such an approach shows for human life. As long as humanity as we know it continues, there will be those who seek to do evil, and conflict will result. Therefore, I believe we should seek to resolve that conflict with as little harm and loss of life as possible. Whether a soldier pulls the trigger of a gun or the trigger of a drone’s joystick, a moral decision is made to end the life of another human. We would be kidding ourselves to believe that just because a person isn’t physically pulling the trigger, we are innocent of the consequences. To set loose an AI that can decide to end a human life, no matter the restrictions placed on it, is still indiscriminate, for we have given it that ability. As far as I am concerned, no artificial intelligence or otherwise autonomous device should be given the agency to make an independent, intentional decision meant to end the life of a human.