The Army Contracting Command, the division of the U.S. Army responsible for contracting, has invited industrial contractors and research teams to submit ideas for improving its Advanced Targeting and Lethality Automated System (ATLAS).
According to the solicitation, the Army wants to leverage recent advances in computer vision and Artificial Intelligence / Machine Learning (AI/ML) to develop autonomous ground-combat vehicles.
This is “another significant step towards lethal autonomous weapons”, warns Stuart Russell, a professor of computer science at UC Berkeley and AI expert, who has voiced concerns about the initiative and the potential dangers of developing fully autonomous lethal weapons.
The Army’s call states that ATLAS will employ Artificial Intelligence algorithms to detect and identify targets, and that only parts of the fire-control process will be automated. According to the Defense Department, the idea is to “maximize the amount of time for human response and leave the decision to the human operator”. Moreover, the department expects AI algorithms to reduce the possibility of civilian deaths and other unintended consequences.
Developing lethal autonomous weapons is one of the biggest fears accompanying the advancement of Artificial Intelligence. Organizations worldwide have launched programs and campaigns to ban autonomous weapons. Nevertheless, many governments, defense projects, and companies continue to invest in autonomous weapons development despite repeated warnings about the dangers and potential consequences of this technology.