Modern militaries have moved well beyond reliance on hand-loaded mortars and howitzers. Those old-fashioned ways of lobbing explosive charges tend to reveal the location of the troops using them. Protecting our troops and the civilians in or near a battlefield is best accomplished through a mix of conventional and high-tech tools. A U.S. Army spokesperson says "allowing artificial intelligence (AI) to control some weapons systems may be the only way to defeat enemy weapons."

The next wave of weapons technology comes in the form of autonomy. An autonomous weapon can decide when, and to some extent how, to execute its mission. This evolution goes well beyond letting the device adjust a time-delay fuse or select alternate target coordinates.

AI will further supercharge a weapon's IQ, allowing it to make superior tactical or strategic decisions. For example, an AI weapon could choose the best sequence for attacking targets, the best targets to attack, or whether holding back for a few minutes will reveal even better targets. Keeping autonomous AI weapons within accepted safety parameters (for our troops and for civilians) will be difficult, perhaps impossible. A drone circling above a battlefield sees ever-changing patterns and could mistake civilians for enemy combatants. AI weapons are likely to deliver increased battlefield efficacy and greater safety for our troops, but they will incur high initial development costs, perhaps offset by lower operating costs.
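To make the engage-or-wait decision concrete, here is a minimal, purely illustrative sketch. Every name, weight, and threshold below is hypothetical, invented for illustration, and does not reflect the logic of any real weapons system.

```python
# Toy sketch of an engage-or-wait decision: score each candidate target by
# expected military value, discounted by identification confidence and
# heavily penalized for potential civilian harm. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    military_value: float   # estimated tactical value, 0..1
    id_confidence: float    # confidence the target is a combatant, 0..1
    collateral_risk: float  # estimated risk to civilians, 0..1

def score(c: Candidate, civilian_penalty: float = 10.0) -> float:
    """Expected value of engaging now; civilian risk dominates the score."""
    return c.military_value * c.id_confidence - civilian_penalty * c.collateral_risk

def decide(candidates: list[Candidate], engage_threshold: float = 0.5):
    """Engage the best-scoring candidate, or hold if nothing clears the bar.
    Holding models the 'wait a few minutes for better targets or better
    identification' option described above."""
    best = max(candidates, key=score, default=None)
    if best is None or score(best) < engage_threshold:
        return ("hold", None)
    return ("engage", best.name)

if __name__ == "__main__":
    field = [
        Candidate("vehicle-A", military_value=0.9, id_confidence=0.95, collateral_risk=0.0),
        Candidate("group-B", military_value=0.7, id_confidence=0.40, collateral_risk=0.3),
    ]
    print(decide(field))  # ('engage', 'vehicle-A')
```

Even this toy version shows why the safety problem is hard: the decision hinges entirely on estimated confidences and risks, and a drone's estimates shift constantly as the scene below it changes.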

"The UN's Convention on Certain Conventional Weapons (CCW) has been discussing autonomous weapons for five years, but there is little agreement" on what is acceptable. The U.S. Armed Forces have taken a "conservative approach to AI, one that involves a human in the decision-making process for the use of deadly force". Developing a generally accepted set of ethical principles for AI weapons will be difficult at first, but once political acceptance is achieved, infusing those ethics into the weapons will be easy. The nagging concern for military leaders will be whether all sides in a conflict are playing by the same rules. Overpowering those who cheat will call for aggressive use of AI weapons.

We are in the early stages of imposing ethics on AI systems. Civilian AI systems have already raised consternation because some have "been trained to bypass advanced antimalware software," creating a huge future security risk. Military AI systems will likely wage cyber-combat against enemy communications and control systems, so defeating protective software is an almost mandatory capability for AI weapons.

Autonomous weapons infused with artificial intelligence are probably already at an advanced stage of development in modern militaries. We cannot expect public revelations about how far our military has taken its autonomous intelligent weapons.

Street protesters have not yet caught up with the state of the art in military weapons. They seem stuck on global warming, identity politics, and lurid examples of experimental U.S. weapons that can "vaporize human flesh". Notably, they voice no objection to Russia's Peresvet 50-kilowatt laser gun. Eventually, the protesters will discover autonomous intelligent weapons, and when they do, the shock may be sufficient to launch a modern-day version of the 1960s "Ban the Bomb" movement.

In contrast, Americans will benefit from an uptick in security and safety for our troops, and taxpayers may eventually see some savings in the federal budget. Regardless, the migration to AI weapons seems inescapable.
