
We’ve reached an age where autonomous robots could one day carry out their own military missions. Everyone talks about the dangers of AI taking jobs, but risking soldiers’ lives overseas is one job few would complain about handing over to machines. Still, the future of autonomous artificial intelligence remains unknown. How much should soldiers and civilians trust machines to carry out missions? I, for one, was raised on the Terminator movies. The military, however, seems to have forgotten Schwarzenegger’s fight to protect human life from his evil AI buddies: funding to integrate artificial intelligence into the military is in full swing.

TL;DR

  • $2 billion has just been added to the United States defense budget to widely integrate AI into the military.
  • The Pentagon also currently funds several AI development projects to keep pace with its competitors, including Project Maven and a Booz Allen Hamilton contract.
  • Canada recently tested “Sapient” to scan cities for enemy movement, but alternative uses for the technology are alarming.
  • For years, researchers have warned of the ethical dilemmas behind trusting technology with human tasks and decision making.
Artificial intelligence

Source: geralt | pixabay

$2 Billion

On the 60th anniversary of the Defense Advanced Research Projects Agency (DARPA), the agency announced a $2 billion addition to the U.S. defense budget. The money will fuel the effort to make military commanders more comfortable with AI (artificial intelligence) systems. This sudden increase in funding comes in response to pressure from overseas, including the usual suspects, China and Russia. Earlier this month, the U.S. and Russia oddly found themselves sitting on the same side of the table, facing most of the UN: 26 countries supported a ban on weapons that use artificial intelligence to choose targets, while the U.S. and Russia refused to back it.

Current AI Projects

Anyone familiar with America’s enormous defense budget (nearly $600 billion in 2018) recognizes that $2 billion could hardly be called startling; however, let’s not forget the billions more being spent on AI every year. Additional contracts recently awarded by the American government include Project Maven and a Booz Allen Hamilton deal. The former includes $93 million for target discrimination; the latter commits $885 million to “undescribed artificial intelligence programs” over the next five years.

Project Maven

That’s right, Project Maven is still a go. Some may find this surprising after the uproar the project caused earlier this year. (Not that Google employees openly revolting against an unsound project surprises anyone anymore.) For anyone not tuned into the extreme politics of tech nerdism, Project Maven combines the familiar ability of a self-flying drone with the developing trend of teaching AI to choose targets for itself. So now tiny spy planes can shoot whomever they deem a threat. Well…almost. These systems, built around a technique known as target discrimination, have yet to be given full autonomy. But remember, in Rise of the Machines, Skynet didn’t ask permission before using everything it learned to destroy human civilization.

All jokes aside, Defense One’s Global Business Editor Marcus Weisgerber gave some insight into Project Maven back in March. Decide for yourself whether this tech qualifies as cool or creepy.

Sapient

America isn’t the only country spending money on smarter robots. Recently, Canadian soldiers conducted a military exercise in Montreal to test a British technology known as ‘Sapient.’ This Bond-villain-sounding name designates an AI capable of scanning an urban environment for threats and enemy movement, using sensors mounted on planes flying above the city. Handing the risky business of scouting for enemies to a brilliant computer, rather than to human-error-prone soldiers, sounds like a great way to protect troops. However, unlike Project Maven, Sapient acts autonomously. The AI sending live feeds of people moving about the streets chooses for itself which denizens to flag as threats, then decides on its own to send that information back to home base. While the robot takeover would probably spare the benevolent Canadians, trials of Sapient will soon be conducted in the US and the UK.

AI Ethics


Self-driving car

Source: Giphy

The military may seem like an obvious target for criticism over the use of AI, but several industries have become increasingly dependent on this type of technology. Driverless cars, for example, use AI to make countless decisions. These include the outcome of an unavoidable crash, in which the self-driving car must choose between hitting a pedestrian to save its driver or swerving into a median, sparing the unsuspecting pedestrian at the risk of killing its owner. The question here is whether humans should decide the fate of human lives, or whether that burden should be placed on a computer.

As Nick Bostrom and Eliezer Yudkowsky eloquently warned in their paper “The Ethics of Artificial Intelligence” for Cambridge University, “[When] AI algorithms take on cognitive work with social dimensions—cognitive tasks previously performed by humans—the AI algorithm inherits the social requirements.” Their example imagines banks using AI to decide whether mortgage applicants should be accepted. While the system would be put in place to eliminate racial bias, the AI could end up using past street addresses to infer whether an applicant had lived in poverty-stricken urban neighborhoods. Applicants denied on those grounds would disproportionately be African American or Latino, thus reinforcing society’s systemic racism. Just because a computer makes a decision doesn’t mean the decision isn’t biased. The computer simply won’t be aware of its own bias, because computers don’t wonder why patterns develop.
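To see how that proxy effect can creep in, here is a minimal Python sketch. The data and the decision rule are entirely hypothetical (no real bank works this way); the point is simply that a model that never sees race can still deny applicants along racial lines when a feature like neighborhood stands in for it:

```python
# Hypothetical illustration of proxy bias: the "race-blind" rule below
# never consults race, yet its neighborhood penalty reproduces racial bias.
from collections import Counter

# Made-up applicant records; race is recorded only to audit the outcome.
applicants = [
    {"race": "white",  "neighborhood": "suburb",     "income": 72_000},
    {"race": "white",  "neighborhood": "suburb",     "income": 55_000},
    {"race": "black",  "neighborhood": "inner_city", "income": 71_000},
    {"race": "black",  "neighborhood": "inner_city", "income": 56_000},
    {"race": "latino", "neighborhood": "inner_city", "income": 68_000},
    {"race": "latino", "neighborhood": "suburb",     "income": 58_000},
]

def model_approves(applicant):
    """A rule 'learned' from historical data: it scores income, but also
    penalizes inner-city addresses -- a stand-in for decades of redlining."""
    score = applicant["income"] / 1_000
    if applicant["neighborhood"] == "inner_city":
        score -= 20  # the proxy feature doing the damage
    return score >= 55

approved, total = Counter(), Counter()
for a in applicants:
    total[a["race"]] += 1
    approved[a["race"]] += model_approves(a)

for race in total:
    print(f"{race}: {approved[race]}/{total[race]} approved")
# Prints: white 2/2, black 0/2, latino 1/2 -- with race never consulted.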

Humanity’s Responsibility

Weaponizing AI would mean crossing a dangerous line. What might ambitious military leaders choose next to attach to a self-piloting drone? Governments around the world already insist that cell phone privacy should be breached in the event of a terrorist threat, and the same excuse could be used to let Sapient scan any city in the world. Citizens might feel safer with an intelligent computer watching their backs, but this latest rendition of 1984 would run counter to all privacy rights.

Artificial Intelligence

Source: South China Morning Post


Humans who grow too comfortable with technology making its own decisions, too trusting of an unfeeling problem-solver, risk dehumanizing violent acts altogether in the name of innovation. Letting autonomous computers choose which targets to eliminate may sound convenient, but “inhuman” earned its negative connotations for good reason.