Why ‘killer robots’ are neither feminist nor ethical

Autonomous weapons do not align with Canada’s current approach to foreign policy — and, as a new survey shows, Canadians’ opposition to their use is on the rise. Is it time for an outright ban?

By: Erin Hunt, humanitarian disarmament expert
22 January, 2019
Credit: Campaign to Stop Killer Robots

Six out of 10 Canadians responding to a recent poll opposed the development of weapons systems that would select and attack targets without human intervention (commonly known as autonomous weapons systems or killer robots). The poll results, released Tuesday, show that only 15 percent of Canadian respondents supported the use of such weapons, while 25 percent were not sure.

These types of weapons might sound like something from a sci-fi movie, but the survey and the context that prompted it are very real, and the Canadian government should be paying attention.

The survey was commissioned by the Campaign to Stop Killer Robots and was conducted by market research company Ipsos in December across 26 countries: Argentina, Australia, Belgium, Brazil, Canada, China, Colombia, France, Germany, Great Britain, Hungary, India, Israel, Italy, Japan, Mexico, Netherlands, Peru, Poland, Russia, South Africa, South Korea, Spain, Sweden, Turkey and the United States. The Canadian results were close to the global results where 61 percent of respondents said they oppose the use of lethal autonomous weapons systems, while 22 percent support such use and 17 percent said they were not sure.

Such opposition appears to be on the rise. In a near-identical survey by Ipsos in January 2017, 55 percent of Canadian respondents were opposed to autonomous weapons. The increase in Canadian opposition from 55 percent to 60 percent over the past two years mirrors a worldwide increase in opposition from 56 percent to 61 percent.

While public opposition to autonomous weapons has been growing, so has international attention. This month, 4,000 Google employees were named Arms Control Person of the Year for urging the company to not be “in the business of war.” The Campaign to Stop Killer Robots, which was co-founded by Mines Action Canada, where I work, was launched in 2012 in the face of growing concerns among tech experts and humanitarian actors about ongoing efforts to develop autonomous weapons. It has grown to include 88 non-governmental organizations in 50 countries advocating for a pre-emptive ban on autonomous weapons systems.

In November, UN Secretary-General Antonio Guterres called lethal autonomous weapons systems “politically unacceptable and morally repugnant” and urged states to prohibit them. Also in November, at the annual meeting of the UN’s Convention on Conventional Weapons in Geneva, the states parties to the convention decided to continue diplomatic talks on killer robots, but that process has no clear objective or timetable for negotiating new international instruments to address these concerns.

Outside the disarmament community, roboticists, artificial intelligence (AI) experts and scientific leaders have also been voicing their concerns. In November 2017, over 200 leaders in AI from across Canada signed an open letter to Prime Minister Justin Trudeau urging the government to “take a strong and leading position against Autonomous Weapon Systems on the international stage.”

But in light of this most recent public opinion data and the ongoing international work, where is the Canadian government on autonomous weapons? The short answer is nowhere special. I hope the longer answer is continuing internal discussions between Global Affairs Canada, the Department of National Defence, and Innovation, Science and Economic Development Canada. Canada has participated in the Convention on Conventional Weapons meetings since they began in 2014 and this government’s defence policy, Strong, Secure, Engaged, states that “The Canadian Armed Forces is committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal force.” Canada’s statements at the UN have often focused on international humanitarian law and the requirement to test all new weapons systems for compliance with international humanitarian law.

Can autonomous weapons use be considered feminist or ethical?

The lack of a comprehensive policy raises some questions about the government’s priorities. There seems to be a disconnect between the slow pace of action on a national autonomous weapons policy and two other government policies — its unofficial feminist foreign policy and the Pan-Canadian Artificial Intelligence Strategy.

Achieving a pre-emptive ban on autonomous weapons is a feminist issue. Remember, artificial intelligence is not neutral — human biases are baked into algorithms, and the data we use to train a machine learning program often reflects our own patriarchal and racist society. The experiences of people of colour and women are often not included in the development of artificial intelligence programs. A recent estimate by WIRED and Element AI found that only 12 percent of leading machine learning researchers were women. In many cases, AI has been found to magnify biases about race and gender.

So what happens when we combine bias in AI with weapons?

In short — scary things. Most obviously, when biased AI that cannot identify people of colour, especially dark-skinned women, or that misidentifies them, is involved in targeting decisions without meaningful human control, people who should not be targeted will be targeted. Furthermore, we already see examples of men being targeted during armed conflict based on their gender, age and location, so it stands to reason those errors will be compounded if human judgment is taken out of the targeting process. Canada should be taking steps to ensure that no one develops weapons that will magnify the power imbalances and biases our feminist foreign policy is trying to dismantle.

Autonomous weapons are also a concern from the perspective of the ethical use of AI, something Canada is hoping to promote. In 2017, the government announced a $125 million Pan-Canadian Artificial Intelligence Strategy as part of over $1.3 billion in funding for AI research and development in 2016-2017. The Pan-Canadian AI Strategy aims to strengthen Canada’s economy by “increasing the number of highly-skilled researchers and graduates, enhancing research capabilities and discoveries through collaboration across three centres of excellence, and demonstrating global leadership around the economic, ethical, policy and legal implications around advancement in AI technologies.”

Autonomous weapons systems are a major concern for the Pan-Canadian AI Strategy in a few ways. First of all, the strategy itself strives to demonstrate global leadership on “ethical, policy and legal implications” around AI. Much of the debate around autonomous weapons systems has focused on their ethical and legal implications. Of the Canadians who were opposed to autonomous weapons in the Ipsos survey, 67 percent indicated their opposition was in part because autonomous weapons “cross a moral line because machines should not be allowed to kill.”

Second, leaders from the three centres of excellence participating in the strategy were among the co-writers of the November 2017 letter to Trudeau asking for national legislation prohibiting autonomous weapons systems and the weaponization of AI.

Third, and possibly most importantly, autonomous weapons pose a serious risk to the public’s trust in AI more broadly. In addition to the Ipsos poll, a 2017 Canadian trust survey by Proof (formerly Environics) found that only 39 percent of Canadians trust that artificial intelligence will contribute positively to the Canadian economy, and even fewer women believe this to be true (36 percent). Only 25 percent of those surveyed by Proof trusted AI companies to do what is right for Canada, Canadians and our society. These levels of public trust will present a problem for the commercial success of AI in the future, even without images on the news of AI-powered autonomous weapons in use. Public trust in the technology is absolutely crucial to the transition from “cool techy thing” to an integral part of our lives. If the technology is weaponized, that transition will be so much harder. The Canadian government has made huge investments in AI — it cannot afford to damage people’s trust in the technology.

Canadians’ opposition to autonomous weapons is now undeniable. Canada has a history of leadership on peace and disarmament, coupled with a strong AI sector which has been quite outspoken on this issue. The Trudeau government should be listening to Canadian experts and to public opinion and begin to develop national legislation to prohibit the development and use of autonomous weapons systems.

A national ban on the use and production of autonomous weapons systems by any Canadian actor, along with leadership internationally, would be in line with both a feminist foreign policy and the emphasis the government has put on AI as a future driver of the Canadian economy. The UN talks on the topic resume in March, and with the technology rapidly evolving, it’s time for Canada to get serious about banning autonomous weapons systems.
