“It has never been easier to get engaged than it is today,” Canada’s minister of democratic institutions, Karina Gould, told an afternoon crowd at the Chateau Laurier in Ottawa in mid-October. “Digital technologies empower us by offering multiple ways to connect,” she said.
She then offered a caveat: “The same digital technologies can be used toward some not so good ends, such as the spread of misinformation online, injected into the public discussion by those who masquerade as legitimate media sources or individuals.”
It was a succinct overview of the biggest public policy issue facing most democratic nations on the planet: How can policymakers promote legitimate engagement on the mega-platforms (Facebook, Twitter, and Google) while also ensuring that the institutions citizens rely on, such as governments and elections, remain free from manipulation? The quandary is complicated further by the fact that those same governments and public institutions are intertwined with the very tech companies that now appear to need reining in.
Governments have stepped in to regulate digital activities before. In Europe and North America, they have pushed to enact real-world consequences for online actions, notably by making arrests for inciting hate speech or exchanging child pornography. They have also waded into privacy debates, including in 2014, when the European Union’s top court ruled that people have the “right to be forgotten” online. And governments have been active in trying to eliminate terrorist propaganda and recruitment videos from online platforms like YouTube, an initiative given new life recently as the United Kingdom and France have agreed to pressure tech companies to remove “extremist material” within two hours of its posting.
Yet when it comes to discourse that may be inaccurate but technically legal (fake news, in other words), governments are in a tougher spot. How can those crafting public policy promote truthfulness in public discourse? Addressing that question means looking beyond any single election to how democracy operates in an era of big tech. And in that new context, what can policymakers actually do to control what happens online?
The threat that parts of the internet currently pose to governments should not be underestimated. In a report released in June, Canada’s Communications Security Establishment (CSE) summarized it this way: “Adversaries worldwide use cyber capabilities to target all three aspects of the democratic process (i.e. elections, political parties and politicians, and traditional and social media).”
By the time the CSE made that assessment, the idea that a foreign adversary might use social media in an attempt to disrupt the democratic process in Canada was already a familiar one. The year has been dominated by news that the 2016 US presidential election may have been manipulated by misinformation planted by foreign operatives — most likely Russian — that was subsequently shared widely on social media, and in particular on Facebook, the world’s largest social media platform.
Addressing this challenge from a public policy perspective is not without precedent, but the difficult problems that governments face when controlling other discourse online, such as that related to terrorism, are only magnified when it comes to misinformation.
In June, the Open Rights Group, which advocates for privacy and free speech online, questioned the joint UK-France effort to force tech companies, under threat of fines, to clamp down on terrorist material. On the Open Rights blog, Ed Johnson-Williams suggested that concern for their corporate reputations would lead the companies to adopt “hair-trigger, error-prone algorithms” that will wind up removing “unobjectionable content.”
Indeed, it is a challenge, says Elizabeth Bodine-Baron, an engineer and co-director at the RAND Center for Applied Network Analysis and System Science. When it comes to applying that same technique of algorithmic trawling to the newer threat of misinformation, or so-called “fake news,” the problem of false positives is compounded.
“The fact is, when you’re looking at something broad like fake news or something like that, it can be on any subject, it’s no longer just this single ‘promoting terrorist thought or ideology’, which is difficult enough in and of itself,” says Bodine-Baron. “You’re talking about applying it to a vast space of different topics and there’s no single marker for, ‘this is fake.’”
And in these cases, the final determiner of what is objectionable may not necessarily be a government or public policymaker, but instead the tech platform hosting it — pulling private tech companies even closer to the realm of politics than they already are.
In September, Facebook CEO Mark Zuckerberg announced the social media platform will soon allow users to see who paid for the ads that appear in their timelines, and also be able to see what other messages that advertiser is promoting elsewhere.
The move followed the revelation that Russian operatives had purchased at least US$100,000 worth of advertising on the platform during the US election cycle. It also comes on the heels of Facebook announcing, in April and September, that it had shut down tens of thousands of fake accounts on its network in France and Germany, respectively, prior to national elections. Fake accounts are thought to amplify fake news and misinformation via fraudulent ‘likes’ and shares, thereby causing those posts to go viral.
But tech companies are becoming mixed up in politics in other, more confusing ways, too. In 2016, for instance, Facebook even provided political advertisers with a demographic breakdown of the US electorate, according to documents revealed by BuzzFeed. Such a document could have two possible uses: to target legitimate political ads, or to sow discord based on the preferences and values it identified.
Writing at BuzzFeed in October, Daniel Kreiss of the University of North Carolina at Chapel Hill and Shannon McGregor of the University of Utah examined the role that tech-company employees play when embedded within political campaigns. Facebook, Twitter, and Google all “play active roles in electoral politics,” they wrote.
All three companies offer their employees’ services to campaigns, advising them “on everything from the content of their ads and other communications to the specific groups they might benefit from targeting, and how best to reach them,” Kreiss and McGregor wrote, based on their research and interviews. And the success of those relationships is later used for corporate branding. Facebook, for instance, actively promotes its role in electoral wins as an example of how effectively it can reach audiences, as it did following the UK Conservative party’s majority victory in 2015.
Thus, while Facebook, Twitter, and Google monitor and crack down on misinformation across their networks, and attempt to ensure search results lead to valid sources, in the frenetic environment of an election they may occasionally be helping political campaigns create advertisements and messaging that could, if re-contextualized and shared by someone else, fuel that very same misinformation.
Political ads are just the start. While ads registered by political parties may be controlled, the more pressing issue may simply be how much attention ads receive, and how much of that attention the platforms control.
While registered political parties or activist groups may openly declare themselves as such, and are bound by domestic law to identify the messages they place on social media, many others will remain in the shadows, disseminating political-style messages or simply encouraging divisive language via groups focused on social issues. There is reported evidence to suggest Russian operatives did just that during the run-up to the 2016 US presidential vote, creating and running activist groups, including a Black activist group that, according to CNN, “regularly shared content intended to stoke outrage.” Two of those groups, unbeknownst to the members who showed up, even staged competing rallies in Texas.
In the United States, there is now talk of a heavy-handed government approach: treat the major tech companies like any other communications corporations and regulate them the way television and radio were in the last century. “There are growing efforts in Washington to do something about big tech,” Farhad Manjoo reported at The New York Times. But the big five companies (Apple, Amazon, Facebook, Google, and Microsoft) are so big, and so deeply enmeshed in so many facets of life, the economy, and the government itself, that something akin to trust-busting may prove impossible, even if a politician were willing to attempt it. And that sort of regulation would be aimed at the platforms, not at users.
Speaking to the challenges of social media in October, Gould told the Ottawa crowd that “when it comes to foreign influence, there are tough laws in place today that apply to third party spending and election activity,” and that “it is illegal for foreign entities to try and influence the outcome of an election.” It is illegal in the US, too.
Third parties in Canada do indeed have to make their election communications known to Elections Canada, or else face penalties. Elections Canada considers online messages advertising if they promote a registered candidate or party, or take a position “on an issue with which a registered party or candidate is associated,” and “have, or would normally have, a placement cost.” However, other messages (like personal political views) don’t fall into this category. That includes most of what’s on social media. Elections Canada currently explicitly states in its rules for third parties that “messages sent or posted for free on social media platforms such as Twitter or Facebook” do not count as ads, and thus are exempt from the current third party advertising regulations. That includes group pages that have been created by third parties.
Parts of Canada’s laws could be improved to cope with new technology, says Elizabeth Dubois, an assistant professor at the University of Ottawa whose research focuses on digital political influencers. One step she suggests Ottawa could take is to change the voter contact registry rules adopted following the 2011 robocalls election scandal, in which some voters received calls providing misinformation about their polling stations. The rules could be “expanded to consider something like bot-networks being established to send information out very widely to Canadians to try to suppress the vote or give people incorrect information,” Dubois says. Requiring registered parties or groups to reveal whether they were using a bot network to amplify any messaging online would remove a grey zone, Dubois says, and “potentially point to additional actors”: that is, accounts that aren’t registered and are thus operating beyond the rules. But even that, she says, can’t be thought of as a response to misinformation itself, merely as a way to address how automated messages are generated to amplify it.
When Gould made her assessment about engagement and the power of digital tech, she was speaking at the launch of the Canadian Election Integrity Initiative. The initiative includes a “cyber hygiene guide” and training program for politicians to promote security, as well as a “cyber threats crisis email line” for people to use in the event of a suspected hack. It has been billed as a direct response to the Communications Security Establishment’s report released in the summer, which concluded, among other things, that Canadian politicians, political parties, and media “are more vulnerable to cyber threats and related influence operations than the election activities themselves.”
It is a Facebook-led and branded program.
For now, Gould, like many of her peers in governments around the world, is placing the onus for regulation on tech companies, rather than offering to make any preemptive legislative moves. “I think it is important for social media platforms to think critically about their current practices and how they can create spaces for informed public dialogue and the information we consume,” she said.
Though new electoral legislation is expected before the end of this year, it will likely deal mostly with addressing recommendations Canada’s former Chief Electoral Officer, Marc Mayrand, made following the 2015 election, and less directly with social media’s potential influence on future ones. There is no publicly available evidence to suggest Canada’s 2015 election was influenced in the way the US presidential election a year later was potentially altered by tech platforms and third-party actors. Ironically, if there had been, says one source inside the government with knowledge of the matter, it might have been easier to craft new legislation particular to Canada.
“Maybe if something had happened, we could have a better sense. Even if we can look around the world and know what’s going on, it’s hard to find people who have got a good knowledge on this stuff and help us predict a little bit about where we need to be,” the government source said, speaking on background in September. “It’s been difficult, it’s been very difficult.” And even if legislation does come? “I don’t think anybody at the end of the day [is] going to feel completely satisfied.”