The May 15 Paris summit led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, which sought to eliminate terrorist and violent extremist content online, concluded with the promulgation of a pledge — the “Christchurch Call” — signed by representatives of 17 countries, the European Commission, and eight tech companies (Amazon, Dailymotion, Facebook, Google, Microsoft, Qwant, Twitter, and YouTube). Conspicuous by their absence were the United States, China, Russia, and technology-leading countries such as South Korea and Finland. And while the major US tech companies were represented, it was generally by second-tier management rather than by their high-profile founders.
Delegates gather during a “Tech For Good” Summit to launch anti-terror “Christchurch Call” in Paris, May 15, 2019 – via REUTERS
What was pledged?
Governments committed to five objectives:
- Counter the drivers of terrorism and violent extremism by strengthening the resilience and inclusiveness of their societies to enable them to resist terrorist and violent extremist ideologies, including through education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality (emphasis added).
- Ensure effective enforcement of applicable laws that prohibit production or dissemination of terrorist and violent extremist content.
- Encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.
- Support frameworks, such as industry standards, to ensure that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.
- Consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content . . . [by] awareness-raising and capacity-building activities . . . development of industry or voluntary frameworks . . . regulatory or policy measures consistent with a free, open and secure internet and international human rights laws.
For their part, the tech companies committed to:
- Take transparent . . . measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination.
- Provide greater transparency in the setting of community standards or terms of service, including by outlining and publishing the consequence of sharing terrorist and violent extremist content [and] describing policies . . . for detecting and removing [it].
- Enforce those standards . . . by prioritizing moderation . . . closing accounts where appropriate . . . [and] providing an efficient complaints and appeals process.
- Implement immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming.
- Implement regular and transparent reporting.
- Review the operation of algorithms . . . that may drive users towards and/or amplify terrorist and violent extremist content. . . . This may include using algorithms and other processes to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives (emphasis added).
- Work together to ensure cross-industry efforts are coordinated and robust . . . by sharing knowledge and expertise.
Notably, the day before the summit, Facebook announced that users sharing “violating content” such as a statement from a terrorist group without context would be blocked from using the platform for a set period (for example, 30 days). This would include both advertisers and general users.
Together, governments and the tech companies committed to working collectively to:
- Work with civil society to promote community-led efforts to counter violent extremism . . . including through the development and promotion of positive alternatives and counter-messaging (emphasis added).
- Develop effective interventions . . . to redirect users from terrorist and violent extremist content (emphasis added).
- Accelerate research into the development of technical solutions to prevent the upload of and to detect and immediately remove terrorist and violent extremist content online, and share these solutions through open channels.
- Support research and academic efforts to better understand, prevent and counter terrorist and violent extremist content online.
- Ensure appropriate cooperation with and among law enforcement.
To these were added eight further initiatives to collaborate, reinforce, and expand the range, effect, and delivery of the pledge.
At first glance the pledge appears, as intended, to be a positive example of collaborative negotiation toward a self-governing regime. For the most part, the tech companies have, if not avoided explicit regulation, at least apparently delayed it. (Although the harsh criminal penalties for breaches of similar obligations imposed in Australia’s recent legislation call into question that signatory’s commitment merely to “ensure,” “encourage,” “support,” and “consider” when it has already explicitly regulated.)
A deeper examination, however, leads to a more worrying conclusion. While governments have agreed to “ensure,” “encourage,” “support,” and “consider” a range of difficult-to-enforce aspirational goals, the tech companies have agreed to take a number of concrete, observable, and measurable steps on which it will be much easier to hold them explicitly accountable. In the bargaining of the summit, they have agreed in effect to act as the agents of the governments in delivering their political objectives of countering “distorted terrorist and violent extremist narratives” and engaging in “the fight against inequality.”
Rather than simply removing offending content, as they might be required to do for pornographic or addictive content, they have been recruited to promote community-led efforts to counter violent extremism through the “development and promotion of positive alternatives and counter-messaging” and to “redirect users from terrorist and violent extremist content,” that is, to develop and distribute government-sanctioned propaganda. This is further reinforced by the tech firm-specific undertaking to use “algorithms to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives.”
While on the one hand it might appear laudable to replace negative content with positive, on the other it recalls past government conscription of the mass media of the day to align the messages citizens receive with those the powers that be wish them to hear (or not hear, as the case may be). These include, for example, the endeavors of the British establishment in 1854 to prevent newspaper publication of reports from the Crimean War front by William Howard Russell, the acknowledged first-ever war correspondent, and the obligation on the French press during the Franco-Prussian War in 1870 to obtain accreditation from the official “advertisement office” (bureau de publicité) before publishing war correspondents’ material. (Arguably, publication of material provided by the government’s own media agents, the Army Monitor and the Navy Monitor, was much preferred.) Similar constraints imposed by governments of all flavors governed media coverage of virtually all wars until Vietnam. (Who can’t recall the iconic Pulitzer Prize–winning “Napalm Girl” photograph from that war and its effect on public opinion?)
Ironically, it was the widespread distribution of shocking stories and photographs of actual atrocities that led to real and immediate changes in social attitudes and to the pacifist values prevalent today. While there was temporary anger at the media carrying the messages, this generally did not last. The consequences for governments seeking to manage the message, however, were not always so benign. As The Times writer Robbie Millen said, “Russell and his editor, John Delane, were heavily criticized by the government, and Queen Victoria, for the Crimean dispatches. But, pun only slightly intended, they stuck to their guns. Result: the government fell and The Times did not.”
It behooves both the governments and the tech companies engaged in the Christchurch call pledge to demonstrate that their agreement is not just another exertion of government control over the freedom of the press (and other publishers) to prevent citizens from seeing the world in all its ugly (and sometimes distressing) reality by directing them instead to a preferred sanitized message. The current wording, unfortunately, provides no such assurance.
This article was first published by the American Enterprise Institute.