Consequences of the Christchurch Call: Social engineering by internet platforms?

By Dr Bronwyn Howell
When seventeen countries, the European Commission, and eight tech companies signed the “Christchurch Call” pledge in May, it was heralded as a triumph in the fight against the spread of violent extremist content online. Importantly, the Call, led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, was seen as an exemplar of voluntary engagement by the parties concerned, eschewing the somewhat fraught historical path of media censorship by legislative fiat.

The signers (including Australia and New Zealand, but not the United States, China, or Russia) committed to a number of measures to “reduce the use of internet services to disseminate violent extremist content.” Notably, the tech companies committed (inter alia) to review the operation of algorithms that drive users to and/or amplify terrorist and violent extremist content, which may include “using algorithms to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives.”

What has happened since?

Four months on, concrete steps are being taken to deliver on the various commitments. Some countries have chosen not to rely on voluntary agreements and have legislated regardless. For example, Australia and Singapore have made it a criminal offense for either individuals or hosting services to share “abhorrent violent material” online. By contrast, New Zealand, under the international spotlight as a consequence of Ardern assuming a leadership role, has taken a more nuanced, if sometimes less than transparent, approach. In the wake of the March 15 attack, the Chief Censor used existing powers to declare the gunman’s video and manifesto “objectionable,” making possession of them illegal. However, no new laws governing internet content have emerged from Ardern’s many closed-door meetings with tech company representatives, even though most of those meetings have been documented in post-meeting social media postings.

In this context, Facebook’s September 17 announcement constitutes a significant development. The company has updated its definition of dangerous individuals and organizations, and is extending to Indonesia, Australia, and New Zealand its initiative to use algorithms to redirect users who search with terms associated with white supremacy toward resources focused on helping people leave hate groups behind.

Since March, US search queries on Facebook using terms algorithmically determined to be associated with white supremacy have been redirected to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups, and outreach. From September 17, this arrangement has been extended to users in Indonesia and Australia, who will be redirected to ruangobrol.id and EXIT Australia respectively, under a partnership between Facebook and Moonshot CVE. The partnership will build on Moonshot’s data-driven approach to disrupting violent extremism, developing and refining the tracking of these efforts to connect people with information and services intended to “help them leave hate and extremism behind.” Facebook is currently pursuing potential collaborations in New Zealand.

From censorship to social engineering

While there may be considerable popular support for Facebook’s use of algorithms in this manner, the move raises just as many questions about the legitimacy of attempts to use platforms to “socially re-engineer” or “reprogram” individuals. To be clear, this is not simply a matter of placing a banner ad for a white-supremacy re-education organization; it is specifically connecting the requestor with the service. These sorts of involuntary redirections have typically been the purview of government and social agencies, under the aegis of a social contract in which the “engineers” are accountable to society as a whole (or to a community of users), separate and distinct from the accountabilities to each individual arising from the contractual agreements governing day-to-day transactions.

Because the unilateral redirection of an inquiry by Facebook necessitates overriding the individual requestor’s commercial transaction, and hence a “taking” of the expectation that the original request would be fulfilled, one might expect that such actions would only be possible following a transparent, explicit, and accountable process agreed to by the community of users collectively. For nation-sponsored interventions, this is usually undertaken through established, constitutionally endorsed societal processes. Thus, states can intervene only by use of established legislative, contractual, or rule-making processes that typically require a number of steps to ensure that the exercise of the “taking” is widely agreed to by the community affected by it.

This therefore raises the question of where Facebook draws its mandate to intervene, diverting requests in different ways for transactions originating from users in different nation states. The different treatments imply that different “social contracts” have been entered into with the separate “communities of users” in each state. If the sovereign governments of these countries have entered into contracts with Facebook to undertake these activities, then questions must be asked of those governments as to where they draw their mandate to intervene. If there are no contracts, then has Facebook itself presumed to act as if it were the state?

Is renegotiating the social contract a step too far?

These questions are not idle musings. Nation states actively participate in socially motivated interventions in a great many areas. While countering white supremacy may be the current cause célèbre, what is to stop Facebook (and others) from choosing to respond to other social concerns in specific locations or amongst specific communities, either unilaterally or in collaboration with nation state governments, without necessarily following due consultative processes? How about redirecting smoking-related requests to smoking-cessation programs in jurisdictions where this is a pressing concern, or connecting requestors of anti-vaccination information to health providers charged with vaccinating the population?

The time is right for an open debate on these matters. Facebook has moved very far indeed from the calls, arising in the wake of the March 15 massacre, to remove or block access to offending content. Censorship is one thing; social engineering is another. Enacting the part of the Christchurch Call that actively redirects traffic violates many of the principles of individual sovereignty (within the confines of state laws) that have so far prevailed in internet-mediated transactions. Is this really what societies and communities want? And can social engineering succeed if those seeking white supremacist (or other non-approved) content simply switch to services that opt not to redirect?
