Ardern’s Christchurch Call is a new way to monitor and censor speech

What makes this undertaking so complicated is that the initial draft of the Christchurch Call contained no definition of “violent, extremist content.” How can content be regulated if there is no consensus on what it is?

Libby Emmons, Brooklyn, NY

Prime Minister Jacinda Ardern of New Zealand has teamed up with French President Emmanuel Macron to combat the proliferation of violent content online. Their agreement, which Ardern has named the Christchurch Call, is a response to the massacre in Christchurch, New Zealand, on March 15 of this year. The summit in Paris is a means to get tech companies and international governments on board with deterring or removing expressions of extremism on social media platforms.

The massacre was live streamed as it was carried out. The violence was broadcast on Facebook, and despite attempts to pull the video, it was shared over 1.5 million times on the platform. New Zealand police took to arresting those who were sharing the video, and still it proliferated. People logging on to the platform saw the video without intending to, because default auto-play settings were enabled.

Calls to mental health hotlines in New Zealand increased substantially in the days after the attack, many from people who had seen the video. Both the massacre and the video scarred the nation, and Ardern is right to be concerned for her public. The legislature banned the kind of weapons used to carry out the attack, and Ardern now wants to ban whatever it takes to stop the proliferation of extremist content.

Ardern took to Facebook, streaming a video of her own to explain the Christchurch Call and what she hopes it will achieve.

I’m on my way to a gathering of world leaders, and tech companies, and civil society in Paris to promote something called the Christchurch Call. Now, I don’t need to tell you that here in New Zealand, of course, we were left reeling after the 15th of March terrorist attack. We’re not the first country to experience an attack like that, but what happened in Christchurch was unique in one particular way. This was a terrorist attack that was designed to go viral… We’re left therefore, with I think, a sense of responsibility, a duty of care, to try not just to prevent terrorist attacks like this ever happening on our soil again, but to also try and prevent sharing of terrorist content, of extreme violent content online. That’s what the Christchurch Call is all about.

Now we have a couple of principles and starting points that are really important to New Zealanders, that we must maintain a free, and open, and accessible internet. That has to be protected. That’s why what we’re asking, tech companies and governments to come together, is actually something we can all agree on: stopping extreme violent content and terrorist content online… social media companies, these platforms, they’re global, and so the response needs to be global. And so secondly, if we truly want to prevent the proliferation of this kind of content, we need tech companies on board, and we need technological solutions to prevent the proliferation in the first place.

That’s why the Christchurch Call will involve tech companies and government around the table. It is a specific set of actions, both the government, and that we’re asking tech companies to implement, themselves… We have the reluctant duty of care, a responsibility that we’ve now found ourselves holding, and that’s what we will be taking forward, this week, in Paris.

European democracies have great latitude in controlling social media and protecting themselves against international attack. That makes their job much different from that of the United States government. The US can’t sign on to the Christchurch Call because the First Amendment leaves the federal government no legal way to restrict speech.

Several countries, “including Britain, Canada, Jordan, Senegal, Indonesia, Australia, Norway and Ireland,” have agreed to sign onto the accord, but how each country will implement these prohibitions will be up to its own legislature to sort out. The same goes for the tech companies. What makes this undertaking so complicated is that the initial draft of the Christchurch Call contained no definition of “violent, extremist content.”

How can content be regulated if there is no consensus on what it is? What does it look like? Certainly the horrific video of the March 15th massacre falls under the heading of “stuff no one should share on social media,” but what about a satire about jihad that is shot similarly? There has been so much disagreement of late about just what content is harmful, whether or not speech is violence, and who should be banned from social media.

This is a question that continues to pop up, no matter what genre of content is under scrutiny. That hateful content proliferates online and across social media platforms is not in question, but how to regulate it, and what standards should be in place for that regulation, are questions that have not been answered.

Social media companies cannot hire enough people to monitor content in real time, so they rely on user reporting. User reporting carries its own biases and ill intent, along with users simply disagreeing about what constitutes an offense. Social media companies are working toward a model where AI can pick up most of the slack, but every attempt so far has been a failure. They end up with auto-generated celebration videos, complete with balloons and confetti, thanking jihadis for spending another year on Facebook.
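To make the problem with report-driven moderation concrete, here is a minimal sketch of the kind of report-count triage rule a platform might use. Everything in it is hypothetical, invented for illustration (the Post class, the needs_human_review function, the threshold of five); no platform publishes its actual rules. The point is only that a raw report count can be gamed in both directions.

```python
# A hypothetical report-triage rule, not any platform's real system.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    reports: list = field(default_factory=list)  # IDs of users who reported

REVIEW_THRESHOLD = 5  # invented cutoff for this sketch

def needs_human_review(post: Post) -> bool:
    # Naive rule: escalate once enough distinct users have reported the post.
    return len(set(post.reports)) >= REVIEW_THRESHOLD

# Six coordinated accounts brigade a benign satire clip: it gets escalated.
brigaded = Post("satire-clip", reports=["u1", "u2", "u3", "u4", "u5", "u6"])
# A violent clip circulating among sympathizers draws only two reports.
violent = Post("violent-clip", reports=["u7", "u8"])

print(needs_human_review(brigaded))  # True  -- a false alarm
print(needs_human_review(violent))   # False -- a miss
```

Whatever the cutoff, the rule measures who reported, not what was posted, which is exactly the bias the paragraph above describes.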

AI and machine learning algorithms look like a panacea, but they are not. These algorithms are made by human beings who are trying their best and don’t know what the results will be. If leaders cannot proffer a definition of extremist content precise enough to enshrine in the Christchurch Call, how will programmers be able to give an algorithm parameters defined well enough to sort user data effectively, make no mistakes, and still restrict violent content?
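A toy illustration of why “make no mistakes” is unachievable: whatever score threshold a hypothetical classifier uses for a label like “extremist content,” moving the threshold only trades one kind of error for the other. The scores and labels below are invented for the example, not drawn from any real system.

```python
# Invented (score, actually_violent) pairs from an imagined classifier.
scored_posts = [
    (0.95, True), (0.90, True), (0.85, False),  # e.g. satire scored as violent
    (0.60, True), (0.40, False), (0.35, True),  # e.g. violent post scored low
    (0.20, False), (0.10, False),
]

def errors_at(threshold):
    """Count both error types if everything at or above threshold is removed."""
    false_positives = sum(1 for s, v in scored_posts if s >= threshold and not v)
    false_negatives = sum(1 for s, v in scored_posts if s < threshold and v)
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.8):
    fp, fn = errors_at(t)
    print(f"threshold={t}: {fp} lawful posts removed, {fn} violent posts kept up")
```

Tighten the threshold to protect speech and violent clips stay up; loosen it to catch them and the satire gets swept in. The parameters encode a definition of extremism whether or not leaders ever supply one.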

Instead of teaming up with tech companies to prohibit speech, harmful or otherwise, or encouraging social media platforms to adjust their algorithms toward showing less content, Ardern should be calling for greater transparency. Organizations like the Foundation for Responsible Robotics and OpenAI have been urging transparency as robotics and AI extend into so many facets of our lives, and calling for ethical considerations to trump economic ones.

When consumers of AI and social media know how the products are being driven, they can make their own decisions, and not have governments make determinations on their behalf. Ardern wants desperately to protect her people from disturbing images of massacres, but the hard truth is that she can’t. Individuals, global citizens, will need the tools to protect themselves, if that is what they choose to do.
