By Paul Spoonley
We can expect a lot of sound and fury as we start to debate the hate speech provisions that will be aired soon. But hopefully, we can also have an informed debate about the nature of hate, including what occurs online, and the impacts of this on communities, especially those which have been targeted by hate.
A Cabinet paper from last December has signalled proposed changes to the laws concerning hate speech.[i] The aim is to “strengthen the protections against hate speech”. The proposals include a slightly altered definition of what constitutes hate speech, an expansion of the categories to be covered (religious and faith communities, age, disability and LGBTQ+ are added to the existing grounds of ethnic/racial and national origin), moving the offences to the Crimes Act (from the Human Rights Act) and providing for a new range of penalties.[ii]
Cue concerns about free speech. Already, a number of groups and political parties are voicing their strong opposition to these changes in the name of preserving free speech.[iii] As the new legislation makes its way through the House in the middle of this year, it will face considerable opposition and concern. And the concerns will often be couched in unambiguous and oppositional terms. As The Detail on Radio NZ said by way of introduction to a recent discussion on these changes, “…any significant change around hate speech involves censoring free speech – the cornerstone of liberal democracy”.[iv]
Does it? Again, cue a number of phrases that will be invoked: cancel culture, de-platforming, political correctness, state censorship, or just simply censorship, or an attack on democratic values. This is accompanied by claims that offensive speech, including speech which is not intended to be offensive, will be criminalised. Some have argued that punishing someone, such as Brian Tamaki, under a putative toughened hate-speech law “would not only fail to make him reconsider, but would also earn him more followers”.[v]
In reality, speech is already constrained in a variety of ways in New Zealand. Apart from defamation and libel laws, there is a range of acts and bodies that either monitor what can – and cannot – be said or provide an opportunity for mediation or complaint. The New Zealand Bill of Rights Act (1990) affirms free speech but operates alongside the guidance and restrictions imposed by the Human Rights Act (1993), the Harmful Digital Communications Act (2015), the Broadcasting Act (1989) and the Films, Videos, and Publications Classification Act (1993). Then there is the Broadcasting Standards Authority, Netsafe, the Human Rights Commission, the New Zealand Media Council and the Office of Film and Literature Classification. And the Sentencing Act allows for hostility (“that the offender committed the offence partly or wholly because of hostility towards a group of persons who have an enduring common characteristic such as race, colour, nationality, religion, gender identity, sexual orientation, age, or disability”)[vi] to be considered as an aggravating factor, although this provision is seldom used.
I am sure that I have missed some but you get the point. There are acts and bodies that already constrain free speech. So why do we need to consider further legislation?
One answer is that the existing legislation is out of date, especially the Human Rights Act in relation to the prohibited characteristics that are grounds for a complaint. The second is that the context in which “speech” occurs has changed. The internet has altered the context, tone and reach of speech. The internet has enormous upsides, but what has also become clear is that there is now far more opportunity to offend and cause harm. This is graphically rehearsed in the Royal Commission report into the Christchurch massacre; the awful events and the Royal Commission’s findings have added urgency to the discussion of what can and should be done, including on the issue of hate speech.[vii] So we are back to the Cabinet paper and the proposals to change our hate speech laws.
There are certainly aspects of the legislation which will require close scrutiny. There is always the issue of how to define hate speech. Some countries have functioning hate speech definitions that deserve attention; Canada, for example, is worth considering. There, hate speech is defined in 1985 legislation as (a) an incitement to genocide; (b) the public incitement of hatred; or (c) the wilful promotion of hatred. The legislation helpfully provides a number of defences or exemptions.[viii] A Canadian Supreme Court ruling clarifies that harm needs to be established; it is not enough that the views are “repugnant and offensive”.[ix] And the legislation is only invoked in the case of “extreme manifestations”.
Then there is the threshold that needs to be breached before something can even be labelled as “hate speech”. That threshold was tested in relation to the existing legislation by Wall v Fairfax, where the High Court ruled that the cartoons in question were insulting but that the requirement to show that they would “excite hostility and contempt” was not met. The “severity threshold” is one mechanism for preserving robust speech while still identifying the speech that is unacceptable and harmful.
Another key matter is who makes the decision, not only about the penalties but about what should be ruled to be hate speech. Given that it is proposed that the matter be moved to the Crimes Act, it would be a decision for the courts. I am in two minds about this. I have sufficient faith in our judicial system that it will review the evidence and come to a decision based on that evidence and the guiding legislation. Equally, criminalising hate speech without alternative, non-judicial approaches seems rather heavy-handed.
Next, we come to the vexed questions raised by online vilification and hate. Organisations that monitor online hate such as the Anti-Defamation League (ADL)[x] and the Southern Poverty Law Center[xi] have noted the escalation in extremist and hate material since 2015-16. At one point, the ADL reported that the spike in antisemitic material in 2017 was the highest on record, with new expressions, mostly online, occurring every 83 seconds. And the rise – and rise – of Islamophobic material has been a feature of the internet over the last two decades, again having increased significantly since 2015-16. We lack good data for New Zealand, but note the UK evidence provided by Tell MAMA – and while you are on their website, have a look at their report on the Christchurch massacre.[xii] COVID has only added to the volume of abuse that is occurring.
In 2018, the Australian and New Zealand authorities with responsibility for monitoring what occurs online met in Auckland, and their first statement (of four) was that online abuse and harassment “have a wider impact than previously understood”.[xiii] There is now good evidence of a direct relationship between online hate and real-world consequences, such as attacks on certain individuals or groups because of their ethnicity or faith. The research by Karsten Müller and Carlo Schwarz (“Hashtag to hate crime”)[xiv] shows the connections between online material and hate crime. It is critical that this relationship is understood in a New Zealand context so that policymakers and other relevant agencies, including the courts, are informed about causality and about what triggers hate, both online and in its translation to everyday worlds.
Understanding the online world of hate (what prompts it, who participates, and how it translates into real-world action) is among the most challenging issues raised by hate speech. The Christchurch Call is an important but modest attempt to understand and regulate what is now an international ecosystem. The new legislation, and hopefully a range of non-legal initiatives, will seek to address this particularly problematic aspect of contemporary extremist ideologies and politics.
Others are looking to do something similar. This month, the US House passed the COVID-19 Hate Crimes Act in response to a spike in hate directed towards Asian Americans.[xv] In the UK, the Online Harms White Paper (December 2020) has led to the introduction of the Online Safety Bill, which is intended to address online abuse, including racist hate, while preserving democracy in a digital age.[xvi]
We learnt a very painful lesson in 2019. New Zealand was not an exception when it came to white supremacist-inspired hate and terrorism. We have had one prosecution for hate speech – in 1977 – when two members of the National Socialist Party were convicted in the Auckland Magistrates’ Court for antisemitic material under s25 of the Race Relations Act (1971).[xvii] The provisions were tested again in the Wall v Fairfax case noted above, where the High Court ruled that while the cartoon in question was “objectively insulting”, it did not meet the requirement that it would “excite hostility and contempt”.[xviii]
When the debate begins in earnest, we should aim for more than sound and fury: an informed discussion of the nature of hate, including what occurs online, and its impacts on the communities that have been targeted. We need to know, following the Royal Commission’s insights, what radicalises members of our community, what can be done to minimise the possibility of such radicalisation, and how to de-programme those who have been recruited to extreme political views. We need some broad agreement about how to protect freedom of speech, and about what falls outside the boundaries of acceptability and will cause harm. We need to agree on who adjudicates such matters. Above all, we need to declare that such speech undermines social cohesion and the mana and safety of the groups on the receiving end of such hate.
[v] Editorial, “Uncommon decency”, Listener, 20 March 2021.
Distinguished Professor Paul Spoonley is an Honorary Research Associate of the College of Humanities and Social Sciences at Massey University. He has been researching and writing about the radical right since the 1970s.
Disclaimer: The views expressed in this article reflect the author’s opinion and not necessarily the views of The Big Q.