Where do we draw the line on hate speech?

In 2009, I was encouraged by some friends at work to join a new social media platform called Twitter. I remember watching a short promo video and hearing about how this site allowed people all across the world to connect and speak freely about whatever came to mind — whether about our favorite sports teams or the most important social issues of the day. But as the platform grew in users and influence in the public square, real challenges emerged about how to navigate violence, misinformation, and even hate speech online. And as a long history of U.S. jurisprudence illustrates, hate speech has been notoriously difficult to define, both because the term resists precise parameters and because of the robust protections for free expression and religious freedom against heavy-handed government overreach.

While these problems are not limited to Twitter specifically, the type of users the platform attracts and its enormous influence in public discourse have made it ground zero for many of the debates over free expression and content moderation. Last year, two prominent conservative pundits, Allie Beth Stuckey and Erick Erickson, were both temporarily suspended by Twitter for violating the platform’s rules on hateful conduct, specifically concerning gender and gender identity issues. Both users had access to their accounts limited for 12 hours, during which they were unable to post new tweets, like posts, or retweet other accounts.

Transgenderism and hateful conduct

Stuckey and Erickson both tweeted about Laurel Hubbard, the first openly transgender athlete in history to compete in the Olympic Games. Hubbard, who was born a man, recently represented New Zealand in the women’s weightlifting competition in Tokyo. Both pundits were suspended for tweeting that Hubbard was still a man and that, even though Hubbard fell short in the competition, it was not fair for the athlete, who is a biological male, to compete against women during the games.

Neither tweet advocated physical violence against Hubbard, nor did either attack or threaten the athlete in any way. Yet both users were suspended for violating a hateful conduct policy that defines hate speech in the broadest of terms. Twitter defines hateful conduct in its content moderation policies by stating,

“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

The company goes on to say, “We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals with abuse based on protected categories.” But if you dig deeper into its policies, it becomes clear that the company has an incredibly broad understanding of what constitutes hateful conduct, one that can easily extend to any speech that a user simply does not like or that makes someone feel uncomfortable.

Defining hate speech

While many technology companies refer to international norms when defining controversial topics — including the nature of human rights — it should be noted that hate speech is often left undefined in legal terms because of the deep tension between prohibiting hate speech and protecting free expression. The U.N.’s own plan of action on hate speech from May 2019 makes this clear by saying, “There is no international legal definition of hate speech, and the characterization of what is ‘hateful’ is controversial and disputed.” While the U.N. leaves hate speech undefined, it clearly desires robust protections against it, calling it “a menace to democratic values, social stability and peace” that “must [be] confront[ed] . . . at every turn.”

Similarly, there is no legal definition of hate speech in U.S. law, as the Supreme Court has routinely affirmed that such speech is protected by the First Amendment. A notable example is Snyder v. Phelps (2011), a case concerning hate speech and Westboro Baptist Church. According to the American Library Association, “under current First Amendment jurisprudence, hate speech can only be criminalized when it directly incites imminent criminal activity or consists of specific threats of violence targeted against a person or group” (emphasis mine).

Defining hate speech is a perennially difficult issue throughout society, especially with the rise of online speech through social media platforms. There are ongoing debates in society and the academy over what actually constitutes hate speech and whether the label should be limited to speech that incites or instigates physical violence or harm. In the case of Twitter, the company has drawn a clear line by defining hate speech broadly, a definition that necessarily infringes on free expression and religious freedom concerning some of the most contentious issues of our day — namely human sexuality and marriage.

Most people would tend to agree that the initial categories laid out by Twitter, such as threats of physical violence, “wishing, hoping or calling for serious harm on a person or group of people,” and “references to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims,” fall under good-faith content moderation and should be championed by all. Christians, in particular, should affirm many of these guidelines because of our belief in the innate value and dignity of all people as created in God’s image and the freedom of conscience that flows from our understanding of the imago Dei (Gen. 1:26-28). But when hate speech is broadened to include speech that makes one feel uncomfortable or that one simply does not like, we have set a dangerous precedent for public discourse.

Free expression and public discourse

Twitter claims in its content moderation policies: “Free expression is a human right – we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.” But this lofty goal of free expression is actually stifled, and in many ways completely undermined, by promoting some speech at the expense of other speech deemed unworthy of public discourse, even if that speech aligns with scientific realities that are taught and affirmed by millions of people throughout the world — including, but not limited to, people of faith.

As I wrote in response to a similar situation over transgender ideology and free expression, civil and nonviolent disagreements over the biological differences between a man and a woman simply cannot — especially for the sake of robust public discourse — be equated with hate speech or hateful conduct. And any attempt to create and enforce these types of broadly defined policies further erodes the trust that the public has in these companies and betrays the immense responsibility they bear in providing avenues for public discourse and free expression.

In a time when there is already a considerable amount of distrust in institutions, governments, and even technology companies themselves, ill-defined and broad policies that seem to equate historic and orthodox beliefs on marriage and sexuality with the dehumanizing nature of real hate speech and violence only widen the deficit of trust and increase skepticism over the true intentions behind these policies.

Building on the legal boundaries for defining hate speech, our society must be able to have healthy dialogue about these contentious issues. The best way to do that is to champion free expression and religious freedom for all, not just for those we agree with or even like. Free expression does not mean that we all must agree on these particular issues, but it does mean that everyone is able to speak their opinion freely and without fear of being cut off by those who oversee these platforms.

Whatever you may think of Stuckey’s or Erickson’s beliefs, we should all be able to agree that these broadly defined hateful conduct policies are dangerous to free expression and our public discourse. We need more, not less, dialogue and engagement on these contentious issues. These issues will not simply pass away, because God’s design for human sexuality is central to the church and society. These content moderation policies must be amended to actually stand for free expression for all people, not just those with whom a company or even our society may agree.