
Social media and defamation

What libel law means online — and what remedies exist

BY RILEY SPARKS

Harassment, abuse and defamation are part of the online landscape, said Mel Woods, audience engagement editor at Xtra Magazine. 

With much of the harassment they see online, their approach is to ignore it as much as possible. “It’s such a cliché being like, ‘Don’t give them what they want,’ but it’s the biggest piece of advice that I give to people,” Woods said. 

Withdrawing entirely isn’t the solution, they note — they and the publication as a whole still want to be on social media, where their reporting can provide an opportunity for good-faith discussions and for people to ask questions and talk about issues that matter to them.

This guide is financially supported by The Law Foundation of Ontario. The Canada Press Freedom Project is solely responsible for all content.

With the exception of Twitter (X) — which the publication abandoned in 2024 after the site became a haven for disinformation and extremism — they’ve been able to cultivate a largely supportive community on other social media platforms, notably TikTok, where the publication has just under 13,000 followers, as well as BlueSky. Xtra’s focus — 2SLGBTQ+ stories — often attracts some of the most toxic conversations online, but Woods says that so far, with some exceptions, the publication’s positive social media community has helped keep misinformation and other harmful content at bay and allowed for genuinely thoughtful discussions and arguments. 

“If we have a video about book bans in Alberta and somebody says, ‘Kids shouldn’t be looking at porn in comic books,’ a bunch of people will be in the comments being like, ‘That’s not what’s happening, bro.’ And that’s healthy,” Woods said.

But misinformation and day-to-day abuse are just one part of the online discourse, they noted. “It’s kind of an iceberg,” Woods said. Above the surface: comments on a publication’s social media, or posts which directly name or tag journalists — and below, often much larger and more toxic conversations happening on private or less accessible parts of the internet, or in places the people named don’t often visit. 

“If you’re not on those platforms or you’re not looking for it, you might not even realize the level of … defamation or disinformation or bad actors spreading ungenerous readings of your content in places that you just never even go,” they said. 

Not all online hostility amounts to defamation, however. Canadian defamation law focuses on false statements of fact that harm a person’s reputation — not every insult, pile-on or abusive interaction.

And then there are more high-profile, targeted attacks. In September, activists in the U.S. organized a doxxing and harassment campaign targeting people whom they claimed had “celebrated” the killing of far-right activist Charlie Kirk. The website, initially labelled “Charlie’s Murderers,” listed at least one Canadian journalist, and was widely described as defamatory. The campaign had widespread real-life consequences for many of those named, including firings and a flood of targeted harassment.

In a similar vein, HonestReporting Canada, a pro-Israel lobby group, organizes email and advocacy campaigns that journalists and media organizations have described as harassment. 

Through what it calls “action alerts,” the organization regularly directs hundreds or even thousands of form-letter emails at individual media workers — and in many cases directly to their editors or employers, which is what happened when the group went after Woods. 

HonestReporting alerts attack the reputation of the media workers and publications named, often labelling them or their work as antisemitic or supportive of militant groups or terrorism. Journalists who have spoken with the Canada Press Freedom Project about the experience of being named in these alerts have noted that the accusations often linger in search results online for years. 

Woods said they believe the aim of those campaigns is to pressure employers and editors, including by prompting complaints about individual journalists. Being targeted by the organization hasn’t had any professional consequences for Woods — but journalists at other publications have described facing pressure from editors and managers after being named in the group’s campaigns.

“When we talk about harassment and stuff online, we see the public face of it … That sort of stuff starts to have professional repercussions if you have somebody mobilizing hundreds and hundreds of emails to your manager saying that you did a bad job,” Woods said. 

“What you should really be vigilant about and care about is the stuff when it starts to bleed into the real world. The other stuff — everybody is best served by paying it no mind, and educating yourself,” they added. 

When media workers are targeted with defamation on social media and harassment campaigns start to threaten consequences offline, what legal options are available? Lawyers say the answer depends on whether the posts meet the legal definition of defamation, who published them, and what defences may apply.

Basics

The core principles of defamation law apply on social media as they do in print. But social platforms can change how quickly statements spread, who can be identified, and how remedies work in practice.

A defamatory statement in a newspaper will be just as defamatory on Instagram or TikTok. Equally, publishers on social media have access to the standard libel defences: truth, absolute or qualified privilege, fair comment and responsible communication.

But social media defamation cases face additional challenges: posters are often anonymous, and jurisdiction can become more complicated — both factors which may affect efforts to collect damages or enforce a judgment. There’s also the question of how much responsibility social media and other tech companies bear for speech on their platforms, which is unresolved in Canada. 

These and other factors can make it difficult for people defamed online to get any kind of effective remedy, explained Hilary Young, a defamation law expert and professor at the University of New Brunswick. Cases can be complex, expensive and slow, she said. 

Still, going to court can be an option to get defamatory statements taken down or to seek damages in cases where the poster or platform hasn’t responded to requests to remove them. 

In Canada, someone complaining about a statement which they believe is defamatory must prove three things — that the statement names or refers to them, that it was communicated to at least one third party and that it would make a reasonable person think less of the person mentioned. 

Canadian courts have said that mere insults or general abuse are not necessarily defamatory, Young noted — the key issue is whether the words convey a damaging allegation of fact. Still, she argues that when it comes to online defamation, the law sometimes doesn’t do enough to consider the often hostile, hyperbolic context of online communication, where people regularly make wild accusations that most reasonable people wouldn’t take literally. 

One possible solution, she noted, could be to apply a “serious harm” standard, which requires plaintiffs to prove that a statement could seriously harm their reputation. England and Australia have both done this; the bar is lower in Canada, where the law relies on the reasonable person test. At the same time, Canadian law includes defences and public-interest protections that can defeat weak claims early.

The Supreme Court of Canada held in 2011 that a bare hyperlink, on its own, is not ‘publication’ of the linked material for defamation purposes. But liability can arise where the surrounding text adopts, repeats or endorses the defamatory allegation. Courts look closely at context — including whether the person sharing the link is simply pointing readers elsewhere, or is effectively amplifying the allegation to a wider audience. 

On the other hand, if someone makes a defamatory statement and then spreads it to a larger audience by sharing a link to that statement on their own social media, in some cases courts have cited this as a reason for increased damages.

Anonymity and damages

Working out who to take to court is often one of the first issues with online defamation cases. Although anonymity can limit legal options, it isn’t a shield against lawsuits: anonymous posters can still be sued, and in some cases, a court may order a platform or website to provide information that could unmask people behind defamatory statements. 

That information can then be used to serve the person with a libel notice. In cases where the person can’t be identified, sending that notice as a reply to the defamatory post may be an acceptable option, Young explained. 

Even if the poster can’t be identified, or if they don’t respond to a lawsuit, a default judgment will likely help to get an order forcing a platform to take down the content, Young said — although damages are unlikely in such cases. Even with a judgment, collecting damages can be difficult if the defendant has limited assets or is outside Canada.

Damages are often an issue more broadly with online defamation cases, she noted: whether the poster is anonymous or not, it may be difficult or impossible to collect anything because, unlike traditional media companies or other professional publishers, posters on social media may not have the means to pay. 

Jurisdiction

With print media or traditional broadcast, deciding where defamation cases should be heard was usually straightforward: if a person in Montreal, for example, complained about something written in a city newspaper, that’s likely where they would argue their case. 

Online defamation cases often involve multiple jurisdictions, because statements on the internet — and social media in particular — are available in many different places at the same time. In general, Canadian courts have said that if a statement is read or accessed in Canada, the case may be heard here. But a court may also decide that Canada is not the most suitable jurisdiction, and the case should be heard elsewhere. 

To untangle those possibilities, courts ask whether there is a ‘real and substantial connection’ to the forum. This involves a number of factors, including whether the person suing has ties to the place (essentially, whether they have a reputation there that can be harmed) and whether they can show the defamation had an effect in that jurisdiction. Courts also look at whether the alleged defamer has assets in the jurisdiction, or whether another jurisdiction would be more convenient for witnesses. 

A key Supreme Court case on this question, Haaretz.com v. Goldhar, concerned a Canadian business owner who sued the Israeli newspaper Haaretz over its reporting on his management of a soccer team he owned in Israel. Although the reporting primarily concerned issues in Israel, the business owner argued that he should be able to sue in Canada because the article was also available and read by a small number of people there. In a narrow decision, the court disagreed, finding that Israel was the better forum.

The Supreme Court also considered internet jurisdiction in an earlier case involving Canadian newspaper publisher Conrad Black, who claimed he had been defamed by senior associates of the media company he founded and was later convicted of defrauding of an estimated $6.1 million. 

Black complained that his reputation had been harmed by press releases posted on the company’s website, which were later published by Toronto media. 

The company and most of the defendants were based in the U.S., which they believed was the right place to pursue the case; Black had also renounced his Canadian citizenship. Still, Black argued for it to be heard in Ontario because he had long-standing connections to the province, and because the press releases were accessed by people and published by newspapers there. Black and his former colleagues sued each other back and forth over several years; finally, in the 2012 Breeden v. Black case, the Supreme Court agreed with Black, finding that while the case could also be argued in the U.S., it could go ahead in Ontario because of the re-publication in Ontario newspapers, and Black’s reputation there.

Jurisdiction can also be an issue when it comes to trying to enforce a judgment or collect damages. In Canada, provinces will enforce defamation judgments from other provinces — but it gets more complicated when the defamer is in another country. This is a common problem with social media cases. For example, one recent case, Durand v. Higgins, touched on at least three different jurisdictions: the lawsuit, heard in Alberta, involved a Californian who had re-posted sexual assault allegations on Instagram about a Quebec musician. The musician successfully argued the case should be heard in Alberta because one of his concerts in the province had been cancelled after the poster tagged the concert promoter in a post describing the allegations. 

Enforcing Canadian judgments abroad can require separate legal steps in the country where the defendant or assets are located.

After the court ordered her to pay $1.5 million, the person behind the posts refused because, she said, “it’s in a different country.” U.S. law is on her side: the 2010 SPEECH Act bars American courts from enforcing foreign defamation judgments in cases where the statements wouldn’t be considered defamation under the far more speech-protective American law. 

Platform responsibility

In the United States, federal law provides broad immunity to online platforms for user content. In Canada, courts have addressed jurisdiction over platforms, but the scope of intermediary liability — and when a platform becomes a ‘publisher’ — continues to evolve.

“Normally, the crux of defamation is publishing defamatory words — but the definition of publishing is very, very broad,” Young said. How far that definition should extend to platforms is still a matter of debate and ongoing legal policy discussions, she noted. Ideally, she said, Canadian law can find a balance that avoids chilling online speech while also requiring platforms to make reasonable efforts to prevent harmful content. 

How courts resolve that question will affect possible remedies for people targeted by social media defamation: if tech companies ultimately have additional responsibilities to police content on their platforms, that may provide new avenues for people to ask for content to be taken down; it would also open up a clearer path to sue platforms for defamatory content they host. 

This was a key question in a 2019 defamation lawsuit filed by B.C.-based billionaire Frank Giustra, who sued Twitter over user posts that accused him of pedophilia, along with a dog’s breakfast of other bizarre claims, including connections to the Pizzagate and QAnon conspiracy theories. 

Twitter argued the case should be heard in California, where legislation protecting platforms from defamation claims would have made it unsuccessful. The B.C. Supreme Court and Court of Appeal allowed the case to go ahead in the province, but didn’t address whether Twitter could be held responsible for the posts. Giustra and Twitter settled in 2023, so the question remains open. 

Responsible communication

Defamation allegations are thrown around routinely on social media, often levelled without basis at journalists by those who take issue with their reporting. 

Occasionally, these claims make it to court — like the unsuccessful effort by two Ontario doctors to sue journalists who reported on their advocacy for ineffective COVID-19 treatments and their opposition to vaccines and public health measures during the pandemic. 

As publishers on social media, journalists need to be aware of all of the same standards as in print; they also have access to the same defences if accused of defamation. Even where a defence may exist, journalists and commentators can still face cost, delay and risk if they publish or repost unverified allegations.

The newest defence against defamation, responsible communication, was developed in the context of traditional journalism, and there are still few examples of how courts will define “responsible” in a social media context. 

The defence was established in Canadian law in the 2009 Supreme Court Grant v. Torstar case — coinciding with the peak of the blogging era. As it was being worked out in lower courts, media workers and legal scholars debated how or to what extent it would be applied to self-publishers and others working outside of traditional media. The Grant case settled this: the Supreme Court concluded that people writing on their own blogs could also use the defence, on the same terms as journalists in a traditional newsroom. 

To use the defence successfully, publishers must show that they took reasonable steps to ensure their reporting was accurate and fair. Canadian courts have set out a general idea of what that looks like — including verifying the information and considering the reliability of its source, seeking comment from the person named and including their response along with other relevant information. 

In theory, Young said, courts should take the context into account when determining what it means to be responsible, although in practice, she noted, some courts have approached the defence more like a checklist of steps publishers need to show to make their case. 

It’s less clear what an equivalent process would look like for something like a tweet or other short post on social media, where there is typically far less space for background information. “Courts have struggled to apply that defence outside of the most traditional journalistic contexts,” Smith said. 

“It’s clear enough that the test is flexible, that it’s not meant to be a checklist, that the actual question at issue is whether the person acted responsibly before publishing — and that is context-specific, so it would be wrong not to consider the fact that something is published on social media and therefore maybe it doesn’t make any sense to provide all this context or background information,” she explained. 

Many of the factors relevant to the defence are “highly journalistic in nature and don’t make a lot of sense in other contexts,” she explained. 

Still, the safest approach is the status quo: “You wouldn’t put anything on social media that you wouldn’t put in another forum, because you could be called upon to defend it,” Young said.