I finally got around to listening to the oral argument for the Supreme Court case Elonis v. United States, which attempts to determine when threatening language online can be proscribed under the First Amendment, and I’d like to talk about it. The case involves a man who wrote threatening rap lyrics on his Facebook page about killing his ex-wife. I always find First Amendment debates fascinating, and the application of old First Amendment law to modern technologies seems right up my alley.
At its core, the case is very simple. “True threats” are one of the categories of speech that are exceptions to the First Amendment, and therefore not subject to “strict scrutiny,” the highest legal standard. The question is two-fold: what mens rea requirement do we apply to true threats, and does communicating a threat through social media implicate the defendant’s mens rea?
The first question is a classic Model Penal Code mens rea problem: do we require purpose, knowledge, recklessness, or negligence? Purpose is the highest standard: we only convict if you spoke with the purpose of threatening. (Speech like this would be punishable under any of the standards.) Knowledge is a bit hazier: you didn’t say it to threaten, but you knew that they would feel threatened by it. Recklessness means you recklessly disregarded the risk that they might feel threatened, and negligence appears to be a reasonable person test, with no regard for what the defendant actually thought. At Elonis’ trial, the jury instructions effectively applied negligence (“a reasonable person would foresee it would be interpreted as a threat”). The government is arguing for negligence and Elonis is arguing for knowledge (originally he argued for purpose, but has retreated somewhat).
The challenge here is that threats involve two layers of subjectivity: what the threatener thought the threat meant, and what the threatener thought the threatened person would think the threat meant. (You might have to read that sentence twice.) And this challenge is not helped by the Court’s precedents, which basically say that a threat can be punished when the defendant intentionally conveys a true threat. (In short, “threats” are protected speech; “true threats” are not.) This proximity in terms makes discussion of the subject matter difficult. The mens rea at issue is that of the true threat. The first “intentionally” is somewhat superfluous and misleading, as obviously the speaker intended to speak the words (the mere threat). But this does not necessarily mean that the defendant must intend that the threat they spoke be a true threat; that is the issue.
The justices’ questions during oral argument gave some indication of how they were leaning, but there was little in the way of consensus. Each Justice (apart from Thomas, characteristically silent) seemed to have their own particular issue they wanted to discuss, ranging from the difficulty in proving a defendant’s mindset (Ginsburg) to what audience we should use to determine a reasonable person (Alito). Justice Kagan seemed to be pushing for a recklessness standard as a middle ground, while Chief Justice Roberts seemed concerned with how to distinguish between true threats and expressive speech, particularly rap music. (This included some wonderful Supreme Court rap readings.)
One particularly peculiar problem was Justice Scalia’s insistence that threats require purpose, because any other standard would criminalize simply conveying truthful information to warn another: e.g. reporting that another student had a bomb. (The speaker spoke the words knowing they would be interpreted as a true threat.) The point this seems to ignore is that threats typically must convey violence within the speaker’s control. It’s not enough to say that a bomb is in the building; you must be stating or implying that you have control over the bomb. The only situation where this wouldn’t apply is the person who falsely shouts fire in a crowded theater, purely to sow panic. And while that is almost assuredly illegal, I don’t think it should be classified as a true threat.
Where the Court will ultimately fall on mens rea seems pretty unclear. First Amendment jurisprudence in other areas would suggest purpose, the highest standard, and this is actually reinforced by most state threat statutes, which require proof of a subjective intent to threaten. But the majority of the Courts of Appeals have interpreted the federal statute to apply a reasonable person standard. And while this Court has typically been quite speech-protective, many of the Justices seemed to be leaning towards a knowledge/recklessness standard. I suspect they will ultimately settle on purpose, but that’s relying on First Amendment jurisprudence generally, and not their representations at oral argument.
The emphasis on mens rea at oral argument actually struck me as odd, as most of the discussion around this case has been on the Facebook element: is a threat treated differently if it is sent via social media? The only discussion the court raised on this issue was whether a Facebook threat impacted what a “reasonable person” meant under a negligence standard. The government’s proposed reasonable person test relies heavily upon the context of the threat, so something like the number of Facebook friends of the defendant might impact the inquiry.
While I think the Court was correct in identifying the importance of context, I am uncertain it’s relevant to the reasonable person analysis. The government’s test defines “reasonable person” based on the visibility of the post, with larger audiences creating a more generic reasonable person. This strikes me as at best backwards. Such a framework holds the defendant hostage to the whims of their listeners and fundamentally misunderstands social media by actually punishing those with a large number and variety of Facebook friends. The great irony is that Facebook’s success stems from its lack of contextual requirements for information sharing. By default, all of my friends (and indeed their friends) can see everything I post. There is no guarantee that they will “get” what I’m saying, and I don’t tailor my statements to be understood by everyone who sees them. To assume otherwise artificially dilutes what a “reasonable person” means and actually disregards the context that might be relevant to the inquiry.
This is indicative of a larger problem with this case: Facebook threats are largely public speech, and we don’t hold public speech to the interpretations of the public. Threats are generally interpersonal communications: the fish in a newspaper; the suggestive letter nailed to a door. A threat made via social media takes on a whole new character, as it is as much about the social media audience as it is about the person threatened. The defendant might be making a political statement, attempting artistic expression, or simply “venting” his anger, as argued in this case. If we hold speech to the interpretation of its audience, we practically guarantee that someone will interpret it as a threat, regardless of what the speaker intended, or even what the threatened party felt.
This is the fundamental problem with the government’s position: information online is too easily available. Elonis actually raised a perfect example: a teenager playing an online video game said to an opponent that he would “shoot up a kindergarten and eat one of their still beating hearts.” While the opponent understood that this was intended as a gruesome joke, another listener reported it to the police, who arrested the teenager; his trial is still pending. This type of hyperbolic, violent speech is all too common in online gaming, but that doesn’t guarantee that its listeners will understand it that way. Who is the reasonable person here? A teenager? Anyone playing this video game? It is simply too difficult to control who sees the information we place online, and we cannot restrict speech based upon an audience that is in many ways unknowable to the speaker.
Which is not to say that I think a threat’s status as a Facebook post is irrelevant. Rather, it should be considered among the facts and circumstances that give insight into the speaker’s state of mind. When someone posts a Facebook rap on their own wall that fantasizes about murder (as in this case), that seems less like a true threat and more like bizarre performance art. The same rap written on a piece of paper and nailed to the victim’s front door looks a lot more like a true threat. Or, in Facebook terms: a post made directly on the victim’s wall, or a private chat. Whether they posted the threat on their own wall or their target’s, who could view the threat, the words they used, all of these should be considered in evaluating what the speaker was thinking. But none of these should implicate who we determine a reasonable person to be.
Is a Facebook threat valuable?
Although First Amendment law rarely allows for subjective valuation of speech, much of the government’s argument rested upon the notion that threatening speech isn’t valuable. And while I might be willing to concede this for purely private threats, I don’t think this logic applies to threats transmitted publicly via social media. A threat made privately to another individual only has value to the speaker, and that value must be balanced against the very real harm done to the victim. But a threat communicated publicly arguably has value to the whole community, even if its message is repugnant to the vast majority of its listeners.
For example, in the Supreme Court case Watts v. United States, a draft resister publicly threatened to shoot President Lyndon B. Johnson, and there the Court said it was “crude political hyperbole,” not a true threat. It wasn’t tasteful, but it was important speech. And while Elonis’ statements aren’t political, I don’t think we can say objectively that non-political public threats are inherently valueless. Although Justice Scalia grumbled a few times that this speech isn’t particularly valuable, I think he meant that he doesn’t find this particular speech to be valuable, which is legally irrelevant. If a class of speech potentially has value, we don’t adjudicate an individual’s First Amendment rights based on our subjective valuation of what they said.
Yet even if we evaluate the speech at issue, I think it would be difficult to say it’s valueless. Elonis’ Facebook posts were in the form of stylized rap lyrics, many of which comment on the speaker’s free speech rights.
Did you know that it’s illegal for me to say I want
to kill my wife?
It’s indirect criminal contempt.
It’s one of the only sentences that I’m not allowed to say.
Now it was okay for me to say it right then because
I was just telling you that it’s illegal for me to say I
want to kill my wife.
Elonis goes on to insinuate that he will blow up his ex-wife’s house with a mortar launcher, all while framing it as a discussion of what is legal and illegal to say. And while it wouldn’t be unreasonable to conclude that he made these statements to put his ex-wife in fear, it’s also difficult to say that the speech is completely valueless.
And even Elonis’ raps that amount to little more than fantasies about murdering his ex-wife would be difficult to distinguish from those of a famous rapper like Eminem. The government, perhaps showing its weakness on this point, attempted to dismiss these comparisons with “clearly” arguments: Eminem is an entertainer, so clearly no one would interpret his lyrics as true threats. And while I agree Eminem isn’t likely to be interpreted as threatening anyone, free speech is not enjoyed exclusively by “entertainers,” and I don’t see a principled distinction between Eminem’s lyrics and the Facebook posts in this case.
Yet if we take the view that public “rap-threats” are legal, are we creating a framework for legally threatening others, as Justice Alito suggested? You can threaten to murder someone as long as it’s posted publicly on Facebook; or as long as you put an LOL at the end; or as long as you phrase it as a rap. Is a threat announced publicly on Facebook any less threatening to its target? But on the other hand, that is kind of the point: we want to have clear indications of what is a true threat and what isn’t, and if adding an LOL to the end is the distinguishing factor, so be it. The Court’s two motives are in conflict: it wants clear rules for the speaker, but it also doesn’t want arbitrary distinctions in what should be a holistic analysis.
Personally, I think all of these considerations warrant the use of a purpose standard. We don’t need to establish bright line rules that raps are protected per se; we simply take these factors into consideration when determining the mindset of the defendant. If a jury believes beyond a reasonable doubt that the defendant’s public Facebook rap is truly a veiled attempt to put their target in fear of bodily injury, they can convict, notwithstanding its feigned artistic expression. And while I am sympathetic to the situation of the threatened person, social media provides ample opportunities to block, ignore, or otherwise not be exposed to the threatener’s speech, and attempts to circumvent these mechanisms could be prosecuted as harassment or used as further evidence of purposefulness.
I dislike the knowledge standard proposed by Elonis because this places the speaker at the whims of those about whom they speak. Should we say Eminem cannot rap about his ex-wife (he has written several) because she tells him she feels threatened by it? That would satisfy a knowledge standard. And must the threatener know that the fear is real? What if the threatener thinks they are lying? And what if an unconnected listener feels truly threatened, despite there being no rational reason for them to feel that way? I can certainly conceive of ways to make such a system workable, but ultimately I think placing free speech rights into the hands of listeners is antithetical to the concept of free speech.
My mantra with regard to the First Amendment exceptions is that they are slowly being narrowed until they actually satisfy strict scrutiny. The fighting words exception only applies when someone’s words literally force you to attack them, a concept I find comical; incitement similarly requires your words to have an immediate impact on the listener, such that there is no time to intervene between the speech and the violent activity advocated; and obscenity requires the material to have no serious literary, artistic, political, or scientific value. They aren’t all gone: false statements of fact are still regulated, as is speech that is owned, as in copyright. But each of these exceptions is being continuously narrowed to allow the maximum amount of speech possible, and is only restricted when there is a compelling government interest. So while I don’t think threats will ever be entirely legal, the trend line of the First Amendment suggests that they will exist only in a narrowly tailored form, and only in response to a compelling government interest.
I was going to sign out with a “rap” pun, but it was so bad it should be criminal.