I’ve wanted to talk about Gamergate for quite some time now. As a self-proclaimed casual gamer, I find that seeing the ugly underbelly of the video game world exposed is both harrowing and eye-opening, and it provokes a lot of thought. For those who don’t know, Gamergate is a social media phenomenon (see also #gamergate) wherein a critique of video game journalism rapidly descended into a misogynistic free-for-all involving widespread harassment of female game designers and game critics. While discussions about sexism and misogyny in gaming did not originate with Gamergate, the extremity of the backlash in this instance does seem to be unique, and it has led to numerous discussions about gamer culture, internet culture, and the host of problems they present. I don’t want to oversimplify what is inevitably a complicated issue, but it’s hard to deny that the reason #gamergate became such a big deal is its implications for gender politics.
Quick background: around August 2014, the ex-boyfriend of video game developer Zoe Quinn insinuated that her relationship with video game journalist Nathan Grayson had led to overly favorable coverage of her recent game Depression Quest, an accusation that sparked significant e-harassment of both Quinn and her female supporters. Those supporting Gamergate claimed they were concerned with ethics in video game journalism, whereas their critics pointed out their failure to identify any serious ethical breach (Grayson never reviewed Quinn’s games during their relationship) and argued that the Gamergate controversy was really a front for a culture war against the diversification of the “gamer” identity and the increasing criticism of the socially regressive elements of video games. Most striking was the Gamergate supporters’ focus on women as targets of harassment, including threats of death and rape, while they ignored similar arguments raised by men.
Gamergate’s abundance of issues makes it difficult to tackle; it seems like half of the debate is over what specifically should be debated. Yet the e-harassment of women has surely been the single biggest point of discussion, both because of its vicious nature and because of the difficulty in remedying it. Indeed, Twitter’s inability to effectively police the bad behavior of its users led some to completely abandon the social media platform. And while the methods of harassment certainly warrant discussion (particularly doxing) if we want to curb e-harassment, I’d like to focus on the architectural problems of these websites that facilitate online abuse.
There is certainly no shortage of verbal bile spewed in any online gaming environment, but Gamergate is unique in that it is primarily anonymous harassment of social and cultural commentators through their public media outlets, especially Twitter. Those who are attacked are easily identifiable through their official Twitter handles, whereas the attackers operate primarily through new accounts created specifically for the purpose of harassing. In this manner, Twitter’s processes of account blocking and banning are rendered ineffective, as the harassers are in no way impeded from creating new accounts to further harass, and creating a new Twitter account requires only an email address. (I actually tested this. Ironically, the hardest part was coming up with a username. Apparently all permutations of MyFakeTwitter123 are taken.)
A Series of Boxes
The problem Twitter exemplifies is structural, and common to social media sites generally, so I think it’s worth considering a taxonomy to help categorize the various ways that users interact with other users online, and specifically the degree of identifiability that is communicated between users in a particular social network.
I would divide the internet into three boxes. The first box I’ll call the total identifiability system. This is where every user in the social network is fully identified to all those they interact with. The most popular example would be Facebook. This system is the closest parallel to reality, as any harasser is known (or at least knowable) to their target, and they also know that any harassment will be traceable back to them. This system of high identification provides the greatest inter-user accountability, and allows the site administrators to police bad behavior effectively. This efficacy of policing, coupled with the relative difficulty of creating a new false profile, prevents bad actors from engaging in recurrent, harassing behavior. And even if the site moderators fail to take action, the fact that the harasser is identified enables social policing that nonetheless stymies bad behavior. (Indeed this led to the amusing outcome where a target of e-harassment responded by contacting her harassers’ mothers.)
The second box I’ll call the total anonymity system. This is the opposite of the total identifiability system, wherein no user has any information about any other user. Information is posted anonymously and commented upon anonymously, and there is no direct means of contacting a specific individual, as there is no specific identifier by which to track them. The example I would point to is Yik Yak. Yik Yak is essentially an anonymous local bulletin board, where users post anonymous comments, which can then be anonymously replied to. Popular comments are up-voted and unpopular comments disappear with enough down-votes, with the primary draw being that posting and commenting are limited to within a relatively small distance (so fellow Yik Yak users are all physically nearby). Yik Yak is particularly popular among younger generations, especially on college campuses and in high schools. Yik Yak, like all total anonymity systems, effectively prevents direct harassment because of the difficulty of identifying other users in a totally anonymous environment. This is coupled with a policy that prohibits harassing comments generally, and that can ban the devices of repeat offenders (such as those posting anonymous comments singling out individuals, who are likely to be identifiable given the physical proximity of the users involved). While arguably more amenable to casual abuse than total identifiability networks, total anonymity makes targeted harassment difficult, and limits any repercussions that may be felt by the target.
The third box is the mixed system. The mixed system allows for the identification of individuals by their real world identity, but also allows for anonymous or pseudonymous users, and has little accountability for the latter. This, unfortunately, is the system employed by most of the internet, and is the primary problem with e-harassment in the Gamergate controversy. There is a reason most of the discussed harassment of women occurs via Twitter: Twitter is currently the most popular mixed social media platform. While many users on Twitter are identified (The Rock, President Barack Obama, Pope Francis), there is no requirement that users be identified, allowing individuals to create parody profiles, alter egos, and profiles specifically for harassing. Given the exceptional ease with which Twitter allows you to create new profiles, there is effectively no accountability through the banning of pseudonymous accounts, as harassers frequently create new accounts to continue their harassment. This coexistence of anonymity with identifiability at the user’s discretion is at the heart of the problem of e-harassment, and in its structure lies the potential solution to much of the difficulty presented by e-harassers.
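The three boxes can be sketched as a small Python taxonomy. This is purely illustrative (the class and attribute names are my own invention, not any platform’s API), but it makes the dangerous combination explicit: targets are identifiable while attackers can stay anonymous, which is exactly the mixed system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NetworkBox:
    """One of the three identifiability 'boxes' described above."""
    name: str
    targets_identifiable: bool   # can a user be found under their real identity?
    anonymity_allowed: bool      # can a user hide behind a pseudonym or no name?

    def permits_asymmetric_harassment(self) -> bool:
        # The problematic combination: identifiable targets plus
        # unaccountable, anonymous attackers.
        return self.targets_identifiable and self.anonymity_allowed


TOTAL_IDENTIFIABILITY = NetworkBox("total identifiability (e.g. Facebook)", True, False)
TOTAL_ANONYMITY = NetworkBox("total anonymity (e.g. Yik Yak)", False, True)
MIXED = NetworkBox("mixed (e.g. Twitter)", True, True)

for box in (TOTAL_IDENTIFIABILITY, TOTAL_ANONYMITY, MIXED):
    print(f"{box.name}: asymmetric harassment possible = "
          f"{box.permits_asymmetric_harassment()}")
```

Only the mixed box comes out `True`: in the first box the attacker is as exposed as the target, and in the second the target is as hidden as the attacker.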
The Strong and the Weak
My discussion thus far has focused on the structure of social networks vis-à-vis the users, but this isn’t the only relevant relationship. Within each of these social network structures we must also consider the relationship between the user and the network itself. I will describe this relationship as being strongly identifiable or weakly identifiable. Although identifiability exists on a spectrum, for our purposes I think this division is appropriate. And I should note that for the purposes of this discussion I am keeping the level of identifiability distinct from the potential recourse that is employed by the site moderators. What the moderators choose to do with the user’s identity may vary from banning users to suspending accounts to nothing at all. The point emphasized here is not what is done, but what they can potentially do: a moderator with only an email address has little power over the actual user, as creating new accounts is extremely easy; whereas a site administrator with a greater amount of information about the user can more effectively punish those that violate site policies.
Weak identifiability would be any system with minimal requirements to interact in a social network environment. Typically these systems require only an email address, and some may not even require that much. Although theoretically possible in a total identifiability network, in practice weak identifiability arises in mixed or total anonymity networks, as these systems require less structural reinforcement to ensure their network’s validity. A weak total identifiability network would have no means of verifying a user’s stated identity, making it a de facto mixed system. These weak systems, although easy to implement, enable harassing behavior by limiting the oversight capabilities of the site administrators. And while most, if not all, social networks oversee user accounts, this has little practical effect when the barriers to creating a new account are effectively nil.
Strong identifiability, by contrast, refers to a system where the social network has a large amount of information about its individual users, regardless of whether the site itself requires users to identify by their real names to other users. This most often occurs in situations where money is involved, but can theoretically be employed by any site that wishes to hold its users accountable for their actions. The most popular example of such a system is again Facebook, which requires users to provide their real names; challenged accounts require real-world identification to validate their identity. While this system does not completely prevent false accounts, it makes such accounts much more difficult to maintain if used in a harassing manner.
Strong identifiability can also be used in mixed systems, as is often seen in paid online gaming. I’ll use the example of Battle.net, the account management service for Blizzard games (Starcraft, Warcraft, and Diablo). Battle.net is a cross-platform game manager, and allows for many different forms of interaction between its users, ranging from a “Real ID” (allowing users to identify by real name to each other), to a “BattleTag” (a cross-platform pseudonym), to having no outward account connectivity (each character or account on each game is separate). Yet in all of these cases, Blizzard requires the user to maintain a single Battle.net account, so that Blizzard can enforce its policies effectively. And while this system is not foolproof (players can still make free trial accounts), the difficulty (and cost) of creating multiple separate accounts is an effective deterrent to bad behavior.
Ultimately, the important point about strong and weak identifiability is the degree to which moderators are able to punish violations of their policies. And although the various boxes of identifiability play an important role in preventing e-harassment, the true problem is that most websites employ only weak identifiability. The continuous push towards ease of access as a means of encouraging a larger user base also facilitates a simple workaround: when you make your site too easy to join, you effectively eliminate the impact of policing bad behavior, especially if the bad actor’s ties to any given account are relatively weak.
Application and Solutions
As might be clear by my discussion of the various boxes and their variable strength, I think the easy solution to the majority of e-harassment cases is a push towards strong identifiability for all social media sites, regardless of which box they fall into. Even a fully anonymous network can require the users to be known to the network, and by doing so, this allows the network administrator to effectively police bad behavior. Networks should require users to create a single, strongly identifiable profile, from which they may generate multiple pseudonymous or anonymous profiles as they wish. This would maintain the desired level of identifiability between users, while ensuring that the network could hold those users accountable.
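As a rough illustration of that last idea (all names here are hypothetical, not any real platform’s design), the key structural move is that pseudonymous profiles hang off a single strongly identified root account, so a moderator’s ban lands on the root rather than on any one pseudonym:

```python
from dataclasses import dataclass, field


@dataclass
class RootAccount:
    """Strongly identified: the network knows who this person is."""
    real_identity: str            # e.g. a verified name or payment details
    banned: bool = False
    profiles: list["Profile"] = field(default_factory=list)

    def create_profile(self, display_name: str) -> "Profile":
        # Users may spin up as many pseudonyms as they like...
        profile = Profile(display_name=display_name, owner=self)
        self.profiles.append(profile)
        return profile


@dataclass
class Profile:
    """Pseudonymous or anonymous toward other users."""
    display_name: str
    owner: RootAccount

    def can_post(self) -> bool:
        # ...but every pseudonym answers to the same root account.
        return not self.owner.banned


alice = RootAccount(real_identity="Alice Example")
egg1 = alice.create_profile("AnonEgg123")
egg2 = alice.create_profile("AnonEgg456")

alice.banned = True           # moderators ban the root, not the pseudonym
print(egg1.can_post(), egg2.can_post())  # both profiles are silenced at once
```

Other users only ever see the `display_name`; the real identity stays between the user and the network, which is what lets the desired level of inter-user anonymity coexist with accountability.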
This is not to say that I think strong identifiability should be mandated. Rather, it should be a logical step for the larger, more widely used networks as a means of ensuring a consistent user base. Although Twitter currently has a near monopoly on its particular brand of social media, it is not unthinkable that a Twitter 2.0 could emerge that would utilize stronger identifiability, and thereby circumvent the problems of e-harassment. As the push for greater accountability against e-harassers increases, websites that prioritize accountability will necessarily be in an advantaged position, and ultimately overtake those that do not. Therefore, the logical response for mainstream websites is to proactively seek out greater accountability to ensure their continued appeal.
Obviously this solution has its difficulties. It relies on the assumption that creating new accounts is difficult, despite the fact that encumbrances on the accessibility of a new social network can be fatal, which is why websites try to make signing up as simple and easy as possible. Many users are also uncomfortable with the prospect of providing more personal information to these corporations given the uncertainty of how it will be used. And this is not to mention the difficult burden placed on the network itself to police the content of its users in an environment that thrives upon free speech. Threats are notoriously ambiguous, and this ambiguity is compounded by the difficulty of interpreting them in an online environment. (This issue is currently pending at the Supreme Court.) And perhaps most fundamentally, these networks are big. Policing a social network of 271 million active users is a daunting task, even when you have the means to hold individuals accountable.
I don’t have any illusions that this will be a magic solution. I don’t even have any illusions that strong identifiability will be widely adopted. But I do think that movement towards greater accountability online would be generally beneficial, whether on individual sites or in the form of a generalized e-citizenship, as was recently implemented by Estonia. The latter is a particularly interesting development, and may warrant an independent discussion. But that’s for another time.