The mental image is so familiar: a single person, probably seated, in a room filled with computer screens, each one showing a small feed from a security camera. Who do you picture in the chair? A police officer? A night watchman? The NSA in human form? Wrong! The person in the chair is you! That is the gimmick of Insecam, a website that capitalizes on the widespread failure of users to change the usernames and passwords to their web-cameras by streaming their videos online.
I’ll repeat that.
Insecam connects to web-cameras, guesses that their usernames and passwords are still the defaults, and posts the live video online for everyone to see. Upon entering the website, Insecam gives you a slightly haughty mission statement: “This site has been designed in order to show the importance of security settings. To remove your public camera from this site and make it private the only thing you need to do is to change your camera password.” Right.
Now I know what you’re thinking: this cannot be legal. And you’d be right. (Unless you currently own a web-camera, in which case you’re probably thinking “S***!”) Frankly, I’m amazed the website is still up and running, given that it was just recently covered on CNET and CBS. (I don’t have any secret sources, so if I’m talking about something it’s probably fairly mainstream.) As best I can tell, the site manager seems to think that the failure to change the default username and password effectively makes the content publicly broadcast. The mission statement even says as much: all you have to do is change your password and your camera will be private again. Unfortunately for Insecam (and fortunately for everyone else), this is not the way it works.
I’ll briefly discuss the law here, before moving on to more interesting stuff. The relevant law would probably be the Computer Fraud and Abuse Act (CFAA). Although the US has quite a bit of legislation regulating online content, the CFAA is by far the most commonly used, probably because of its massive scope. The CFAA was enacted in 1986, long before the Internet’s ubiquity had taken hold (did people even have computers in 1986? what did people do at work?), and was therefore designed primarily with government computers and smaller networks in mind. The key term, “protected computer,” covers 1. government computers; 2. computers used by financial institutions; and 3. computers used in or affecting interstate commerce or communication. Discerning minds will probably realize that the third category applies to basically every computer connected to the Internet, meaning every computer.
Ok, so the CFAA protects every computer (but not typewriters or calculators, oddly), but how does it protect them? It criminalizes “unauthorized access.” This is basically what it sounds like: if you access someone else’s computer when you aren’t authorized to do so, you will probably be in violation of the CFAA. (I should note, for you mens rea hawks, the CFAA typically requires the unauthorized access to be knowing or intentional. If you accidentally access a protected computer without authorization, you’ll probably be ok.) There are some more nitpicky points as well, but in general, the CFAA provides very broad protection. (Indeed many think the CFAA is too broad: by its definitions, it purports to regulate almost every computer on the planet.)
So let’s apply this to Insecam. The web-cameras are certainly “protected computers,” as they are “used in or affecting interstate commerce or communications”; and Insecam is certainly intentionally accessing them. The only real issue that could be raised is on the point of authorization. Insecam, I imagine, would argue that it was authorized to access the web-cameras, citing the owners’ failure to change their usernames and passwords as evidence of the feeds’ public nature. Except this is an absurd argument. The fact that you can guess someone’s username or password in no way negates its existence. (The case on point is from 1991, and calls the Internet “INTERNET.” Ah, the 90s.) If there were no password at all, this might be a slightly better argument, but it still amounts to saying “it was just so easy, they were asking me to do it.” I’m sure many a bank robber has argued that the bank was “asking to be robbed,” but I imagine the judge would just say they were “asking to be convicted.”
As a brief aside, Insecam probably considers itself a part of the much larger tradition of white-hat hacking. Essentially, these are people who expose security vulnerabilities in computer networks and inform the owner of the flaws. (The white hat signifies that they are really the good guys, although I wouldn’t be surprised if many actually wore white hats. Hackers can be quite self-aware.) Many are paid professionally to attempt to hack in this manner, and some companies even hold competitions challenging white-hat hackers to try to breach their systems. Yet Insecam is not a part of this tradition. There has been no attempt to inform the users of their security vulnerabilities; Insecam is simply capitalizing on (or at least laughing at) the users’ ignorance. It’s akin to stealing from an unlocked house to “teach them a lesson.” Although Insecam has arguably had an impact (news reports of the website led some web-camera manufacturers to alter their default settings to require users to change their passwords), it could have easily achieved the same ends without invading thousands of users’ privacy. The public disclosure of the security flaws, as well as the apparent lack of any effort to contact those being exploited, shows that this isn’t security advocacy; it’s simply exploitation.
This raises several potential discussion points. The most obvious is how to force security on the lay consumer. This was ostensibly Insecam’s purpose: to kick these security-illiterate people into shape. Changing your password from the default setting isn’t exactly asking a lot, but it’s still more than many people will do without provocation. This is why we typically put the onus on the manufacturer: rather than allowing people to keep default passwords, require them to change them before completing installation. Or at least provide a randomly generated password with the hardware, like the obnoxiously convoluted router passkeys we all hate. They are annoying, but they provide much better security, and they are extremely easy to implement.
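Just how easy? Here’s a minimal sketch of what a manufacturer could run on each unit at the factory instead of shipping a shared “admin”/“admin” default (the function name and the 12-character length are my own illustrative choices, not any vendor’s actual process):

```python
import secrets
import string

def generate_default_password(length: int = 12) -> str:
    """Generate a unique random default password for one device.

    secrets (rather than random) draws from the OS's cryptographic
    randomness source, which is what you want for credentials.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each camera off the assembly line gets its own password,
# printed on a sticker on the device rather than in the manual.
print(generate_default_password())
```

With 62 characters and 12 positions, that’s over 10^21 possibilities per device, so Insecam’s “guess the default” trick simply stops working.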
This is actually fairly representative of security generally. Despite constant news of cyber attacks and data breaches, security is actually fairly easy from a conceptual standpoint, especially for the lay public. The basics of good security are all fairly straightforward, easy to implement, and freely available online. The problem is getting people to use them. Security is a hassle, and most people don’t understand, or just don’t want to use, these technologies. For instance, two-step authentication is substantially more secure than simple password protection, and it’s available for many online platforms. But two-step authentication requires two steps, and most users are frustrated enough with the password requirement. Every website seems to have slightly different password requirements, meaning it’s really hard to keep reusing that one password we like. Sure, we know we shouldn’t reuse passwords, but there’s no way we are remembering 50 unique passwords, and do we really care if our Pinterest account gets hacked?
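The “50 unique passwords” problem actually has a well-known fix: remember one secret and derive a different password for each site from it, which is roughly what password managers (and research schemes like Stanford’s PwdHash) automate. A hedged sketch of the idea, with illustrative function names and parameters that don’t correspond to any particular product:

```python
import base64
import hashlib

def site_password(master_secret: str, site: str, length: int = 16) -> str:
    """Derive a per-site password from one master secret.

    PBKDF2 with many iterations makes brute-forcing the master
    secret from one leaked site password expensive; the site name
    acts as the salt, so every site sees a different password.
    """
    digest = hashlib.pbkdf2_hmac(
        "sha256", master_secret.encode(), site.encode(), 100_000
    )
    return base64.b64encode(digest).decode()[:length]

# One memorized secret, a different password everywhere:
print(site_password("correct horse battery staple", "pinterest.com"))
print(site_password("correct horse battery staple", "bank.example"))
```

So if the Pinterest account does get hacked, the bank password it shares a master secret with is still safe.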
One of the nice things about modern authentication is the degree to which it can be minimized. Users tend to be unreliable and typically dislike the process of authentication, so instead we look to other characteristics that suggest authenticity without actually asking the user to do anything. For instance, behavioral tracking is becoming increasingly common. If you frequently act in a certain way (say, logging on to Facebook from a particular device), that behavior can be tracked as a signal of the user’s authenticity. For lower-risk activity, like merely accessing data, conformance with enough of these factors is typically sufficient. It is only for higher-risk behavior, like purchasing something, that explicit authentication is brought back in. This is also how banks and credit card companies detect identity theft and credit card fraud: most people follow certain purchasing patterns, and deviations from those patterns are grounds for suspicion. The IP address, the device, the web browser, and even the time of day are all metrics that can be tracked without requiring information from the user.
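The idea can be sketched as a toy risk score: count how many contextual signals deviate from the user’s usual profile, and only demand explicit authentication for high-risk actions or too many anomalies. Everything below (the field names, the stored profile, the two-signal threshold) is invented for illustration; real systems use far richer models:

```python
# Hypothetical stored profile of one user's past behavior.
KNOWN_PROFILE = {
    "devices": {"iphone-6"},
    "ip_prefixes": {"192.0.2."},
    "browsers": {"safari"},
    "usual_hours": set(range(7, 23)),  # active 7am-11pm
}

def risk_signals(login: dict) -> int:
    """Count how many contextual signals deviate from the profile."""
    signals = 0
    if login["device"] not in KNOWN_PROFILE["devices"]:
        signals += 1
    if not any(login["ip"].startswith(p) for p in KNOWN_PROFILE["ip_prefixes"]):
        signals += 1
    if login["browser"] not in KNOWN_PROFILE["browsers"]:
        signals += 1
    if login["hour"] not in KNOWN_PROFILE["usual_hours"]:
        signals += 1
    return signals

def requires_step_up(login: dict, high_risk_action: bool) -> bool:
    """Low-risk reads pass on familiar context; high-risk actions
    (e.g. a purchase) or too many anomalies trigger explicit auth."""
    return high_risk_action or risk_signals(login) >= 2

familiar = {"device": "iphone-6", "ip": "192.0.2.7",
            "browser": "safari", "hour": 20}
print(requires_step_up(familiar, high_risk_action=False))  # False: just reading data
print(requires_step_up(familiar, high_risk_action=True))   # True: purchase steps up
```

A login from a new device, a new country, and 3am would trip several signals at once, which is exactly the pattern behind the “was this you?” emails and declined-card phone calls.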
My hope is this will be part of a larger shift in authentication generally. Passwords are an example of knowledge factors, where you attempt to authenticate the user based on something only they know. Two-step authentication normally adds to this by requiring a possession factor, which checks for something only the user has. This typically takes the form of a token that periodically generates short strings of seemingly random numbers and letters, but it could be any physical item that can be readily authenticated. (The commonly used example is an ATM card and its corresponding PIN. You need to possess the card and know the PIN to withdraw money.) Although both the knowledge factor and the possession factor often result in something akin to a passcode, the addition of the possession factor makes fraudulent access much harder, as it now requires both knowledge of the password and possession of the token. And some authentication systems add a third factor, often called an inherent factor. This typically takes the form of a biometric, like a fingerprint, but it could theoretically be any identifier that is inherent and unique to the user.
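Those periodically generated strings aren’t actually random: most tokens use time-based one-time passwords (TOTP, standardized in RFC 6238). The token and the server share a secret, and both derive the current code from that secret and the clock; only a device holding the secret can produce the code, which is what makes it a possession factor. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time: float, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The current 30-second window number is HMAC'd with the shared
    secret, then truncated to a short decimal code.
    """
    counter = int(at_time) // step
    msg = struct.pack(">Q", counter)                  # window as 8-byte counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # shared once, at enrollment
print(totp(secret, time.time()))  # a fresh 6-digit code every 30 seconds
```

Stealing a code over someone’s shoulder buys an attacker at most 30 seconds; stealing the password alone buys nothing without the token.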
And while digital security has traditionally relied upon knowledge factors, shifting to possession and inherent factors may be the way of the future. The ubiquity of smartphones makes them ideal possession tokens, but their use as such has been limited by the inconvenience of entering the strings they generate. Anyone who has used a smartphone for two-step authentication (or any similar token) knows that the process is tedious, and tedium is the bane of any novel security process being adopted by the general public. But if the process were altered such that mere possession truly was the only requirement (e.g. the phone transmits a short-range signal that your computer receives, akin to the Apple Pay feature of the iPhone 6), this convenience would make it a realistic alternative to passwords. Furthermore, as biometric sensors become more efficient and more easily integrated into technologies, two-step authentication could become possible without requiring any input from the user. The user’s cellphone would passively authenticate simply with its presence, and the user’s biometrics would passively authenticate through their contact with touchscreens and keyboards. This would greatly improve security, remove the problem of duplicate passwords, and remove any inconvenience for the user.
Obviously this approach still has shortcomings. For one thing, the assumption that everyone has a smartphone capable of authenticating isn’t entirely valid. While anyone can create a password, not everyone has a smartphone, and those who do aren’t guaranteed to always have it with them. Absent something that is truly universal and always in the user’s possession, a smartphone possession factor will always face challenges a knowledge factor avoids. And pure reliance on possession factors makes the loss of the possession factor that much more critical. Similar to the reuse of passwords, over-reliance upon a single possession factor makes it a prime target for theft or manipulation. And although biometrics would provide an additional check, this is only effective so long as the information used remains secret, something that is increasingly difficult in the modern era. One copy of your fingerprint would be all it took to effectively override the security provided by a fingerprint scanner, and you can’t get a new set of fingerprints. And there will always be other types of security breaches, like man-in-the-middle attacks, where authentication alone is insufficient.
But then again, perfect security has never really been the goal. No matter how perfectly crafted your security may be, there will always be flaws. The point is to make security good enough to deter attackers from putting in the effort. Just look at one of the most popular forms of encryption: PGP. That stands for “Pretty Good Privacy.” Pretty good?! That hardly inspires confidence. And while PGP is arguably being modest, “pretty good” is all we really need, and anything that makes “pretty good” more easily implemented at a large scale should be seriously considered.
But until then, please don’t leave your password as “password.”