Will Congress Finally Pass Cybersecurity Legislation?

Congress is back in session, so I thought now would be a good time to talk about the Cybersecurity Information Sharing Act (CISA), one of the more prominent bills believed to be a high priority among party leaders (assuming there isn’t a shutdown). The timing for the bill is excellent, with the Ashley Madison hack fresh in our minds, but its passage is currently opposed by several privacy and civil liberties organizations, with the Electronic Frontier Foundation even starting a campaign against the law. And while I like the EFF, I like arguing more, and I happen to disagree with them on this point. Cybersecurity legislation has proven surprisingly difficult to pass in recent years, primarily because it always seems to implicate civil liberties. Yet I don’t think the necessity of modern cybersecurity legislation is up for debate. So let’s dive in.

As CISA’s name suggests, the Act is based around facilitating information sharing between public and private organizations. Information sharing is important for cybersecurity because vulnerabilities are generally not known to the potential victim, and it’s hard to protect yourself against something when you don’t know what to look for. (For example, the antivirus program you hopefully have on your computer is basically just a giant list of viruses, so you don’t want the newest virus to be missing from your list.) Information sharing initiatives were popularized in the 90s, but have only recently picked up steam. The problem thus far seems to have been that information sharing by private companies could entail legal liability, particularly if the “information” was learned because the company was hacked, or if the information was protected by privacy laws; whereas information sharing by the government is not always in line with national security interests, as a vulnerability can be exploited by the NSA as easily as it can by a cyber-criminal. Or at least that is the general conception. Furthermore, information sharing can greatly increase a vulnerability’s visibility, potentially increasing exploits, as the information inevitably reaches some bad actors as well. (For example, Microsoft historically provided security patches on the second Tuesday of every month, fixing known problems. This “patch Tuesday” was notoriously followed by “exploit Wednesday,” as all the hackers now knew where the vulnerabilities were, and most people hadn’t installed the patch yet.)
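
(To make the “giant list of viruses” point concrete, here is a minimal, purely illustrative sketch of signature-based detection in Python. The hash values and names are made up, and real antivirus products use far richer signatures, but the underlying limitation is the same: you can only detect what is already on your list, which is why sharing the newest indicators matters.)

```python
import hashlib

# Hypothetical "signature database": hashes of files known to be malicious.
# The values below are placeholders, not real malware hashes. A real product
# ships a far larger list and updates it constantly, which is exactly why
# timely information sharing matters.
KNOWN_BAD_HASHES = {
    "0123456789abcdef0123456789abcdef",  # an older, widely shared sample (placeholder)
    # a newly discovered sample's hash only helps you if someone shares it
}

def is_known_malware(path: str) -> bool:
    """Return True if the file's hash matches a known-bad signature."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest in KNOWN_BAD_HASHES
```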

So with that wind-up, let’s turn to CISA. The bill has four substantive sections worth discussing: Section 3 facilitates government vulnerability sharing with the private sector and the general public; Section 4 clarifies the legality of cybersecurity measures utilized by the private sector; Section 5 facilitates private sector vulnerability sharing with the government; and Section 6 grants immunity to private sector actors utilizing CISA. I’ll delve into each section and explain its likely rationale, potential problems, and overall utility.

Section 3 – Government -> Private Sharing

Section 3 is probably the least controversial. This section directs several agencies to create procedures for sharing cyber threat indicators with private entities, and encourages sharing unclassified information with the general public. Basically, it tells government agencies that have cyber threat information that they should make an effort to share that information with the private sector, and to a lesser extent the general public. As far as I can tell, the only real criticism that can be levied against this section is that these disclosures aren’t mandatory, or aren’t structured such that private companies can compel information sharing. But honestly, neither is a realistic alternative, and increased transparency, even comparatively minor transparency, is still a good thing.
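
(CISA leaves the actual formats and procedures to the agencies, so it’s hard to say exactly what a shared “cyber threat indicator” will look like. As a rough illustration only, with hypothetical field names loosely modeled on common threat-intelligence conventions, a shared indicator might boil down to something like this:)

```python
# Hypothetical example of a shared "cyber threat indicator." CISA delegates
# the real format and procedures to the agencies; these field names are
# illustrative only, loosely modeled on common threat-intelligence practice.
threat_indicator = {
    "indicator_type": "malicious_ip",
    "value": "203.0.113.42",  # documentation/test address, not a real attacker
    "description": "Source of repeated credential-stuffing attempts",
    "first_observed": "2015-09-01",
    "confidence": "medium",
    "suggested_action": "block at the perimeter firewall",
}
```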

Section 4 – Private Sector Cybersecurity Practices

Section 4 is the fun one. This section “permits private entities to monitor and operate defensive measures to detect, prevent, or mitigate cybersecurity threats or security vulnerabilities” on their own information systems, or on another’s information system if they have authorization. At first glance, this seems obvious. Surely companies can police their own systems? But then again, what exactly do “defensive measures” entail? CISA’s definition is pretty generic, but it includes a provision that “the term ‘defensive measure’ does not include a measure that destroys, renders unusable, or substantially harms an information system or data on an information system not belonging to (i) the private entity operating the measure; or (ii) another entity or Federal entity that is authorized to provide consent and has provided consent to that private entity for operation of such measure.” Translation: as long as they don’t destroy, render unusable, or substantially harm it, private companies are empowered to take “defensive measures” that impact other people’s information systems, even without authorization.

So what could this mean? The primary concern this raises is that it legalizes “hackback”: retaliatory hacking. And while I doubt it goes that far, it seems to go pretty far. The language in CISA is notably passive: measures are “applied” to information systems, and they are still defensive measures. Hiring a firm to hackback after the fact seems too active. But if you hire a firm to actively police your networks, they may be able to approach hackback activities. (Think of hiring a guard dog that chases intruders even after they are off your property. The legal analogue would be “hot pursuit.”) The same logic would apply to active defenses that are entirely automated, effectively allowing for cyber-booby traps. (Something akin to a bank dye pack.) And with computers, you could write a whole software package that executes when stolen, causing all kinds of havoc on the hacker’s computer. And while such a measure cannot “destroy, render unusable, or substantially harm” someone else’s information system or data on that information system, this certainly seems to legalize measures that gather data on the attacker.
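
(For a sense of what a comparatively tame, data-gathering measure could look like in practice, here is a minimal sketch of a canary-style decoy running on the defender’s own server. Everything in it, including the decoy path, log file, and port, is hypothetical; the point is simply that such a trap collects information about whoever trips it rather than reaching out and harming their machine.)

```python
# Minimal sketch of a canary-style decoy, assuming the defender controls the
# server it runs on: a URL that no legitimate user should ever request, which
# records details about anyone who does. Real deployments handle logging,
# alerting, and false positives far more carefully.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="canary.log", level=logging.INFO)

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/backups/passwords.xlsx":  # hypothetical decoy path
            logging.info("Canary tripped by %s (%s)",
                         self.client_address[0],
                         self.headers.get("User-Agent", "unknown"))
        self.send_response(404)  # reveal nothing useful to the requester
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```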

Is this a major problem? It’s hard to say. I don’t find hackback as troubling as most people do, but I would like to see more protections for individuals if the bill really does go that far. While we may shrug off aggressive defensive measures against criminals, cyber-criminals are notorious for infecting innocent machines, which they then use to conduct other attacks. If your computer is infected and used to attack another computer, do you want that other computer to be able to snoop around yours? Or what if your computer is totally innocent, but someone else thinks you are committing a crime? And if you aren’t tech-savvy, would you even know that your computer is being legally hacked? These are difficult questions, and a law that potentially legalizes some hackback activity should be clear as to how these issues will be addressed. And while ambiguity in this area isn’t a new problem, this seems as good a time as any to remove it.

(Section 4 also authorizes private companies to receive threat information from, and share it with, other private companies and the Federal Government. This provision is difficult to comment on without the agency regulations explicating how it will operate, so I’m mostly skipping it.)

Section 5 – Private Sector -> Government Sharing

Section 5 is basically Section 3 in reverse, providing an official means for private companies to share vulnerability information with the Federal Government. Most of the specifics for this section are delegated to the Attorney General, but there are requirements for privacy and civil liberties protections, specifically retention limits and sanctions for government violations. There are other protections as well, like maintaining certain privileges and trade secrets (protections that would ordinarily fizzle once secrecy is broken), but most of CISA is directing other agencies to write the more comprehensive rulebook, while the Act itself simply outlines the major concerns. So while it isn’t exhaustive, it isn’t designed to be.

The main criticism I’ve seen levied at this section is that it doesn’t prevent shared personal information from being used for law enforcement purposes. This is both true and complicated. CISA specifies what information received through the information sharing program can be used for, and that does include several law enforcement purposes. CISA allows shared information to be used (1) to prevent an imminent threat of death, serious bodily harm, or serious economic harm; (2) to address a serious threat to a minor, including sexual exploitation and threats to physical safety; or (3) to prevent, investigate, disrupt, or prosecute an offense arising out of (1), or any serious violent felony, fraud and identity theft, espionage and censorship, or the protection of trade secrets.

So that’s quite a bit to unpack. Basically, if the government receives information relating to a serious crime, it is allowed to act on it. To some, this is an affront to the 4th Amendment, as this information seems to have been obtained without the usual restrictions on Government searches and seizures. Putting aside the 4th Amendment arguments, which are weak, this is largely a normative question. Should the Government be allowed to use information obtained for non-law enforcement purposes for law enforcement purposes? I tend to say yes, but there are legitimate concerns about the manner in which it will be done. Transparency for the program will be low, as most of the information shared will not be made publicly available; discriminatory use of the data also warrants serious concern (e.g. only investigating potential terrorist threats if the target is Muslim), and would be difficult for outsiders to identify given the aforementioned lack of transparency; and there is always the threat of law enforcement abusing the system to circumvent 4th Amendment protections. While CISA attempts to address these concerns, they are hard to guarantee, particularly to a cynical outsider.

While I can appreciate these concerns, I don’t think they are damning. Most are technically covered by other laws, or indeed CISA itself. Section 7 provides for oversight, which includes reports to Congress, as well as independent review by the Privacy and Civil Liberties Oversight Board. While we may quibble about frequency (yearly for the government; every 2 years for the PCLOB), I think this is a decent system. Secrecy is always troubling, but it is a necessary evil, particularly when dealing with cybersecurity, as total transparency would magnify each vulnerability immeasurably.

Section 6 – Immunity

Which brings us to Section 6, arguably the most controversial aspect of CISA. This section provides immunity to private companies for activities done in accordance with CISA. Granting legal immunity is never going to be popular, but in this case it is probably a necessary evil to facilitate information sharing. Since the program is voluntary (Section 8 covers this), CISA needs to make information sharing a good business decision for companies, and that simply won’t be the case if participating opens them up to liability. Basically, for an information sharing program to work, you can’t let sharers be sued for sharing. And it’s worth noting that CISA’s protections aren’t unlimited, as it allows for liability in the case of gross negligence or willful misconduct, as well as for activity that isn’t authorized by the Act. This is a high bar, but it does allow redress in particularly egregious cases.

But I think the more interesting point is that the immunity granted in Section 6 only applies to monitoring and information sharing, and doesn’t apply to the defensive measures subsection. This certainly suggests that aggressive defensive measures, arguably the most troublesome activity CISA authorizes, will be more easily challenged in court. While consumers won’t be able to complain about mere data sharing, data sharing is a comparatively minor problem. Monitoring, though more troublesome, still requires written consent for anything not on one’s own networks, so it does not authorize indiscriminate surveillance in the name of cybersecurity. And while CISA does not expressly state that individuals can sue over wrongful defensive measures, the structure of the Act certainly could be interpreted that way, and this implication alone should temper the use of more aggressive defensive measures.

Broader Thoughts

So is the bill good, overall? Apart from the ambiguity with regard to defensive measures, which should be clarified, most of CISA’s sections seem reasonable, and don’t strike me as warranting the concerns others have raised. My main criticism would be that I’m not sure the Act ultimately accomplishes much: while more formalized procedures for information sharing are certainly beneficial, most of the Act only seems to clarify what we already knew, and doesn’t attempt to address the more difficult challenges facing cybersecurity. My concern is ultimately that by passing one cybersecurity act, Congress will check this off its to-do list and leave these other problems unaddressed. But judged purely on its own merits, this appears to be an appropriate information sharing act.

Until next time.

-Scott
