Sunday, January 06, 2013

Security, Disclosure, Legality

I guess I'm late to this news, but I stumbled across the case of Andrew Auernheimer, who was convicted of one count of identity fraud and one count of conspiracy to access a computer without authorization, for posting to Gawker that AT&T had a data leak that allowed anyone to get information about a set of AT&T iPad users. A description of what occurred can be found for example in this Wired article.  I also recommend this opinion on the matter by Matt Blaze, or this take by Ed Felten.

This case hit home for me because, as you may recall, last year we had an entirely similar situation with Yelp.  We found that they were accidentally leaking personal user information.  We collected data to back our claims if needed, brought it to them, and they fixed it.  Moreover, when we told them this leak should be made public (after they had fixed it), they agreed to do so;  their blog post on the issue remains up.  As I mentioned at the time, it was an exemplary experience.  Now I feel even more so.

Did we handle it differently, by contacting the vendor first?  Yes.  And perhaps these two situations exemplify some of the differences between what some people call full disclosure vs. responsible disclosure.   But it's very unclear to me why what Mr. Auernheimer did -- finding a flaw and disclosing it in the manner he did -- would be considered illegal.  Or, perhaps more to the point, it's not clear to me at this point what the difference is between what he did and what we did, which worries me, because as far as I can see, we should not have been anywhere close to any legal line in how we dealt with the data leak we found.

Certainly we thought our responsible disclosure approach -- contacting Yelp -- increased the likelihood of good will and a good outcome.  And, in retrospect, we may have been depending on our status as university researchers.  A legal action against us would have led to a lot of negative press for Yelp (I think), and we'd have had a lot of support from the academic community.  I should emphasize, though, that I'm not clear that we would have gotten much help from our universities.  I contacted Harvard legal, and they were very hands-off.  Here are some examples of the wording in their response to us.  (Note: this was going through another layer, hence the third-person "the researchers" wording -- it's not just legalese.)
If the researchers move ahead to disclose this publicly, as they intend to do, they should understand that the discovery and announcement is something for which they are responsible in their individual capacities (and should not be held out as an activity done by or on behalf of Harvard).
If there is some liability that results from the discovery or their announcement of it, the researchers should understand that they could not look to Harvard to cover that liability.
There are, as I’m sure you know, laws that prohibit certain kinds of hacking. It’s important for the researchers to be very comfortable that they were not engaged in any activity that could be construed as posing under another name, unauthorized breaking into a site, etc.
In the end, I think we were depending on common sense -- we found a leak, we aimed to get it fixed, we wanted it announced afterwards, for the obvious motivations -- credit, and protecting others.  Auernheimer didn't go to AT&T first, but what he did does not seem completely outside the realm of common sense to me.  (I suppose this is the heart of the full disclosure vs. responsible disclosure debate.)  So how as researchers do we protect ourselves from felony charges?  How as a practical matter do we improve computer security in this legal environment, or how can we change the legal environment to improve computer security while maintaining researchers' rights?

Auernheimer is due to be sentenced in February, although the articles suggest he will appeal his case.

9 comments:

Harry Lewis said...

Here is another way of looking at this, not as something peculiar to computer security law. It is evidence for the vagueness of federal statutes generally and the extraordinary power of federal prosecutors to convict people (or extract guilty pleas) almost at will. Harvey Silverglate wrote a good book a couple of years ago called "Three Felonies a Day" -- that is what a prosecutor said any one of us could be guilty of.

Stefan Savage said...

So there is one lesson from this about the power of the state to develop convictions where there is an interest to do so. This is indeed true to a first approximation.

However, I think there is a more interesting practical lesson here that "optics" matter. How you do your research and how you present it can be every bit as important as what you did. This is uniquely true in the security space where grey areas abound.

I still recall consulting a well-known legal scholar about some research we were doing that had potential DMCA implications. I had read up on the law, commentary about it, previous applications, etc., and was expecting a set of discussions around these technical details. Instead, they said, "First rule? Don't piss them off... unless that's your actual goal." The essence of the point is that we were in a gray area, and while there was a good argument that what we were doing was legal, our risk position was dramatically improved if we didn't try to provoke a defensive reaction.

This is not really about responsible vs full disclosure in my opinion... this is about the kind of relationship you want to have with others. If you come across as someone who is entirely self-interested and who relishes the failings of the systems you analyze, then you shouldn't be surprised when they respond in a self-interested fashion and relish your failings.

Michael Mitzenmacher said...

Hi Stefan. I admit, I was hoping you'd comment on this -- since you're one of the (the?) leading experts in this area, I figured you'd have interesting stuff to say.

What worries me is that I understand (and we obviously tried to follow) the "Don't piss them off" rule in our own endeavor, but to me the reason for this is that you don't want whatever private company you're having to interface with to send their lawyers after you. Even if what you do as a researcher is legal, anyone can send lawyers after you if they feel like it, so best (from my standpoint) not to make them feel like it -- they have more resources than I do.

Here Auernheimer was convicted of multiple felonies. I don't see where the "Don't piss them off" rule comes into play. This wasn't AT&T going after the guy (although perhaps -- I don't know one way or another -- they played a helpful part in the process) -- this was the government going after him. It shouldn't matter whether he was pissing people off or not. Indeed, the point of the law should be to protect individuals from greater powers like AT&T, even if the company is pissed off at him.

The question to me is whether he did anything wrong from a legal standpoint (or was this an erroneous judgment), and if so, whether the corresponding law seems reasonable in context (the context I have in mind is obviously related to the sort of work I've done!). At this point I am not sufficiently informed on all the details of the case, but what I've read does not leave me feeling confident and assured.

Stefan Savage said...

My understanding is that this was a conviction under USC 1030 (i.e., the CFAA), which is notoriously gray. In the broadest possible reading of the statute (which revolves around the vague disjunction of "access that is unauthorized" or "exceeding authorized access"), you can't do jack to someone else's computer without their express permission or you are in violation. Now this reading is completely impractical since it could be taken to criminalize "pinging" a computer without asking "mother may I" first. However, while I hope we can agree that this case is absurd, where the line should be drawn is unclear.

I think we'd probably agree that if I exploit a buffer overflow vulnerability in your browser that this probably crosses the line. Presumably, you would have felt uncomfortable doing the Yelp research if it required you to inject code and modify control flow on one of their servers no? Well, what about something that isn't so low level? What about a SQL injection bug? Is that ok to do to someone's site? I think you'll find quite a few people who say no... and yet that's just a Web request no? Not so different from what's going on here.
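To make the "just a Web request" point concrete, here is a minimal sketch in Python, against a hypothetical example.com endpoint and parameter that I'm inventing purely for illustration (not the actual AT&T or Yelp URLs): an ordinary lookup and a textbook SQL-injection probe. On the wire they are the same kind of HTTP GET; only the bytes of one query parameter differ.

    # Two GET URLs that differ only in the value of one query parameter.
    # The endpoint and parameter names are hypothetical.
    from urllib.parse import urlencode

    BASE = "https://example.com/account_lookup"  # hypothetical endpoint

    # An "ordinary" lookup: ask for the record tied to one identifier.
    benign = BASE + "?" + urlencode({"id": "42"})

    # A textbook SQL-injection probe: structurally the identical request,
    # just a different string in the same parameter.
    probe = BASE + "?" + urlencode({"id": "42' OR '1'='1"})

    print(benign)  # https://example.com/account_lookup?id=42
    print(probe)   # https://example.com/account_lookup?id=42%27+OR+%271%27%3D%271

Nothing mechanical about the second request marks it as "unauthorized"; that judgment comes from context and intent, which is exactly the gray area.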

The problem with the unauthorized/exceeds access standard is that we don't understand how to apply this concept to running code. No one is comfortable saying that authorization is only limited by what the code allows, since then the buffer overflow attack is completely legal (since the code implicitly allows this). However, it's crazy to set the standard to be about express user intent, since how would we establish if a given server owner had foreknowledge about all the things a given protocol could do... heck, I think we know that most computer users have no idea and couldn't even answer those questions post hoc. Bottom line, we don't know how to do this mechanically, so inevitably we turn to other factors such as context, intent, and so on.
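To illustrate why "authorization is whatever the code allows" is such an awkward standard, here is a toy sketch of a lookup service (hypothetical route and fake data, written only to make the point, not modeled on any real system). The code below will cheerfully hand back the e-mail tied to any numeric id; nothing in it distinguishes the account owner from a stranger who simply increments the id. Under a code-defined standard that enumeration is authorized by construction, while under an owner-intent standard almost nothing is; neither reading gives researchers a usable line.

    # A toy, purely illustrative lookup service (fake data, hypothetical route).
    # The handler returns the e-mail for ANY id it is asked about; the code
    # itself "allows" every such request, owner and stranger alike.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    ACCOUNTS = {"1001": "alice@example.com", "1002": "bob@example.com"}  # fake data

    class LookupHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            qs = parse_qs(urlparse(self.path).query)
            email = ACCOUNTS.get(qs.get("id", [""])[0], "unknown")
            body = email.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Serves e.g. http://localhost:8000/lookup?id=1001 until interrupted.
        HTTPServer(("localhost", 8000), LookupHandler).serve_forever()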

So... what to do?

Stefan Savage said...

[continued]

Well, I think the law could be better written. But at the same time I don't expect laws to provide precise mechanistic yes/no decisions for everything we might envision because they're meant to be flexible and cheap to implement. Precedents might help (although this case, I think, unfortunately does not help). In the meantime, if you're operating in this grey zone (and I have) then you have a responsibility to act sensibly. Both for yourself (so you're not getting sued for no good reason, or worse, brought up on criminal charges) and for the community, so you don't create a situation that could define a bad precedent. Again, given the lack of a bright line standard, how does one manage risk? Well, one way is to make your intent clear and unambiguous. The optics of going after a researcher who is transparently trying to help improve security are bad -- either for corporate counsel or the USG. This is a place where your Harvard affiliation undoubtedly helps some, BTW.

I think the question of this being brought by the government vs AT&T isn't that relevant actually. Since this is a 1030 case it was going to be criminal (there is a small civil liability provision, 1030(g), but that's not what's going on here) and hence was going to be brought by the state. However, I'd be just _shocked_ if it didn't start with a complaint from AT&T. I can't see any AG bringing this case independently and without strong support from the company.

Again, I'm not arguing that this guy should have been convicted. It seems like an overkill use of the law and I hope this gets taken into account in sentencing. What I am saying is that it's a mistake for security researchers to feel that they can do anything they want. There is no special research "get out of jail free" card. The legal standard for what is fair game and what is over the line is unclear, and everyone should understand that. If you're "pushing the envelope" as it were, then it behooves you to act responsibly while doing it. This generally means seeking legal advice first, it means doing what you need to do to achieve the end and not more, and -- frankly -- it means not acting like an asshole about what you've accomplished.

Harry Lewis said...

Well, that all seems to be sound, practical advice. And yet I am really not sure where this leaves researchers. MM, for example, did consult Harvard's legal counsel (response, in brief: be careful, you are on your own), does seem to have done what he needed to do and no more, and didn't act like an asshole while doing it. And it's a fine thing that he is a serious researcher at a major university with a good personal reputation. None of the above seems to have been true of Mr. A. So the practical advice SS offers is consistent with the disparate results in these two cases. So as a lesson to students about how to pursue research and advance in the field, what SS suggests seems to have been confirmed.

But we are also teaching students how to be citizens, and I don't think it's a great lesson for all professors all the time to say to students, "Don't piss off the government, or its big corporate bedfellows, if you want to get ahead in the world." A moral question remains of how, in the last institutions on which American society can count for the unvarnished truth, we want to teach students always to self-censor and to be docile.

It remains a serious question where to make the tradeoff between acting out of our personal best interests, and risking a hormonal reaction from some federal prosecutor, or even sticking up for someone else who has run that risk and is paying a price for it. As citizens we have an obligation to point out that whether the government is pissed off should have nothing to do with whether somebody is a felon. For those who have tenure, I think that is part of our social contract: In exchange for not being fire-able for saying stuff that may piss people off, we have an obligation to say stuff that may piss people off when it is truthful and socially important.

Even restricting our field of view to academic research: if academics don't defend some of these edge-case folks, the edge will move, and the SS advice won't be good enough any more.

Stefan Savage said...

Hi Harry,
I think it is unfortunate that Harvard’s counsel was not more accommodating (we have had far better experiences at UCSD), but these kinds of differences abound in the absence of clear guidance. Similarly, we have no normalization of ethical rules in this space, and one university’s human subjects review board may routinely forbid research methods that are de rigueur elsewhere.
However, I think you took the wrong lesson from my note – which indeed was meant to be practical advice. The lesson is not "don't piss people off," but "don't piss them off for no reason." Similarly, if you need to break unjust laws in clear pursuit of moral purposes – e.g., Selma lunch counter sit-ins – then by all means one should do so. I do not think that is the situation here.

I think there are two separate questions at play here. One is about the law itself, which is unfortunately vague and was written at a time when it seems unlikely the authors envisioned the degree to which complex public inter-connected online systems would be the rule of the day. However, given this complexity I think we would tend to agree that from a pure “golden rule” standpoint, context and consequences are actually the key issues here, and not technical details such as “Hey, you can’t criminalize looking up a URL”. Indeed, if that URL request had encoded a buffer overflow or SQL injection, and/or if the damage had been one in which money had been stolen or individual lives had been affected significantly, I think the fact that the mechanism of delivery was a URL request would not stay our indignation.

The second issue is about prosecutorial discretion. The reality is that the government will never bring even a small fraction of the potential cases under CFAA that could be brought. They are expensive, they are complex, they can be challenging to win and, with few exceptions, they rarely provide the feather in the cap that makes a career-advancing move for a prosecutor. Thus, we hope that in its use of discretion the government will focus on the most important cases that could be brought. Was this the most important case? I think not. The actions were only barely premeditated, the damage was slight, and I don’t think there are responsible claims that this pair was planning greater harm that was thus averted. I think it is on this basis, more than on its legal basis, that the case is unfortunate. However, from a prosecutor’s perspective it also seems very understandable why this case happened – it was public and winnable. It was a case with a motivated complainant, the defendants were available and unsympathetic, and the evidence was comprehensive and not in contention. To sum up cynically, prosecutors like to bring cases they can win.
So do I think weev should go to jail? No. But do I think what he did was ethically sound? Not particularly. Was it illegal? It’s clearly in the grey zone. What I think is dumb about the whole thing, however, is that it could have been completely avoided. I think there is a sentiment in the computer security community that we are untouchable, and it leads people to make very bad choices. The purpose of my note was to remind people that the cowboy mentality is not a self-serving one.

Stefan Savage said...

I’d like to bring up a tangential issue here, but one that I think is related. As we have migrated systems of interest from clients, to servers, to cloud-hosted servers, we have made the job of vulnerability research legally grayer and more complex. For example, if I want to look at the security of Microsoft Windows I can buy a copy and poke at it in the privacy of my own home. If I want to look at the security of Facebook, I can still work from my own home, but I need to interact with their servers – which I do not own (and am potentially subject to their EULA). If I want to look at the security of Amazon EC2-based services, well, I need to operate from their servers and interact with their servers (and in so doing enter into an explicit contract with them). In each case I am increasingly bound by contractual agreements that may broadly prohibit a range of investigations (even around interacting with one’s own data). As a consequence, it has become increasingly difficult to do in-depth security research around production Web services. While I have no doubt about the quality of the security researchers at Google, Amazon, Microsoft, etc., I think this is an unfortunate corner for us to have turned.

We’ve had similar problems looking at criminal malware. In the old days you could get a virus and look at it on your computer. But with today’s botnets, doing similar investigations inevitably means interacting with servers run by the criminal actors, many of which may in fact be owned by innocent third parties. Once again, the gremlin of what “authorization” means rears its head. As a result, two broad factions have emerged: the faction that says “well, I give up” and concludes that you simply can’t look at such things because they may violate the letter of the law (under some interpretation), and the faction that says “I don’t care, I’m doing good” and just pushes on. This is not a particularly good situation and is likely to lead to more cases like the one under discussion.

Tor Ekeland said...

As Andrew Auernheimer's lawyer I can tell you for a fact that it was the FBI that instigated this prosecution. The alleged hack, if you can call it that, took place between June 5-8, 2010, and on June 10 the FBI was telling AT&T that they were pursuing a criminal investigation. AT&T took no steps to sue civilly, and at trial we even introduced an email from one of AT&T's lead investigators saying he didn't think a crime had been committed. The government had a field day with my client's controversial nature. At the end of the day, though, anyone in your field, and computer users in general, should be very concerned with the theory of unauthorized access that prevailed at trial, because it essentially is that unauthorized access occurs whenever a computer owner says it does, despite a complete lack of notice or security. We always thought that the chances of prevailing at the trial level were slim, and have had an eye on appealing every step of the way. This is going up to the Third Circuit and perhaps beyond, given the circuit split and hopelessly circular definitions of what constitutes unauthorized access in the federal courts. Anyhow, I found the above comments interesting and informative.