Dispel Criminal Intent with Open Communication

White Hat Hacking


Responsible security professionals pursuing legitimate goals sometimes worry that their actions will violate computer crime laws. Take, for instance, the Computer Fraud and Abuse Act: it is worded so broadly that it could be read to punish any unauthorized access to a computer that causes the computer's owner a problem.

A recent study explores the potential that white hat security professionals could be prosecuted for probing a web resource without the owner's permission, such as running a vulnerability scanner like Nikto or otherwise testing a Web 2.0 application for security weaknesses. See the Inaugural Report of the CSI Working Group on Web Security Research Law, June 11, 2007.

Good Reason to Probe?

Sometimes reputable professionals have good reason to conduct these kinds of probes. They might be surveilling a phishing site that is stealing passwords from their client's customers. Or they might be performing a public service for Internet users, in keeping with the time-honored practice among security researchers of testing popular desktop software for weaknesses.

Above-board security professionals can take a number of steps to minimize the risk of breaking the law. To commit a crime, a person generally must intend to do something wrong. A powerful way to dispel “wrongful intent” is to communicate openly what you are doing and why it is justified.

One example: if you are aggressively probing a phishing site, send or leave a message identifying yourself, saying you have reason to believe the site is phishing, explaining that you are running vulnerability tests, and so on. The message constitutes an exculpatory record.
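
To make that idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the URL, the researcher's name and contact details are invented placeholders, and the third-party requests library is assumed to be installed. The point is simply that the probe itself can carry an identifying notice (here, in the User-Agent header, so it lands in the target server's own logs) while the researcher keeps a timestamped record on his own side.

    # Minimal sketch (hypothetical values throughout): probe a suspected
    # phishing page while leaving an identifying message and keeping a
    # timestamped record of each request.
    import datetime
    import requests  # third-party HTTP library, assumed installed

    TARGET = "http://suspected-phish.example/login"  # placeholder URL
    NOTICE = ("Security researcher Jane Doe (jane@example.org), acting for "
              "an affected client; we have reason to believe this site is "
              "phishing and are running vulnerability tests.")

    # The User-Agent header carries the exculpatory message into the
    # target server's own logs.
    response = requests.get(TARGET, headers={"User-Agent": NOTICE}, timeout=10)

    # Keep a local, timestamped record of what was sent and when.
    with open("probe-log.txt", "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()}Z "
                  f"GET {TARGET} status={response.status_code}\n")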

Announce Yourself in Advance?

Another example: if you are researching a popular Web 2.0 application in order to inform and protect the public, do it in the open. Send a message to the site owner identifying yourself, describing the scope and limits of your research, and explaining that you are acting in the public interest, consistent with the established practice of independent testing of software applications. Give the site owner time to respond. Then blog about what you do and let the public see.
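
As an illustration only (every name, address, date, and parameter below is invented, and a local mail relay is assumed), such an advance notice could be sent and archived with Python's standard smtplib and email modules:

    # Hypothetical sketch: send an advance-notice email to a site owner
    # before testing, and archive a copy as part of the research record.
    # All names, addresses, and dates are invented placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "researcher@example.org"        # placeholder sender
    msg["To"] = "security@target-site.example"    # placeholder site contact
    msg["Subject"] = "Advance notice of public-interest security research"
    msg.set_content(
        "We intend to inspect your public web application between June 1 "
        "and June 15 under controlled parameters: read-only requests, no "
        "data exfiltration, rate-limited scanning.\n\n"
        "We will publish our results. We act in the public interest, "
        "consistent with the long-standing practice of independent "
        "security experts testing software applications.\n\n"
        "Researcher identity and contact: Jane Doe, jane@example.org."
    )

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

    # Archive the notice alongside the rest of the research record.
    with open("advance-notice.eml", "wb") as f:
        f.write(bytes(msg))

Publishing the same notice on a blog, as suggested above, adds a public timestamp to the record.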

These suggestions stem from the general notion that transparency and open communication are the best means to prevent a good person from being mistaken for a crook.

I grant you, these suggestions are not without controversy. There is more to this topic than I have space for here. And you should not take anything I say in a blog as legal advice or a substitute for counsel from an attorney. We discuss these and related issues in the series of SANS courses I teach on IT security law.


--Benjamin Wright

Mr. Wright teaches the law of cyber investigations at the SANS Institute.

4 comments:

  1. I posted a different comment about the report here.

    I argue that although the Report is excellent work, it overstates the problem it detects.

  2. I posted another comment here, in defense of existing computer crime law.

  3. Here is a comment I posted on the blog of Jeremiah Grossman (a member of the CSI Working Group):

    [begin quote]
    Hello, Jeremiah. This is a complex and important subject, and I salute you and the Working Group for taking it on. When I say a web researcher can change the complexion of his case by notifying people before he acts, I'm not necessarily saying he gets consent [from the web site owner before he probes the site]. Giving someone notice of what one plans to do is not the same as getting their consent.

    One example: Suppose a respected group of researchers, acting with support of a responsible group like EFF, is concerned about a vulnerability common to major web sites. Imagine the group sends a message to the sites in question (and they publicize the message), saying a. we intend to inspect your site under these controlled parameters; b. we will publish our results; c. the reason we are doing this is to promote the public interest, consistent with the long-standing tradition of respected, independent security experts testing software applications; d. the identity and contact information for each one of us is XYZ.

    If the group sends this notice and then acts, it does not have explicit consent from the web site owners.

    However, advance notice like this makes the situation for these white hat researchers very different from the situation of Cuthbert and McCarty (the two examples of convictions in this area).
    [end quote]

  4. Here is another comment I posted about the CSI Report on Jeremiah Grossman's blog:

    Jeremiah: I wish to offer another comment about the Report. I do this not to throw stones at the Working Group, for which I have much respect. Instead, I seek to foster public dialog about the important topic the Report addresses.

    The Report says, "It is true that software security researchers can get tangled in legal snares if their research methods brazenly defy copyright law or the software vendor’s end-user licensing agreement. Yet those are not criminal offenses." Although I understand the educated spirit behind that statement, I'm uncomfortable with it. A EULA for a desktop application in fact might support a criminal prosecution.

    For example, the EULA for Adobe Reader says, "The structure, organization and code of the Software are the valuable trade secrets and confidential information of Adobe Systems Incorporated and its suppliers." Further, the Adobe EULA authorizes only limited uses of the software, and security testing is not one of them. California Penal Code Section 499c(b)(1) essentially says it is a crime to use a trade secret without authorization. I suspect a prosecutor could build a case that a tester who (with evil intent) probes deep into the Adobe code is using a trade secret without authority and therefore committing a crime under 499c.

    Now, I don’t think California prosecutors will commonly go after responsible security researchers investigating Adobe Reader installed on their own machines. However, I argue that the primary things that protect responsible security researchers (whether they are inspecting applications installed on their own machines or Web 2.0 applications installed on other peoples' servers) are their intent, motives and methods and their carefulness to avoid hurting other people. I argue the protection for respected security researchers does not come so much from a legal distinction based on who owns the hardware on which the application being tested is installed. . . . I am interested to hear what you and others think ('cause I don't know everything!). –Ben Wright, hack-igations.com
