Cybersecurity Conference an Overwhelming Success

Computer security researchers, software vendors, and legal experts come together to discuss how vulnerability disclosure can best promote security.
[Conference logo: Cyber Security, Privacy and Disclosure]

What is significant about the seven potential security holes in Internet Explorer that Chinese researcher Liu Die Yu recently discovered is not so much the holes themselves as the way in which Liu made the world aware of them. He released details of the flaws on the public Bugtraq e-mail list, meaning that anyone with an Internet connection could immediately read his findings. Wary of the public release of such sensitive information, Microsoft Security Program Manager Stephen Toulouse told internetnews.com: “We believe the commonly accepted practice of reporting vulnerabilities directly to a vendor serves everyone’s best interests, by helping to ensure that customers receive comprehensive, high-quality patches for security vulnerabilities with no exposure to malicious attackers while the patch is being developed.” In other words, Toulouse reiterated a theme that had served as the locus of a day of heated, productive debate at Stanford Law School one week earlier: how, when, and where independent researchers should disclose potentially damaging information about software vulnerabilities.

On November 23, twenty-four distinguished panelists from academia and business gathered at the Center for Internet and Society’s Conference on CyberSecurity, Research, and Disclosure to discuss the relationship between computer security, privacy, and the disclosure of information about security vulnerabilities.

Jennifer Granick, Executive Director of the Center for Internet and Society, argued that computer security is a democratic concern: that it should be about holding governments and companies responsible for the software they provide, and not just about fixing a handful of bugs.

At the beginning of the day, it appeared that all discussions would come down to a fundamental disagreement between two camps. Members of the “full disclosure” camp would argue that the prompt and public disclosure of security vulnerabilities encourages the rapid, industry-wide development of patches for common security holes and therefore benefits everyone interested in effective risk management. Skeptics would counter that the public availability of such vital information endangers companies and agencies by adding to the knowledge base of wrongdoers.

But as the day wore on, it became clear that few panelists or audience members fit neatly into either group. Instead, most agreed that it was important for a hacker or researcher to notify a software company of a potential vulnerability before releasing that news to the public. At the same time, many, including Chris Wysopal (of @stake), acknowledged that if a software vendor proves unresponsive to the “private notification process,” as is often the case, then the independent researcher may have good reason to publicly release her proof-of-concept code even before a patch for the underlying vulnerability has been prepared.

The conference turned on a series of questions that forced audience members and panelists alike to delve ever deeper into an embedded set of issues. There were few questions with self-evident answers, and few possibilities for universal solutions. Still, many came into the conference with similar goals. Granick, of the Stanford Law School Center for Internet and Society (CIS), led off the first session with a question that crystallized the impetus behind all of the day’s discussions:

When does disclosure best promote security and minimize exploitation? How much information should be disclosed, at what point in time, and to whom?

Later questions explored the motivations of the disclosers. In his session, CIS Fellow Chris Sprigman asked panelists how to adequately compensate independent researchers for the valuable service they provide to vendors and customers, while simultaneously encouraging them to report vulnerabilities in a responsible fashion. He then pressed panelists on the commercialization of security information, drawing out both a hacker’s perspective (represented by Simple Nomad of the Nomad Mobile Research Centre) and a vulnerability coordinator’s perspective, and asked whether the reporting of security holes should instead be a noncommercial academic or governmental function.

Many of the afternoon sessions focused on practices and policies that might better facilitate communication between vendors and researchers. Panelists dealt with practical questions about what researchers and vendors should actually do when faced with the discovery/disclosure of a security hole. They also contemplated whether a proposed set of best practices needed to be adapted to the needs of small vendors, ISPs, and website owners.

Speakers who concluded that it was essential for programmers themselves to adhere to secure practices—so that security vulnerabilities would never arise in the first place—found themselves faced with the question: How do you motivate the vendor to release more secure software without crippling innovation?

[Photo: Lauren Gelman, Assistant Director of the Center for Internet and Society and moderator of the panel “Policies and Practices Encouraging Patch Installation” at the Cybersecurity conference.]

Matt Blaze, from AT&T Labs, summed up this problem with an intriguing metaphor from another realm of virus-fighting. “Antibiotics helped head off the problem of corpse disposal. Similarly, good secure programming practices can help head off the problem of intrusion recovery. However, just as we don’t completely understand physics, or biology, or other aspects of life, we don’t yet completely understand computer code,” said Blaze.

The conference ended with a discussion of the liabilities that researchers and vendors alike might face after the disclosure (or lack thereof) of security vulnerabilities. The final panel asked: What role should legal rules play, and how can the law help or hurt security in the area of vulnerability disclosure? One nearly universal response was that judges will face a daunting task in working through questions of liability and responsible disclosure. Many of the individuals who attended the November 2003 conference on CyberSecurity, Research, and Disclosure will doubtless play a key role in shaping a reasonable legal framework for these issues.

******************************************************

Engaging in an act of full public disclosure itself, the Center for Internet and Society has posted an unabridged audio recording of the day’s proceedings on its website. To further investigate the ideas broached in this article, please check out the conference archives at http://cyberlaw.stanford.edu/security.

Also, if the topics covered at the Conference on Cybersecurity, Research, and Disclosure are of interest to you, please consider registering for the Stanford Law School symposium on Securing Privacy in the Internet Age, which will take place on March 12–13, 2004. The topic of the symposium is: What legal regimes or market initiatives would best prevent the unauthorized disclosure of private information while also promoting business innovation? Register for this conference at http://cyberlaw.stanford.edu/privacysymposium.