Science of Security
The Claremont Resort, Berkeley CA
Monday, 17 November - Tuesday, 18 November 2008
Starting Questions
One goal of the workshop is to identify important questions about computer security that can be answered scientifically, as well as to understand the limits of what can be understood scientifically about security. Below are some examples of the types of questions we have in mind. The list is not meant to be complete, and many of the questions are not yet refined to the level needed for a scientific question; rather, it is intended to give some insight into the types of questions we hope to define and pursue. Note that in many cases the answer may be that there is no scientific way to address a given question; in such cases, a strong negative result is still valuable in framing the limits of a particular direction and in suggesting alternatives that may be answerable.
Attack Models. Modern cryptography has developed a set of formal attack models that precisely describe attacker capabilities and enable formal reasoning about the strength of cryptographic algorithms. Are there analogous formal attack models for computer systems? Can we use them to reason formally about the resilience of systems to attackers with different capabilities?
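For illustration, one standard example from the cryptographic literature (included here only as background, not as part of the workshop questions) is indistinguishability under chosen-plaintext attack (IND-CPA). The model fixes the attacker's capability, namely oracle access to encryption under an unknown key, and its goal, namely guessing which of two chosen messages the oracle encrypts. In experiment b, the adversary A queries a left-or-right oracle Enc_k(LR(m0, m1, b)) and outputs a guess; the scheme E is considered secure when the advantage

    Adv_E^{ind-cpa}(A) = Pr[ Exp_E^{ind-cpa-1}(A) = 1 ] - Pr[ Exp_E^{ind-cpa-0}(A) = 1 ]

is negligible for every efficient A. The open question above is whether system-level attacks (memory corruption, protocol misuse, misconfiguration) admit comparably precise models.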
System Resilience. Given a system P and an attack class A, is there a way to:
- Prove that P is not vulnerable to any attack in A?
- Construct a system P' that behaves similarly to P except that it is not vulnerable to any attack in A? (This requires a clear notion of what "similarly" means.)
- Can we define formally what "more secure" means? (A minimal sketch of one possible formalization follows this list.)
- Are there meaningful quantitative measures of the security of a system?
- What scientific principles can be used to determine if system A is more or less secure than system B?
- What scientific principles can be used to determine if system A is more secure than system f(A) (where f is some function that transforms an input program or system definition)?
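As a starting point for the questions above about defining and comparing security, here is a minimal sketch of one possible formalization; it is offered as an assumption-laden illustration, not an established definition. Writing succeeds(a, P) for the event that attack a violates P's security policy:

    Secure(P, A) \triangleq \forall a \in A .\ \neg succeeds(a, P)
    P \sqsupseteq Q \triangleq \forall A .\ Secure(Q, A) \Rightarrow Secure(P, A)

Under this reading, "more secure" is only a partial order over systems, and any quantitative measure would have to refine it, for example by weighting the attacks in A by cost or likelihood; whether such a refinement can be made scientifically meaningful is exactly what these questions ask.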
Program Analysis. Given a program, what can program analysis determine about its security properties? In particular:
- How is program analysis for security different from program analysis for bug finding and program analysis for correctness?
- What are the fundamental limits of dynamic analysis and of static analysis, respectively, with respect to establishing security properties? How can these limits be overcome using hybrid analyses or non-universal (restricted) programming languages? (A small illustrative example follows this list.)
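To make the contrast concrete, here is a minimal Python sketch of our own (the function names are illustrative, not taken from any tool or from the workshop materials) showing a property, "no untrusted string reaches the database sink unsanitized," that is awkward for both purely static and purely dynamic analysis:

    import re

    def looks_safe(s):
        # Intended sanitizer: accept only alphabetic strings.
        return re.fullmatch(r"[A-Za-z]+", s) is not None

    def run_query(sql):
        # Stand-in for a database call; this is the security-relevant "sink".
        print("executing:", sql)

    def handle(name):
        # `name` is untrusted input (the "source").
        if looks_safe(name):
            run_query("SELECT * FROM users WHERE name = '" + name + "'")

    if __name__ == "__main__":
        handle("alice")        # exercised path: sink reached with a benign value
        handle("x'; DROP --")  # rejected by looks_safe(), so the sink is not reached

A sound static analysis must establish that looks_safe() blocks every dangerous value for all possible inputs; in general that requires deciding properties of arbitrary code, so the tool over-approximates and may report false alarms. A dynamic analysis only observes the executions it is given, so if no test input drives a dangerous value past the sanitizer, it reports nothing. Hence the interest in hybrid analyses and in restricted languages where a sanitizer's behavior can be characterized exactly.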
Partial Trust. Current approaches attempt to divide systems into completely untrusted components and a minimal trusted computing base (TCB) on which all security properties depend. In most systems it seems too difficult to make the TCB small enough to be completely secure, yet it is overly pessimistic to assume nothing about the other components. Are there ways to design systems with less absolute requirements on components, and with many gradations of trust rather than a binary division?
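As one way to picture such gradations, here is a minimal Python sketch (the levels and the policy are illustrative assumptions, not a proposal from the workshop): components carry a trust level drawn from an ordered set, and a policy check replaces the binary trusted/untrusted test.

    from enum import IntEnum

    class Trust(IntEnum):
        # Ordered trust levels; finer gradations than trusted vs. untrusted.
        UNTRUSTED = 0
        SANDBOXED = 1
        AUDITED = 2
        TCB = 3

    def may_invoke(caller, operation_requires):
        # A component may perform an operation only if its trust level is at
        # least the level the operation demands.
        return caller >= operation_requires

    # Example policy decisions.
    assert may_invoke(Trust.AUDITED, Trust.SANDBOXED)    # allowed
    assert not may_invoke(Trust.SANDBOXED, Trust.TCB)    # denied

The scientific questions are then which gradations are meaningful, what guarantees each level actually provides, and how those guarantees compose when components at different levels interact.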
One outcome of the workshop will be a refined version of this list, identifying important scientific questions about computer security, what form a satisfactory answer to each might take, and preliminary ideas for approaches to addressing it.