NSF/IARPA/NSA Workshop on the
Science of Security
The Claremont Resort, Berkeley CA
Monday, 17 November - Tuesday, 18 November 2008


Starting Questions

One goal of the workshop is to identify important questions about computer security that can be answered scientifically, as well as to understand the limits of what can be known scientifically about security.

Below are some examples of the types of questions we have in mind. The list is not meant to be complete, and many of the questions are not yet refined to the level needed for a scientific question; rather, it is intended to give some insight into the types of questions we hope to define and pursue. Note that in many cases the answer may be that there is no scientific way to address a given question; in such cases, a strong negative result is still valuable, both in framing the limits of a particular direction and in suggesting alternatives that may be answerable.

Attack Models. Modern cryptography has developed a set of formal attack models that precisely describe attacker capabilities and enable formal reasoning about the strength of cryptographic algorithms. Are there analogous formal attack models for computer systems? Can we use them to reason formally about the resilience of systems to attackers with different capabilities?
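As a concrete, if simplified, illustration of what such a formal attack model looks like, the following Python sketch encodes the IND-CPA ("indistinguishability under chosen-plaintext attack") game from cryptography. The toy XOR cipher and the ReplayAdversary are invented placeholders, not a real cryptosystem or a real attack; the point is how the game pins down exactly what the attacker is allowed to do.

import secrets

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy deterministic cipher, used only to give the game something to run against.
    return bytes(k ^ m for k, m in zip(key, msg))

def ind_cpa_game(adversary, n_bytes: int = 16) -> bool:
    # One round of the IND-CPA game; returns True if the adversary guesses correctly.
    key = secrets.token_bytes(n_bytes)

    def encrypt(m: bytes) -> bytes:          # the chosen-plaintext oracle
        return xor_encrypt(key, m)

    m0, m1 = adversary.choose_messages(encrypt)
    b = secrets.randbits(1)                  # challenger's secret coin flip
    challenge = encrypt(m0 if b == 0 else m1)
    return adversary.guess(encrypt, challenge) == b

class ReplayAdversary:
    # Wins against the toy cipher because its encryption is deterministic.
    def choose_messages(self, encrypt):
        self.m0, self.m1 = b"\x00" * 16, b"\xff" * 16
        return self.m0, self.m1

    def guess(self, encrypt, challenge):
        return 0 if challenge == encrypt(self.m0) else 1

wins = sum(ind_cpa_game(ReplayAdversary()) for _ in range(1000))
print(f"adversary win rate: {wins / 1000:.2f}")  # ~1.00: the toy cipher fails this model

The open question above is whether analogous games can be defined for whole systems with the same precision.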

System Resilience. Given a system P and an attack class A, is there a way to establish or measure the resilience of P to attacks in A?

Comparability/Metrics. Given two systems, are there meaningful ways to compare their security? For example, are there metrics that would allow us to say that one system is more secure than another against a given class of attacks?

Refinement. Formal methods have developed refinement techniques where correctness properties can be reasoned about as abstract designs are transformed through successive steps into concrete implementations. Is there a way to go from a design to an implementation with assurance that the implementation preserves the security properties of the design?
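As a rough illustration of the refinement question, the Python sketch below models a design and an implementation as finite sets of event traces (an assumption made purely for illustration; real formal-methods tooling uses richer models) and checks the standard trace-refinement condition. The closing comment notes why refinement alone need not preserve security properties.

def refines(impl_traces: set, spec_traces: set) -> bool:
    # Trace refinement: every behavior of the implementation is allowed by the spec.
    return impl_traces <= spec_traces

# Abstract design: nondeterministic; either branch is an acceptable behavior.
spec = {("read", "branch_a", "write"),
        ("read", "branch_b", "write")}

# Concrete implementation: resolves the nondeterminism to branch_a only.
impl = {("read", "branch_a", "write")}

print(refines(impl, spec))  # True: functional correctness carries over...
# ...but if the implementation's choice of branch depends on a secret, it can
# leak information even though it refines the design, which is one reason
# carrying security properties through refinement is not automatic.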

Program Analysis. Given a program, what can program analysis determine about the security properties of the program?
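As one small example of the kind of answer program analysis can give, here is a Python sketch of forward taint propagation over a straight-line program; the program representation, the SOURCE marker, and the db_execute sink are invented for illustration.

program = [
    ("assign", "user_input", ["SOURCE"]),                   # data from an untrusted source
    ("assign", "greeting",   []),                           # a constant
    ("assign", "query",      ["greeting", "user_input"]),   # built from tainted data
    ("call",   "db_execute", ["query"]),                    # security-sensitive sink
]

def tainted_vars(program):
    # Forward pass: a variable is tainted if it is built from a source or a tainted variable.
    tainted = set()
    for op, target, deps in program:
        if op == "assign" and ("SOURCE" in deps or tainted & set(deps)):
            tainted.add(target)
    return tainted

def find_violations(program):
    # Report every sensitive call that may receive attacker-controlled data.
    tainted = tainted_vars(program)
    return [(target, deps) for op, target, deps in program
            if op == "call" and tainted & set(deps)]

print(tainted_vars(program))     # {'user_input', 'query'} (order may vary)
print(find_violations(program))  # [('db_execute', ['query'])]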

Composition. Given a set of components with known properties, is there a way to assure and reason about security properties of a system composed of these components? Is it possible to design components that behave securely regardless of how they are composed?
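The difficulty is easy to see even in a toy setting. In the Python sketch below (the salary scenario and both functions are invented for illustration), each component releases only a deliberately lossy view of a secret, yet the composed system reveals the secret exactly.

SECRET_SALARY = 123_456

def coarse_report(salary: int) -> int:
    # Component 1: releases the salary rounded down to the nearest 1000.
    return (salary // 1000) * 1000

def remainder_report(salary: int) -> int:
    # Component 2: releases only the salary modulo 1000.
    return salary % 1000

a = coarse_report(SECRET_SALARY)      # 123000
b = remainder_report(SECRET_SALARY)   # 456

# Each output alone is partial, but together they reconstruct the secret exactly.
print(a + b == SECRET_SALARY)         # True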

Partial Trust. Current approaches attempt to divide systems into completely untrusted components and a minimal trusted computing base (TCB) on which all security properties depend. In most systems it is too difficult to make the TCB small enough to be completely secure, yet overly limiting to assume nothing about the other components. Are there ways to design systems with less absolute requirements on components, dividing them among many gradations of trust rather than two?
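One way to picture such a design, sketched below in Python with invented trust levels and operations, is to replace the binary trusted/untrusted split with an ordered set of trust levels and a policy stating the minimum level each operation requires.

from enum import IntEnum

class Trust(IntEnum):
    UNTRUSTED = 0
    SANDBOXED = 1
    AUDITED = 2
    TCB = 3

# Minimum trust level required to perform each operation (illustrative policy).
REQUIRED = {
    "render_html": Trust.UNTRUSTED,
    "read_user_file": Trust.SANDBOXED,
    "open_socket": Trust.AUDITED,
    "load_kernel_module": Trust.TCB,
}

def allowed(component_level: Trust, operation: str) -> bool:
    # A component may perform an operation only if its level meets the requirement.
    return component_level >= REQUIRED[operation]

print(allowed(Trust.SANDBOXED, "read_user_file"))  # True
print(allowed(Trust.SANDBOXED, "open_socket"))     # False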

One of the outcomes of the workshop will be a refined list of questions, identifying important scientific questions about computer security and what form a satisfactory answer might take, along with preliminary ideas for how to approach each question.