Principles

There is no such thing as "perfect security". That leads us to reflect on the approach we take to security:

An emerging approach is to treat security the way we treat public health: educate the public, disseminate treatments (such as vaccines), and track outbreaks. This view of cybersecurity as a public good was articulated in [Mulligan and Schneider 2011].

Most of the rest of this course concentrates on prevention. Here are the underlying principles for building secure systems that prevent attacks. We'll continue to see examples of these throughout the semester, so don't worry if they seem a bit abstract now. Most of these principles were first articulated in [Saltzer and Schroeder 1975].

ACCOUNTABILITY: Hold principals responsible for their actions. This is a principle behind physical security, and it holds for computer security, too. Consider a bank vault: it has a lock, keys, and a video camera. The lock and keys control who can open the vault; the camera records who actually did.

In the real world, we don't make perfect locks or keys or cameras. Instead, we do risk management. We buy insurance. (It's cheaper than building perfect locks, etc.)

Mechanisms for accountability are separated into three classes:
  - Authorization: mechanisms that govern whether a requested action is permitted.
  - Authentication: mechanisms that determine which principal is responsible for a requested action.
  - Audit: mechanisms that record actions so they can later be reviewed and attributed.

Together these are known as the Gold Standard [Lampson 2000], because they all begin with Au, the atomic symbol for gold. Use these terms carefully! People frequently confuse authorization and authentication.
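A small sketch (in Python; the function and object names are made up, not from any particular system) of how the three Au's might show up in the handling of a single request:

    import logging

    audit_log = logging.getLogger("audit")

    def handle_request(credentials, operation, authenticate, policy):
        # Authentication: establish which principal is making the request.
        principal = authenticate(credentials)
        # Authorization: decide whether the policy permits the operation.
        allowed = policy.permits(principal, operation)
        # Audit: record the decision so the principal can be held accountable.
        audit_log.info("principal=%s operation=%s allowed=%s",
                       principal, operation, allowed)
        if not allowed:
            raise PermissionError(operation)
        return operation.execute()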

COMPLETE MEDIATION: Every operation requested by a principal must be intercepted and determined to be acceptable according to the security policy. The component that does the mediation is called a reference monitor. Reference monitors should be tamperproof and transparently correct. Time-of-check to time-of-use (TOCTOU) attacks exploit vulnerabilities arising from failure to adhere to this principle.
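For instance, here is a minimal sketch of a TOCTOU race and one way to close it (Unix-specific Python; the file-serving scenario is illustrative). The fix makes the check and the use refer to the same object:

    import os
    import stat

    def read_if_allowed_vulnerable(path):
        # Time of check.
        if os.access(path, os.R_OK):
            # Time of use: an attacker may have swapped `path` for a symlink
            # to a file the policy would forbid, so the earlier check no
            # longer mediates the operation actually performed.
            with open(path) as f:
                return f.read()
        raise PermissionError(path)

    def read_if_allowed_mediated(path):
        # Open first (refusing to follow symlinks), then apply the policy
        # to the object that was actually opened.
        fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
        try:
            info = os.fstat(fd)
            if not stat.S_ISREG(info.st_mode):
                raise PermissionError(path)
            return os.read(fd, info.st_size)
        finally:
            os.close(fd)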

LEAST PRIVILEGE: A principal should have the minimum privileges it needs to accomplish its desired operations. Experienced system administrators know not to log in as root for routine operations; otherwise, they might accidentally misuse their root privileges and wreak havoc. Likewise, a web browser doesn't need full access to all files on the local filesystem, and a web front-end doesn't need full write access to a database back-end for most of its operation.
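For example, a Unix daemon that needs root only to bind a privileged port can drop to an unprivileged account immediately afterwards. A minimal sketch in Python (the www-data account name is an assumption):

    import grp
    import os
    import pwd

    def drop_privileges(user="www-data", group="www-data"):
        # Keep the minimum privileges needed: give up root once the only
        # root-requiring step (e.g., binding port 80) is done.
        if os.getuid() != 0:
            return                                 # already unprivileged
        os.setgroups([])                           # shed supplementary groups
        os.setgid(grp.getgrnam(group).gr_gid)      # drop group first, while still root
        os.setuid(pwd.getpwnam(user).pw_uid)       # then give up root itself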

FAILSAFE DEFAULTS: The presence of privilege, rather than the absence of prohibition, should be the basis for allowing an operation. It's safer to forget to grant a privilege (in which case the principal complains and the omission gets fixed) than to accidentally grant a privilege (in which case the principal has an opportunity to exploit it). For example, a firewall should drop traffic by default, forwarding only packets that some rule explicitly permits.
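A minimal sketch of a failsafe-default access check (the principals and objects are made up): anything not explicitly granted is denied, so a forgotten rule fails closed rather than open.

    # Explicit grants; everything else is implicitly denied.
    GRANTS = {
        ("alice", "read",  "report.txt"),
        ("alice", "write", "report.txt"),
        ("bob",   "read",  "report.txt"),
    }

    def is_allowed(principal, operation, obj):
        # Failsafe default: absence of a grant means "deny".
        return (principal, operation, obj) in GRANTS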

SEPARATION OF PRIVILEGE: There are two senses in which this principle can be understood: 1. Different operations should require different privileges. 2. Privileges for an operation should be disseminated amongst multiple principals. (Separation of Duty)

Regarding the first sense: in practice, this principle is difficult to implement fully. Do you really want to manage rights for every object, operation, and principal in a software system? There are millions of them, and you'll get something wrong. So we naturally do some bundling of privileges.

Regarding the second sense, the goal is prevention of large (potentially catastrophic) fraud and error. Two bank tellers might be required in order to open a vault or disburse a large amount of cash. Two officers might be required in order to launch a missile.
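A minimal sketch of separation of duty in code (the threshold and the objects' attributes are illustrative): a large disbursement executes only after two distinct principals have approved it.

    LARGE_AMOUNT = 10_000  # illustrative threshold

    def disburse(transfer, approvals):
        # Separation of duty: no single principal can authorize a large
        # disbursement on their own.
        approvers = {a.approver for a in approvals if a.verified}
        if transfer.amount >= LARGE_AMOUNT and len(approvers) < 2:
            raise PermissionError("large disbursements require two distinct approvers")
        transfer.execute()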

DEFENSE IN DEPTH: Prefer a set of complementary mechanisms over a single mechanism. Complementary means the mechanisms overlap in what they protect yet fail independently: an attack that defeats one mechanism does not thereby defeat the others.

For example, your bank might use both a password and a hardware token to authenticate customers. Your apartment building might have multiple door locks and a security system. And your university IT department might use firewalls and virus scanners to prevent the spread of malware.
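A minimal sketch of the first example (check_password and verify_token_code stand in for whatever verification the bank actually uses): both factors must succeed, so a stolen password alone is not enough.

    def authenticate_customer(user, password, token_code,
                              check_password, verify_token_code):
        # Complementary mechanisms: an attacker must defeat both the
        # "something you know" and the "something you have" factor.
        knows_secret = check_password(user, password)
        has_token = verify_token_code(user, token_code)
        return knows_secret and has_token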

ECONOMY OF MECHANISM: Prefer mechanisms that are simpler and smaller. They're easier to understand and easier to get right, and it's easier to construct evidence of trustworthiness for small, simple things. In any system, there's some set of mechanisms that implements the core, critical security functionality and hence must be trusted. That set is called the trusted computing base (TCB). Economy of Mechanism says to keep the TCB small.

OPEN DESIGN: Security shouldn't depend upon the secrecy of design or implementation. Assume the enemy knows the system: assume the enemy knows the encryption algorithm, but not the key; assume the enemy knows the model of a lock, but not the cuts made in the key. In cryptography, a similar idea is known as "Kerckhoffs's Principle." Neophytes frequently violate this principle. Note that there's nothing wrong with keeping a design secret if security can be established by other means; that's just defense in depth. The opposite of this principle is "security by obscurity".
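For instance, a message-authentication scheme built on the public HMAC-SHA256 algorithm keeps only the key secret; publishing the code costs nothing. A minimal sketch:

    import hashlib
    import hmac
    import os

    KEY = os.urandom(32)   # the only secret; the algorithm itself is public

    def tag(message: bytes) -> bytes:
        return hmac.new(KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, candidate_tag: bytes) -> bool:
        # Constant-time comparison; security rests on the key, not on
        # hiding the design or the code.
        return hmac.compare_digest(tag(message), candidate_tag)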

PSYCHOLOGICAL ACCEPTABILITY: Minimize the burden of security mechanisms on humans. Although it's rarely possible to make that burden zero, it shouldn't be too much more difficult to complete an operation in the presence of security mechanisms than it would be in their absence. Otherwise, humans will be tempted to create end-runs around those mechanisms.

Exercises

Read the abstract and introduction of The Security Architecture of the Chromium Browser (Barth et al., 2008).

  1. Identify how each of the following principles is manifest in the design of Chromium: Separation of Privilege, Complete Mediation, Least Privilege, Open Design.

  2. We previously defined a threat as a motivated, capable adversary. In the Chromium threat analysis, what are the motivations of the threat? Are there any motivations the threat is assumed not to have? What are the capabilities of the threat? Are there any capabilities the threat is assumed not to have?