Lecture notes by
Lynette I. Millett
Revised by
Borislav Deianov
Erlingsson, Ulfar, and Fred B. Schneider. SASI Enforcement of Security Policies: A Retrospective. Proceedings of the Workshop on New Security Paradigms (Ontario, Canada), September 1999. ACM.
Almost all of the security mechanisms we have discussed this semester are EM (execution monitoring) mechanisms; access control is one example. A policy such as "service is available within three seconds" is EM-enforceable, since any violation shows up after finitely many steps of execution. The policy "eventually service is available," however, is not EM-enforceable: no finite prefix of an execution can witness a violation. We noted in the last lecture that information flow is not EM-enforceable, either. The EM-enforceable policies above are all safety properties, and every safety property can be characterized by an automaton. Here is an example of such an automaton.
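As a minimal sketch, assuming the policy is the classic "no send after a file read" (the requirement revisited below), the automaton has two states and can be written in Python as follows; the names are illustrative:

    # Security automaton for "no send after a file read".
    # State "start": no read has occurred; state "read": a read has occurred.
    # A missing transition means the automaton rejects: the policy is violated.
    TRANSITIONS = {
        ("start", "read"): "read",
        ("start", "send"): "start",
        ("read",  "read"): "read",
        # ("read", "send") is deliberately absent: send after read is forbidden.
    }

    def step(state, symbol):
        """Return the next state, or None if no transition exists (reject)."""
        return TRANSITIONS.get((state, symbol))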
Security automata can provide the basis for an EM mechanism. Each step (state, instruction) in the execution sequence of the target system is mapped to an input symbol. Each symbol is sent to the automaton before execution of the target system is allowed to continue. Implementing this architecture requires two mechanisms: input dissemination--identifying the next input symbol and sending it to the automaton--and transition--determining whether the automaton can make a transition on that input symbol.
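As a sketch of these two mechanisms, continuing the Python example above (execute stands for running one step of the target and is an assumption of the sketch):

    def monitor(symbols, execute, state="start"):
        # Input dissemination: each symbol is sent to the automaton
        # *before* the corresponding step of the target executes.
        for sym in symbols:
            next_state = step(state, sym)   # transition mechanism
            if next_state is None:
                raise RuntimeError("policy violation: target halted")
            state = next_state
            execute(sym)                    # the automaton accepted this step
        return state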
In the worst case, we would need to examine every machine-language instruction before deciding that execution of the target system can continue, which would be very expensive. In practice, we only need to send to the automaton those instructions that correspond to input symbols. We can therefore partition the instructions into two classes: those of interest to the automaton and those not, as in the sketch below.
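In the sketch, the partition is simply a set of operations that map to input symbols; all other instructions execute without consulting the automaton:

    INTERESTING = {"read", "send"}          # operations that produce input symbols

    def run(operations, execute, state="start"):
        for op in operations:
            if op in INTERESTING:           # forward to the automaton
                state = step(state, op)
                if state is None:
                    raise RuntimeError("policy violation: target halted")
            execute(op)                     # uninteresting ops run unchecked
        return state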
For example, as we have seen in UNIX, all security is attached to kernel operations (reads and writes on files). In this setting, input dissemination is already implemented by the hardware: kernel operations trap. Operating system security based on kernel operations can therefore be enforced very efficiently. Another example is security based on segmentation and virtual memory. Here, too, we need to forward only those input symbols that might cause automaton transitions, and careful initialization of the segment and page tables can cause traps to coincide with input-symbol production.
One problem that arises is the possible difference between an (informal) security requirement and its formal specification. For example, the requirement may be that no messages be sent after a read occurs. The formal specification becomes "no send after read." However, this disregards the fact that messages can be sent by means other than 'send.' Similarly, a requirement might be "no division by 3." What should be done with an expression like (3*x)/9? Resolving these issues is difficult, since there is no mathematical way to go from an informal description to a formal policy specification. We can only gain confidence in a specification by deriving expected logical consequences from it or by testing it.
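The gap is easy to exhibit. In the following sketch (a hypothetical, purely syntactic check, not anything from the paper), a monitor that looks for a literal divisor of 3 flags x / 3 but misses (3 * x) / 9, even though the latter divides by 3 in effect:

    import ast

    def divides_by_3(expr):
        """Syntactic check: does any division have the literal 3 as divisor?"""
        for node in ast.walk(ast.parse(expr, mode="eval")):
            if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
                if isinstance(node.right, ast.Constant) and node.right.value == 3:
                    return True
        return False

    print(divides_by_3("x / 3"))        # True:  caught
    print(divides_by_3("(3 * x) / 9"))  # False: missed, though equal to x / 3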
Another consideration is what we achieve by stopping the execution of the offending program. For example, consider the following policy: a user can access a file for only t seconds. When t seconds have elapsed and the policy is about to be violated, we can interrupt execution, but the user still has access to the file -- i.e., the policy is violated anyway. We have to be aware of our limited control over the system; in this particular example, we cannot stop time.
The policies discussed above are restrictive. Can we get a more general mechanism? Suppose we encode the security automaton as a boolean-valued subroutine. If we can detect input symbols by examining the source code, then we can insert calls to this subroutine wherever an input symbol is produced. This is a more general instance of SFI (software fault isolation), and it is the approach taken by SASI in the paper cited above.
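A minimal sketch of the idea, reusing step from the earlier example (the insertion is done by hand here, whereas SASI performs it automatically; the target's names are illustrative):

    STATE = "start"

    def check(symbol):
        """The security automaton encoded as a boolean-valued subroutine."""
        global STATE
        next_state = step(STATE, symbol)
        if next_state is None:
            return False                # reject: the policy would be violated
        STATE = next_state
        return True

    # The rewritten target: a call to check() is inserted before each
    # operation that produces an input symbol.
    def target(fd, sock):
        assert check("read")
        contents = fd.read()
        assert check("send")            # fails: a read has already occurred
        sock.send(contents)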