Lecture notes by
Lynette I. Millett
Revised by Jed Liu
Computer security is just a small part of the real problem, and we will spend the rest of the semester studying that small part; today I'd like to look at the bigger picture. Network information systems (NIS) are becoming more and more prevalent; examples include the public telephone system, the power grid, and so on. At the same time, these systems are neither trustworthy nor dependable, and the alarming trend is that people, companies, and governments are becoming ever more dependent on them. Needless to say, as computer scientists we are in a position to help.
For the purposes of our discussion, we say an NIS is trustworthy when the system does what its users intend, in spite of:
In other courses, you may have been taught to tackle a problem by solving it in pieces. In building a trustworthy system, however, such a divide-and-conquer approach is not enough: trustworthy components do not necessarily result in a trustworthy system. For instance, if two machines are kept in physically secure locations but communicate over a network, an attacker wiretapping that network can still compromise the system as a whole. This is known as the "weakest link phenomenon."
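To make the weakest-link point concrete, here is a small Python sketch; the hosts, the login message, and the "wiretapper" in the middle are all hypothetical. Even if both endpoints are perfectly secure, an unencrypted channel between them exposes, and allows tampering with, everything that crosses it.

    # Sketch: two "physically secure" hosts exchange a password in plaintext.
    # The hosts themselves may be fine, but any machine on the network path
    # sees -- and can alter -- the bytes on the wire.

    def client_send(message: str) -> bytes:
        # No encryption: the bytes on the wire are the message itself.
        return message.encode("utf-8")

    def wiretapper(packet: bytes) -> bytes:
        # A router, ISP, or attacker on the path observes the raw bytes.
        print("wiretapper saw:", packet.decode("utf-8"))
        return packet  # it could just as easily modify them

    def server_receive(packet: bytes) -> str:
        return packet.decode("utf-8")

    packet = client_send("login alice password=hunter2")
    packet = wiretapper(packet)  # the weakest link: the untrusted network
    print("server got:", server_receive(packet))

The point is not the code but the architecture: the trustworthiness of the whole is bounded by the least trustworthy component, here the network link.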
Building software to be trustworthy is qualitatively different from more typical software development concerns. Most of your experience in building software has been with functional requirements--what outputs must be produced for given inputs. Requirements involving attacks, on the other hand, are decidedly non-functional. We are not told what attacks to expect, so the specification of the problem is inherently incomplete. By their very nature attacks are unpredictable and cannot be fully formalized: any attempt at formalization could rule out possible attacks and would therefore be incorrect.
System security is an intrinsic property of the system, and can often be described in terms of "negative properties" (e.g., this system cannot be broken into). Such negative properties are almost always difficult to prove. The problem is that we have an open system: we cannot predetermine the setting in which the system will reside. Because of this, we have to be careful not to make unwarranted assumptions when designing or implementing a system, since such assumptions, if incorrect, may become system vulnerabilities.
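As a small illustration of how an unstated assumption turns into a vulnerability, consider the following hypothetical Python file-server sketch. The designer implicitly assumes that clients only request files inside a public directory; in an open system nothing enforces that, so the assumption must be checked explicitly.

    import os

    PUBLIC_DIR = "/srv/public"   # hypothetical directory of public files

    def read_file_naive(requested_name: str) -> bytes:
        # Implicit assumption: requested_name is a plain filename.
        # Nothing enforces it, so "../../etc/passwd" escapes the directory.
        with open(os.path.join(PUBLIC_DIR, requested_name), "rb") as f:
            return f.read()

    def read_file_checked(requested_name: str) -> bytes:
        # Make the assumption explicit and enforce it rather than trusting input.
        full_path = os.path.realpath(os.path.join(PUBLIC_DIR, requested_name))
        if not full_path.startswith(PUBLIC_DIR + os.sep):
            raise ValueError("request escapes the public directory")
        with open(full_path, "rb") as f:
            return f.read()

The naive version is not "wrong" for the inputs its designer had in mind; it is wrong for the inputs an open environment will eventually deliver.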
We said that network information systems are untrustworthy and also becoming more and more prevalent. It may be useful to ask why they are becoming more prevalent and what is driving that process.
In the private sector, organizations today are driven by the need to operate faster and more efficiently. Profit margins are thinner and expectations are high. (For example, consider "just in time" manufacturing, wherein inventory and material are not warehoused but instead shipped to arrive exactly when needed.) In this kind of environment, timely information (who needs what, and when?) becomes essential; hence the need for network information systems.
In the pseudo-public sector there is a new climate of deregulation. Less regulation produces competition, which produces a need for low prices. Companies thus need to lower expenses, and one way to do this is to decrease excess capacity (power reserves, bandwidth, etc.). Lower excess capacity creates the need for finer control over the capacity that remains, which requires a good information system. Lower excess capacity also reduces trustworthiness by making the system less stable: excess capacity can, in some cases, take up the slack in the event of a failure, and with less "slack" it becomes more likely that a "small" failure will have large repercussions.
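A back-of-the-envelope sketch, with entirely made-up numbers, illustrates the slack argument. Ten identical plants share a fixed load; when one fails, the survivors must pick up its share, and if their margins are thin they overload and trip in turn.

    def cascade(num_plants: int, capacity_each: float, total_load: float) -> int:
        """Return how many plants survive after one plant is taken offline."""
        working = num_plants - 1                   # the initial "small" failure
        while working > 0 and total_load / working > capacity_each:
            working -= 1                           # an overloaded plant trips too
        return working

    # Baseline load is 90 per plant. With 33% headroom the failure is absorbed;
    # with 10% headroom the same failure cascades into a total blackout.
    print(cascade(10, capacity_each=120, total_load=900))   # -> 9
    print(cascade(10, capacity_each=99, total_load=900))    # -> 0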
Also in an effort to cut costs, companies often look to outsourcing. For example, it is often the case that janitorial tasks are outsourced to other companies. This can lead to less trustworthy people having physical access to systems.
Another result of the need to lower expenses and attract customers in deregulated industries is the introduction of new and complicated features to existing services (e.g., in the telephone industry, consider things like call forwarding and *69). More features lead to more complexity in the system, which in turn results in more opportunities for unpredictable behavior.
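The classic version of this is feature interaction: features that are simple and sensible in isolation combine in ways nobody specified. The following toy Python sketch of hypothetical switch logic shows two subscribers who each, independently, set up call forwarding and together produce a routing loop that neither feature anticipated.

    # Each subscriber configured forwarding for perfectly good local reasons.
    forwarding = {"alice": "bob", "bob": "alice"}

    def route(callee: str, hops: int = 0) -> str:
        if hops > 10:
            return "routing loop detected -- call dropped"
        if callee in forwarding:
            return route(forwarding[callee], hops + 1)
        return "ringing " + callee

    print(route("alice"))   # neither feature is broken, yet the call never rings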
In short, it seems as if we are heading toward a situation in which there will be many untrustworthy network information systems. This problem will need to be fixed, but one might ask: how bad is it? What are its dimensions? The consequences of untrustworthiness include:
The next obvious question is, given all of the above, why are network information systems not built to be trustworthy? One answer is that it is not always clear how to do it. But the real reason is cost, both direct and indirect. The software and systems market today is dominated by COTS (commercial, off-the-shelf) products. There is a huge economy of scale in building systems from COTS components. Imagine someone in charge of integrating a large system that needs to be completed on time and on budget: it is faster and cheaper to use COTS components, and doing so also reduces project risk. Another incentive for COTS is interoperability. Upgrading from one version of COTS software to the next is usually straightforward and the easiest thing for users to do, even though better products may well be available. Finally, there is usually a large existing software base for COTS.
Those who provide COTS products (such as Microsoft and Intel) know that the market prefers features over trustworthiness. It may be that the market and individual consumers are not really conscious of this, but it seems to hold just the same. It is generally not clear how the trustworthiness of a system would even be measured; product features, in comparison, are tangible and observable. Additionally, many institutions have a disincentive to notify the public of break-ins, making it difficult for consumers to build a mental model of trustworthiness.
A second factor behind the lack of trustworthiness is that, in the COTS market, the rule of thumb is that the earliest entrant usually captures the largest market share. In other words, time to market dictates success, and implementing trustworthiness increases time to market: it requires extra functionality, fault tolerance, better debugging, ways to provide assurances, and so on, all of which add to development time. In short, COTS producers have an incentive to ship half-baked products, let consumers do the testing, and release patches later.