The Early Years of Academic Computing

Campus Networks

The starting point

In October 1977, while on my way to be interviewed at Dartmouth, I spent some time at Cornell and contacted the computer center to ask if somebody could discuss Cornell's computing with me. By a spectacular piece of fortune my guide was Douglas Van Houweling who later became one of the leaders of networked computing. As we walked around the campus on a beautiful fall day he described the need for computer networks in universities.

The concept of networking was not new. Universities already connected large numbers of terminals to central computers. A common architecture was to run groups of asynchronous lines to a terminal concentrator and a higher-speed synchronous line from there to the central computer. The lines might be hard-wired or dial-up telephone lines. IBM had an architecture known as SNA, which provided hierarchical ways to connect terminals and minicomputers to a central computer, and Digital had built its DECnet protocols into the VAX and DECSYSTEM-20 operating systems. The ARPAnet was well established among a privileged group of researchers, and Xerox had begun the experiments that eventually led to the dominance of Ethernet.

Doug's insight was to realize that these activities were not isolated. At Cornell, departmental computing centers were springing up independently from the main computing center. Very soon they would have to be connected together. Universities would need to have networks that covered the entire campus and connected every type of computing device. They had to use protocols that were not tied to any specific vendor.

Campus strategies

Campus Networking Strategies

The photograph shows the second of the three EDUCOM books about campus strategies. The book about networking was particularly well-timed. Ten universities, large and small, reported on their mixtures of terminal concentrators, SNA, DECnet, Wangnet, X.25, AppleTalk and many more. In the larger universities, the mixture of networks was proving a nightmare. Well-funded areas of the university might have several disjoint networks, while others had none. Providing gateways between networks sounded fine in theory but never seemed to work in practice. Network management was rudimentary and troubleshooting was expensive. Yet when the chapters were assembled it emerged that, with one exception, the universities had independently decided on the same solution. The strategy was to create a homogeneous campus network and to use the Internet protocols for the backbone.

I was responsible for two campus networks. The similarities and differences between them reflect the different priorities and resources of the two universities. At Dartmouth we built a campus network that emphasized moderate performance at a low cost. Because we started early, the problem of incompatible protocols was never as severe as at other universities, and the campus-wide adoption of Macintosh computers allowed us to use AppleTalk as a default protocol. Later, when I went to Carnegie Mellon, we built a much more powerful network. The university recognized the strategic value of the campus network and provided a very substantial budget. With generous support from IBM and with collaboration among many groups at the university, we built a TCP/IP network and connected a wide variety of computers to it. Both networks proved successful and their successors are still in operation today.

Topology and wiring

Networks are expensive. In many universities, although the computing directors and leading faculty members recognized the need for a network, it took years to persuade the financial administrators that universities needed to build them. Even at Carnegie Mellon, part of the justification for wiring the campus was the promise of a much improved telephone system.

A crucial decision was the topology of the campus wiring, within buildings and between them. With several options to choose from, a university could easily build an expensive network that went out of date fast. Initially the leading candidates were variants of a bus. Early Ethernet ran on a 50-ohm coaxial cable. This would be snaked through a building and individual devices clamped onto it. IBM had a token ring architecture. AppleTalk used a variant of a bus in which devices were daisy-chained together using special cables. An Ethernet bus or a token ring was a possibility for the backbone between buildings, but another candidate was to use 75-ohm video cable with broadband modems.

A bus appears appealing, but it has serious difficulties in practice, and the topology that emerged was a star. Within buildings, communication lines were run back to a building hub, and from there lines were led to a central hub. Ethernet and other protocols were modified so that they could run over these lines. The star-shaped configuration has many practical advantages. If part of the network becomes overloaded, extra capacity can be installed by upgrading the equipment in selected building hubs. Almost all troubleshooting can be done at the hubs. On a bus, when there is a problem, it may be necessary to visit every device.

Eventually star-shaped Ethernet over copper wires became the standard at almost every university, and the Internet protocols squeezed out the vendor networks, but for many years we had to support a variety of interfaces and protocols. A later section describes the detailed choices made at Dartmouth and Carnegie Mellon.

Running wires within buildings was a major expense and the choices were not easy. At Dartmouth, we made a typically pragmatic decision. We decided to use cheap copper wire within buildings, which was easy to install, expecting that it would have a limited life. Fortunately, when the university became interested in our efforts, the trustee who was asked to look at the network was an engineer and he agreed with our approach. However, I have heard that ten years later, when the time came to renew the wiring, this rationale had been forgotten by the new administration. A few years later, Carnegie Mellon made a different decision. The network that we installed there used IBM's cabling system. This was designed to provide a choice of high-quality networking options for many years into the future, for both data and voice communication. It was expensive to install but provided excellent service.