CS414 Review questions: these are a superset of what you have to know. These are just practice questions; they do not necessarily reflect the exam.

Most of this material is taken, shamelessly and without permission, from one of Margo Seltzer's webpages.

Short answer questions:

(B) The networking wizards at Siemens-Nixdorf have finally upgraded their network connection between the US and Germany. They claim that they've banished congestion difficulties stemming from the high speed links in the US being gatewayed to Germany over a 2400 bps modem link. They have finally gone out and purchased a T1 connection between their New Jersey office and their Duesseldorf office. They claim that since the 6 T1 lines which converge on their New Jersey office from other offices in the US and the T1 line between New Jersey and Germany all support the same data rate, they've forever banished their congestion woes. Do you agree? Why or why not?

Nope; they still have 6 T1's feeding 1 T1, so the aggregate incoming rate can be up to six times the outgoing capacity. Congestion remains possible.
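The bottleneck is easy to see with a little arithmetic (T1 = 1.544 Mbps is the standard rate; the 6:1 ratio is the point, not the absolute numbers):

```python
# Sanity check: six T1 feeds into one T1 is a 6:1 oversubscription.
T1_MBPS = 1.544
ingress = 6 * T1_MBPS   # worst-case offered load from the six US offices
egress = 1 * T1_MBPS    # capacity of the New Jersey -> Duesseldorf link
print(ingress > egress)  # True: the gateway can still be overloaded
```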

(C) The Clueless storage company is having problems with their loading docks. The trucks get unloaded faster than the cargo gets moved into storage, so the docks fill up and the unloading crews have to wait. Docks fill up in two situations: (a) every Monday 8a-11a, and (b) from Wednesday afternoon onwards (people have to work over the weekend to unload the docks). A consultant has suggested that they double the size of their loading dock. Will this solve their problem? Why or why not?

Situation (a): If the problem is just burstiness, a larger dock could help absorb the Monday-morning peak.

Situation (b): No. The arrival rate is larger than the service rate; it doesn't matter how big the buffer is, it will eventually fill.
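A toy simulation (the rates and sizes are made up) makes the point for situation (b): when arrivals outpace service, doubling the dock only delays the overflow.

```python
# If arrival_per_hr > service_per_hr, backlog grows without bound, so any
# finite dock eventually overflows regardless of its size.
def overflows(arrival_per_hr, service_per_hr, dock_size, hours):
    backlog = 0
    for _ in range(hours):
        backlog = max(backlog + arrival_per_hr - service_per_hr, 0)
        if backlog > dock_size:
            return True
    return False

print(overflows(10, 8, 20, 40))  # True: the dock overflows
print(overflows(10, 8, 40, 40))  # True: doubling it only delays the overflow
```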

(G) Our friends from the Webuildem-Ufixem have come up with a new page replacement algorithm. They claim it does better than LRU. Do you believe them? Argue convincingly why or why not?

No, since we can't predict the future. Informal proof:
If you need to replace a page and you pick anything other than the one to be touched furthest in the future, then you may unnecessarily cause an earlier page fault (on the evicted page). The optimal choice (OPT) therefore requires future knowledge, which no real algorithm has.
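A quick comparison of LRU against Belady's OPT (which evicts the page used furthest in the future) illustrates the point; the reference string and frame count here are made up for illustration.

```python
# LRU: evict the least-recently-used page on a fault.
def faults_lru(refs, nframes):
    frames, faults = [], 0
    for r in refs:
        if r in frames:
            frames.remove(r)
            frames.append(r)           # move to most-recently-used position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the LRU page
            frames.append(r)
    return faults

# OPT: evict the page whose next use is furthest in the future (or never).
def faults_opt(refs, nframes):
    frames, faults = set(), 0
    for i, r in enumerate(refs):
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                def next_use(p):
                    rest = refs[i + 1:]
                    return rest.index(p) if p in rest else float('inf')
                frames.remove(max(frames, key=next_use))
            frames.add(r)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(faults_lru(refs, 3), faults_opt(refs, 3))  # OPT never does worse
```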

(H) In a paging virtual memory scheme, it is possible that a single memory reference can require three memory accesses. Explain exactly what the locations of these three accesses are.

If there are 2 levels of page tables, this is possible: (1) read the first-level page table entry (PTE), (2) read the second-level PTE for the user page, (3) read the desired data.
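The three accesses can be sketched as a toy two-level table walk; the field widths here (10-bit second-level index, 12-bit page offset) are invented for illustration.

```python
# Each user reference costs two page-table reads plus the data read itself.
def translate(vaddr, l1_table, memory, page_bits=12, idx_bits=10):
    off = vaddr & ((1 << page_bits) - 1)
    l2_idx = (vaddr >> page_bits) & ((1 << idx_bits) - 1)
    l1_idx = vaddr >> (page_bits + idx_bits)
    l2_table = l1_table[l1_idx]                 # access 1: first-level PTE
    frame = l2_table[l2_idx]                    # access 2: second-level PTE
    return memory[(frame << page_bits) | off]   # access 3: the data itself

l1 = {0: {0: 5}}               # virtual page 0 maps to physical frame 5
mem = {(5 << 12) | 7: 'x'}     # one byte of "physical memory"
print(translate(7, l1, mem))   # x
```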

(I) During a system call, why do you need to copy certain arguments from a user process into the kernel?

They are in the user's VA space, which may not be mapped or contiguous in real memory; the kernel must copy (and validate) them before use.

(M) Explain how a UNIX file system would find the file named "../../foo".

Read the inode for the cwd. Look up ".." in it; this returns an inumber, so read that inode.

Read the directory corresponding to that inode. Look up ".." again.

Read that inode and its directory, look up "foo", and you finally have the inumber of foo.
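The lookup above can be sketched as a minimal `namei`-style walk; the directory tables and inumbers below are hypothetical (here the cwd is two levels below the root, and "foo" lives at the root).

```python
# Directories map component names to inode numbers; each ".." entry links
# a directory back to its parent (the root's ".." points to itself).
def namei(path, dirs, cwd_inum):
    inum = cwd_inum
    for comp in path.split('/'):
        inum = dirs[inum][comp]   # read the inode, read the directory, search
    return inum

dirs = {
    2: {'..': 2, 'a': 3, 'foo': 9},   # the root directory
    3: {'..': 2, 'b': 4},
    4: {'..': 3},                     # cwd = /a/b
}
print(namei('../../foo', dirs, cwd_inum=4))  # 9
```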

(F) The Webuildem-Ufixem computer company calls you in as a consultant to settle a dispute. Most of their large customers are complaining that their programs that dynamically allocate and free memory are running out of memory. The engineers are convinced that by changing from a First Fit allocation policy to a Best Fit allocation policy, the problems will go away. Upper management is not sure. You have been hired to decide if the engineers are correct and if so (or if not) to explain to upper management why.

The engineers are wrong; neither policy is uniformly better. According to Knuth, if you're on the verge of running out of space, you'll run out under either policy.

Spaces: 15, 10

Request stream A: 8 2 11 (first fit fails; best fit doesn't)

Request stream B: 9 10 6 (best fit fails; first fit doesn't)
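The two streams can be checked with a small allocator sketch (no coalescing; each request carves space out of a single free block):

```python
# First fit takes the first block big enough; best fit takes the smallest
# block big enough. Returns True if every request can be satisfied.
def allocate(spaces, requests, best_fit):
    free = list(spaces)
    for req in requests:
        fits = [s for s in free if s >= req]
        if not fits:
            return False                   # out of memory
        chosen = min(fits) if best_fit else fits[0]
        free[free.index(chosen)] = chosen - req
    return True

print(allocate([15, 10], [8, 2, 11], best_fit=False))  # False: first fit fails
print(allocate([15, 10], [8, 2, 11], best_fit=True))   # True:  best fit succeeds
print(allocate([15, 10], [9, 10, 6], best_fit=True))   # False: best fit fails
print(allocate([15, 10], [9, 10, 6], best_fit=False))  # True:  first fit succeeds
```

Note the subtlety: `fits[0]` works as first fit here because the list comprehension preserves the order of the free list.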

(G) Explain, in terms of hardware and MMUs, what a segmentation fault is. How does the OS deal with it?

You try to access a segment that is unmapped in your segment table; the MMU detects this and raises a fault to the OS. The OS then goes to disk and gets the requested segment. Remember to do all that is necessary to find free space in memory first!

(I) In the UNIX file system, how many I/Os would be required to read the first byte of the file /usr/bin/ls assuming that none of the relevant files were in memory before you began? (You may assume that directories are a single block long.)

/ inode, / contents, usr inode, usr contents, bin inode, bin contents, ls inode, ls first data block.

8

(J) From Tanenbaum, page 238. A personal computer salesman visiting a university in South-West Amsterdam remarked during his sales pitch that his company had devoted substantial effort to making their version of UNIX very fast. As an example, he noted that their disk driver used the elevator algorithm and also queued multiple requests within a cylinder in sector order. A student, Harry Hacker, was impressed and bought one. He took it home and wrote a program to randomly read 10,000 blocks spread across the disk. To his amazement, the performance that he measured was identical to what would be expected from first-come first-served. Was the salesman necessarily lying? Explain why or why not.

No. The reads are synchronous. That is, there is no prefetching, and a single thread was requesting disk blocks. In other words, there was only a single request in the queue at any time, so the elevator algorithm had nothing to reorder and degenerated to first-come first-served.
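A quick sketch (hypothetical block numbers) shows why queue depth is what matters: sorting a one-element queue is a no-op.

```python
import random

random.seed(0)
requests = [random.randrange(10_000) for _ in range(20)]  # random block numbers

# The driver can only sort the requests it can actually see in its queue.
def service_order(queue_depth):
    pending, order = list(requests), []
    while pending:
        order += sorted(pending[:queue_depth])  # elevator sorts the visible batch
        pending = pending[queue_depth:]
    return order

print(service_order(1) == requests)           # True: identical to FCFS
print(service_order(20) == sorted(requests))  # True: a deep queue lets it reorder
```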

(L) For each item below, indicate at which level of the ISO networking model it is implemented. Items: IP, TCP, sendmail, ethernet, routing.

----------------------------
| IP        | network      |
----------------------------
| TCP       | transport    |
----------------------------
| sendmail  | application  |
----------------------------
| ethernet  | data link    |
----------------------------
| Routing   | network      |
----------------------------

True/False Questions

For each problem, indicate whether the statement is true or false, then explain why.

(D) If processes are scheduled according to priorities, you can never have deadlock.

False; a low-priority process runs first and acquires a resource. A high-priority process then runs, acquires another resource, and requests the one held by the low-priority process: deadlock.

(E) If an instruction takes more than one page fault, you should abort the program.

False; you can take a fault on both the instruction fetch and the data access, and on more complex architectures a single instruction might make multiple data accesses.

(H) Random access is just as fast when you have linked files as it is when you have an index structure.

No; with linked files, random access to block n requires chasing the chain through all the preceding blocks, while an index structure lets you locate any block directly.

(J) DMA makes I/O devices go faster.

Nope; DMA just allows data to move without the CPU copying each word. The device itself is no faster; the CPU is simply freed to do other work.

(D) Stack-based memory allocation (allocate always memory from the top of the stack) is a fundamentally flawed allocation mechanism and should never be used.

F: Great for procedure locals and return addresses.

(E) Best Fit provides better memory usage than First Fit.

F: Workload dependent. Give example, after defining the metric for "better".

(H) Bitmaps are always the best structure for managing free space on a disk.

F: Construct an example where bitmaps are evil (for instance, finding a large contiguous free extent on a nearly full disk means scanning almost the entire bitmap).

(I) Transfer time is the main bottleneck in current disk systems.

F: Seek and rotation dominate.

(J) Typically, file systems do not allocate logically contiguous blocks of a file to physically contiguous sectors on disk.

T: Interleaving: consecutive logical blocks are staggered because the system needs time to handle the interrupt between I/Os.

(K) You can represent a file of any size on a UNIX file system.

F: Even with the multi-level indices there is a limit. What is the limit and why?

(M) The CPU is idle while the disk is transferring data.

F: In most systems, DMA transfers data then interrupts the CPU.

(O) A working set contains all the pages of all the processes for a specific user.

F: Working set is an approximation of the set of pages a single process is using.

VM Design Problem

Let a processor have the MMU architecture shown in Figure 1, below. The basic idea is that you have two tables both implemented in hardware. First, a 3-bit process id (PID) is concatenated with the 11-bit segment number. These 14 bits are used to index into a table with 16K entries. Each entry contains a 6-bit id for a Page Map Entry Group (PMEG). The 6-bit id is concatenated with the 5 bits from the original address, and these 11 bits index into a table of 2K entries that contain the protection information and the physical page number.

Figure 1.

(A) How large can the virtual address space for a single process be? (4 points)

2^29 bytes = 512M (the 11-bit segment number and 5-bit page number leave a 13-bit page offset).

(B) List three advantages of this architecture (6 points)

i Multiple processes in tables at once

ii No TLB is needed: the mapping tables themselves are in hardware

iii Fast: no adders, just 2 memory lookups

(C) Would it be beneficial to add a translation buffer to the MMU? Why or why not? (4 points)

No: the translation tables are already implemented in hardware. A TLB just speeds up translation, and translation here already occurs at hardware speed.

(D) Is it still necessary to copy arguments from a user process into/out of the kernel's address space? Why/Why not? (3 points)

Yes: we are still crossing between distinct contexts and address spaces.

(E) Explain how you would modify the OS if there was a change in hardware from a simple MMU that only supports one process to this MMU. Include the exact data structures/classes that would need to change, the routines or method functions that would need to be rewritten, and for each function, how it would change (you needn't provide code, just a detailed design). (12 points)

PageTable changes: 2 arrays. The first is 16K by 6 bits, the second is 2K by however many bits you need for protection, physical address, and status bits.

Do not forget to create and initialize or destroy the two arrays.

Do translation as described in the problem. If the address cannot be mapped by the hardware, then you have to take a page fault and ask the kernel to load the hardware tables.

Add new interfaces to the MMU, such as clearPID (invalidate all entries for the current PID).
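The two-table lookup described above can be sketched as follows. The table widths come from the problem (3-bit PID + 11-bit segment indexes 16K segment-map entries; 6-bit PMEG id + 5-bit page indexes 2K PTEs); the 13-bit page offset is an assumption consistent with the 2^29-byte address space in part (A).

```python
# Simulated hardware translation: two indexed table reads, no adders.
def mmu_translate(pid, vaddr, segmap, pmegs, page_bits=13):
    off = vaddr & ((1 << page_bits) - 1)
    page = (vaddr >> page_bits) & 0x1F        # 5-bit page within segment
    seg = vaddr >> (page_bits + 5)            # 11-bit segment number
    pmeg = segmap.get((pid << 11) | seg)      # table 1: 16K x 6-bit PMEG ids
    if pmeg is None:
        raise MemoryError('fault: kernel must load a PMEG for this segment')
    frame = pmegs[(pmeg << 5) | page]         # table 2: 2K PTEs
    return (frame << page_bits) | off         # physical address

segmap = {(1 << 11) | 2: 4}                   # PID 1, segment 2 -> PMEG 4
pmegs = {(4 << 5) | 3: 7}                     # page 3 of PMEG 4 -> frame 7
vaddr = (2 << 18) | (3 << 13) | 5             # segment 2, page 3, offset 5
print(mmu_translate(1, vaddr, segmap, pmegs) == ((7 << 13) | 5))  # True
```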

(F) In this question you are to produce a design document describing how you would implement processes and virtual memory (demand paging) for this architecture. You need not discuss page replacement policies. To help you organize your design document, we have broken it down into the following categories:

ii Describe precisely how to create a process. Include how you create its virtual address space and any other resources that are created or reserved during process creation.

iii Describe exactly what you have to do to context switch in a new process.

iv Describe any modifications to existing system calls.

v Describe any algorithms necessary to support the new MMU.

Design Problem -- Making a Multi-user Operating System

Consider an OS that only supports a single user, although it may support multiple user processes (DOS only supports one user and one process). In this problem, you will design multi-user support for such an OS. You should begin by thinking about a security model. How will you protect users from one another? What parts of the system must change to enforce this security policy? You need not duplicate UNIX functionality, but you must design some reasonable policy that would let multiple users peacefully coexist on your system.

This question should require thought but you should not stay up all night working on it. There is not that much here! Get a good night's sleep instead. Your answers to this problem should be in the context of a very simple system -- one that implements the minimal functionality required.

(A) Describe what the boot procedure should do in your system. Where is it located, how does one get to it, how does it work?

The bootstrap code lives in the master boot sector (a fixed location on disk). It is reached via a small program in ROM that runs at power-on: the ROM code reads the boot sector, loads it into memory, and starts executing it. Typically that code then reads other parts of the OS, bringing more and more of the OS into memory.

(C) For each algorithm listed below, indicate if it must change in your multiuser system and if so, how.

1 disk allocation

2 scheduling

3 page replacement

4 file processing

5 process creation