CS414 Fall 2000 Homework 4 Solutions

Q1)

            See pages 307-313 in the textbook.

 

Q2)

            Page replacement algorithms such as LRU depend on hardware to keep track of page access activity such as the number of reads since the last page fault, the time of last access, the frequency of access, etc.  This bookkeeping cannot be maintained in software, because it would require invoking a software routine on every memory access.  However, certain algorithms such as FIFO can be implemented without much hardware support.

            The Memory Management Unit (MMU) is a hardware unit that performs three important functions:

a)      Virtual address to physical address translation.

b)      Detecting page faults and security violations.

c)      Above-mentioned bookkeeping to facilitate page replacement algorithms.

           Note that the actual page replacement or swapping is done in software by the operating system.          
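The point about FIFO can be made concrete with a small simulation.  Below is a minimal Python sketch (the function name and reference string are illustrative, not from the textbook); note that the FIFO queue is updated only on a page fault, which already traps into the OS, so no per-access hardware bookkeeping is required:

```python
from collections import deque

def fifo_replace(frames, reference_string):
    """Simulate FIFO page replacement and count page faults.

    The queue is touched only when a fault occurs, so FIFO needs
    no hardware support beyond fault detection itself."""
    queue = deque()    # resident pages, oldest first
    resident = set()   # fast membership test
    faults = 0
    for page in reference_string:
        if page in resident:
            continue                     # hit: no software action needed
        faults += 1                      # fault: OS is already involved
        if len(queue) == frames:
            evicted = queue.popleft()    # evict the oldest page
            resident.remove(evicted)
        queue.append(page)
        resident.add(page)
    return faults
```

For example, the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames produces 9 faults under this simulation.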

           

 

Q3)

There are two components to this question:

(a)    How long does it take for each interrupt to be handled, and

(b)   How many interrupts are handled by the CPU.

 

(a)

            In general, handling an interrupt is expensive.  The primary reasons include the following:

a)      The interrupt has to be handled by the OS. Hence the current process must be pre-empted and the interrupt handler routine invoked.  This includes an expensive user to kernel transition.

b)      After the interrupt is handled, another process must be scheduled; this involves another expensive kernel to user transition.

c)      In pipelined architectures, the instructions currently in flight have to be safely halted and the pipeline has to be flushed, thus decreasing the effectiveness of the architecture.

Note that these transitions are not full context switches and therefore are not as expensive as a process context switch.

 

At the same time, polling involves a busy wait until a favorable response can be read from the device.  This also wastes CPU time.  Thus, when deciding between polling and interrupt-driven I/O, one has to take into account the response time of the device.  If the device is synchronous, the response time can be predicted.  If the predicted response time is less than the time for the two mode transitions (user to kernel, kernel to user), then polling is better than interrupt-driven I/O.  This is the usual case in network controllers, where each send completes within a predictable time limit (not always, but often).  On the other hand, printers, which take a long time to complete each task, are better served by interrupts.
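The break-even rule above amounts to simple arithmetic.  A one-line Python sketch (function name and microsecond costs are illustrative assumptions, not measured values) captures the decision:

```python
def prefer_polling(expected_response_us, transition_cost_us):
    """Prefer polling when the device's expected response time is
    below the cost of the two mode transitions an interrupt would
    incur (user->kernel and kernel->user).  Costs are illustrative."""
    return expected_response_us < 2 * transition_cost_us
```

For instance, a fast network send expected to complete in 5 microseconds against 10-microsecond transitions favors polling, while a slow printer operation does not.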

 

(b)

The load on the processor is an important issue to address: if there are too many interrupts (that is, the request rate from a particular device is high), the CPU will be interrupted often and will not be able to do any “useful” work.  Therefore, if the load is high, polling is more advantageous from this perspective.

 

The problem with polling is that it disregards the device’s requests and services them only when the CPU chooses to poll.  The device may have important requests, which the CPU will ignore.  Thus, regardless of the load, if the device requests are important and the device does not buffer the requests for retransmission, interrupts are more advantageous from this perspective.

 

 

Q4)

            DMA (Direct Memory Access) is a mechanism that transfers data between memory and I/O devices such as the disk, the network controller, etc. without the involvement of the processor.  The most important problem affecting the performance of a DMA device is memory access by the processor during the DMA operation.  If the system has a single memory bus shared by all devices accessing the memory, then we face a bus arbitration problem between the DMA controller and the processor.  Even if we have separate buses for independent memory access (also called dual-ported memory), the processor cannot simultaneously access the memory region being accessed by the DMA controller.  (This may result in a lack of consistency.)  Further problems are imposed by the virtual memory architecture: the pages currently accessed by the DMA controller have to be pinned in memory (they cannot be paged out).  This may further decrease the performance of the system.

 

 

Q5)

            Let us consider the display device first.  The display device need not interact with the processor; only the processor needs to interact with the display device.  In the typical mode of operation there is a buffer to which the processor writes the characters to be displayed; the display device periodically reads this buffer and displays its contents.  The processor needs to maintain the current position in this buffer where it will write the next character.

           

            The second device is the keyboard.  Typically the keyboard is a low-speed, asynchronous, serial, character input device.  This device raises an interrupt whenever it detects a key press.  A corresponding interrupt handler reads the character detected by the keyboard, writes it to the display device’s buffer (the echo), and records it in the OS input buffer.  Since the keyboard is a low-speed device, its interrupt need not be masked.  (Interrupt masking means that if a second interrupt arrives while the first interrupt is being processed, the second interrupt will not be delivered.  Do not confuse this with the non-maskable interrupt.)

 

The keyboard will have a lower priority than the clock but a higher priority than the disk, CD-ROM, etc.  The clock obviously has a higher priority.  Remember that a warm boot (Control+Alt+Del) should be effective even during disk accesses.
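The keyboard/display interaction described above can be sketched as a toy model.  The Python class below is purely illustrative (the class and method names are assumptions, not any real OS interface); it shows the interrupt handler doing both the echo to the display buffer and the bookkeeping in the OS input buffer:

```python
class Console:
    """Toy model of the display and keyboard devices described above."""

    def __init__(self):
        self.display_buffer = []  # characters the display device will show
        self.input_buffer = []    # OS-level input queue for reading processes

    def keyboard_isr(self, ch):
        """Interrupt handler invoked on each key press."""
        self.display_buffer.append(ch)  # echo the key to the display
        self.input_buffer.append(ch)    # record it for later read() calls
```

Each key press thus appears in both buffers: once so the user sees the echo, and once so a process can later consume the input.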

 

Q6a)

            (Since the root directory is unique to the system, it can be assumed that the root directory information is at a pre-specified block on the disk.)

            Operation 1: Retrieves the directory information for root directory.

            Operation 2: Retrieves the directory information for mydir directory.

            Since the files are allocated contiguously, all we need to keep track of is the disk address of the first block of the file; this is stored in the directory information.  We do not need to maintain a large inode structure for contiguously allocated files.  Thus, after these two reads, we have the disk address of the first block of myfile.
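The two-read lookup can be sketched as follows.  In this minimal Python model (the disk layout and block numbers are invented for illustration), each directory block maps names to first-block addresses, so resolving a path costs exactly one disk read per directory level:

```python
def open_path(disk, root_block, path):
    """Resolve a path on a contiguously allocated file system.

    `disk` maps a block number to a directory's contents (name ->
    first block of that entry).  Returns the file's first block and
    the number of disk reads performed."""
    reads = 0
    block = root_block
    for component in path:
        directory = disk[block]  # one disk read per directory level
        reads += 1
        block = directory[component]
    return block, reads
```

With an illustrative layout where root lives in block 0 and mydir in block 7, opening root/mydir/myfile costs exactly two reads: one for root and one for mydir.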

 

 

Q6b)

            Since the file is allocated contiguously with 1000 characters per block, characters 5300 to 5999 are in one block, 6000 to 6999 in another, and 7000 to 7200 in a third.  Since we have the disk address of the first block of myfile, we can add 5 to it to fetch the block containing characters 5300 to 5999, 6 to fetch the block containing characters 6000 to 6999, and 7 to fetch the block containing characters 7000 to 7200.  Thus 3 blocks need to be fetched from the disk.
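The block arithmetic above generalizes to any character range.  A short Python sketch (the function name is illustrative) computes the block numbers directly from the first-block address:

```python
def blocks_to_fetch(first_block, start_char, end_char, block_size=1000):
    """Return the disk blocks holding characters [start_char, end_char]
    of a contiguously allocated file starting at first_block."""
    first = first_block + start_char // block_size  # block of first char
    last = first_block + end_char // block_size     # block of last char
    return list(range(first, last + 1))
```

For characters 5300 through 7200 with a (hypothetical) first block of 10, this yields blocks 15, 16, and 17: three fetches, matching the answer above.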

 

Q6c)

            In order to open the file root/mydir/myotherfile, we need not fetch any blocks from the disk.  The disk address of the first block of myotherfile is part of the directory information of mydir, which is already available in memory.

 

 

Q6d)

            Contiguous allocation is best suited for these operations, for the following reasons.

a)      The operations considered are only open and read, not write.

b)      The mode of reading is sequential, i.e., a range of contiguous characters in the file.

c)      The file consists only of characters, not multi-field records.

Indexed allocation has the potential problem that each access would be directed through the index, effectively resulting in two accesses.  Even if the index is cached in memory, we still spend time loading the index into memory.

 

Q7)

            The read and write file pointers will be stored in the per-process file table, since the positions of these pointers are process-specific and may differ across processes accessing the same file.  An important overhead of maintaining both read and write pointers is that an insert or delete operation (performed using the write pointer) can affect the position of the read pointer.  Thus the read pointer has to be updated on some inserts and deletes.

            Supporting multiple read and write pointers is an extension of the above scheme.  Inserts and deletes performed through any one of the write pointers can affect the positions of all the other read and write pointers, which adds further overhead.
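The pointer-update overhead can be made concrete with a small sketch.  In this illustrative Python function (names are assumptions), an insert of n characters at the write position shifts every other pointer at or beyond that position to the right; a delete would shift them left symmetrically:

```python
def adjust_pointers_after_insert(pointers, write_pos, n_inserted):
    """After inserting n_inserted characters at write_pos, shift every
    pointer at or beyond that position right by n_inserted so that
    each pointer still refers to the same character as before."""
    return [p + n_inserted if p >= write_pos else p for p in pointers]
```

For example, inserting 5 characters at position 100 leaves pointers at 10 and 50 untouched but moves a pointer at 200 to 205.  With many pointers, this adjustment must be performed for every insert or delete, which is exactly the overhead described above.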