CS414 Fall 2000 Homework 3 Solutions

 

Q1a)

      Physical Address: 21 bits = 9 bits page frame # + 12 bits page offset.

 

      Virtual Address: 22 bits = 3 bits segment # + 7 bits page # + 12 bits offset. (#pages = 2^19/2^12 = 2^7 => 7 bits page #.)

 

      Page Table Entry:  9 bits page frame #. (Note that the page number need not be stored in the page table.)

 

      Segment Table Entry:  19 bits segment length + 9 bits page table address. (Since each segment could be up to 2^19 bytes, 19 bits are needed to specify the length.  The other field of the segment table entry is the page frame number (9 bits) in which the page table is found.  It is assumed that in a paged system the page tables themselves are stored as pages.)

 

      Base Register: 9 bits segment table address. (The page frame # in which the segment table can be found in memory).

 

      Length of Page Table: 2^7 * size of page table entry, per segment.

 

      Length of Segment Table: 8 * size of segment table entry. (There are 2^3 = 8 segments.)
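
      The field widths above can be made concrete with a small sketch in C.  This is only an illustration of the bit layout derived in Q1a (3 bits segment #, 7 bits page #, 12 bits offset); the constant and variable names are made up for this example.

      #include <stdio.h>

      /* Field widths from Q1a. */
      #define OFFSET_BITS 12
      #define PAGE_BITS    7
      #define SEG_BITS     3

      int main(void) {
          unsigned vaddr = 0x2A5F3;  /* an arbitrary 22-bit virtual address */

          unsigned offset = vaddr & ((1u << OFFSET_BITS) - 1);
          unsigned page   = (vaddr >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
          unsigned seg    = (vaddr >> (OFFSET_BITS + PAGE_BITS)) & ((1u << SEG_BITS) - 1);

          printf("segment %u, page %u, offset %u\n", seg, page, offset);
          return 0;
      }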

 

 

Q1b) (Assume same parameters as Q1a)

      Physical Address: 21 bits = 9 bits page frame # + 12 bits page offset.

 

      Virtual Address: 22 bits = 10 bits page # + 12 bits page offset. (Assuming 22 bits as in Q1a.  Note that the size of the virtual address need not be the same as that of the physical address.)

 

      Page Table Entry: 9 bits page frame #.

 

      Base Register: 9 bits page table address.  (Assume that the page table is stored at a page frame boundary in the memory.)

 

      Length of Page Table: 2^10 * Size of page table entry.
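
      As a companion to the breakdown above, here is a minimal sketch in C of the full one-level translation, assuming a 10-bit page #, a 12-bit offset, and a hypothetical in-memory page table indexed by page number (the names are invented for this example).

      #include <stdint.h>

      #define OFFSET_BITS 12
      #define PAGE_BITS   10   /* 2^10 pages per address space (Q1b) */

      /* Hypothetical page table: each entry holds a 9-bit page frame #. */
      static uint16_t page_table[1 << PAGE_BITS];

      /* Translate a 22-bit virtual address to a 21-bit physical address. */
      uint32_t translate(uint32_t vaddr) {
          uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);
          uint32_t page   = vaddr >> OFFSET_BITS;
          uint32_t frame  = page_table[page];   /* 9-bit frame number */
          return (frame << OFFSET_BITS) | offset;
      }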

 

Q1c)  (Assume 30 bit virtual address, 4 bytes per page table entry.)

 

      Physical Address: 21 bits = 9 bits page frame # + 12 bits page offset.

 

      Inner Page Table Entry: 9 bits page frame #.

 

      Outer Page Table Entry: 9 bits inner page table address. (Each entry in the outer page table gives the page frame in which that inner page table is stored in memory.)

 

      Inner Page Table Length: 2^12 bytes. (In two-level paging, the page table itself is further paged.  Hence the size of each inner page table is the page size, 2^12 bytes.  With 4 bytes per entry, this means there are 2^10 entries in each inner page table.)

     

      Outer Page Table Length: 2^8 * size of page table entry. (Total number of pages = 2^18, number of entries per inner page table = 2^10, hence number of inner page tables = 2^18/2^10 = 2^8.  Each inner page table needs an entry in the outer page table.)

 

      Base Register: 9 bits outer page table address.  (Again we assume that the outer page table will start at page frame boundary in the memory.)     

 

      Virtual Address: 30 bits = 8 bits page table # + 10 bits page # + 12 bits page offset. (2^8 entries in outer page table, 2^10 entries in inner page table.)
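
      The two-level lookup can be sketched in C as follows.  This is only an illustration under the assumptions of Q1c (4-byte entries, 2^8 outer entries, 2^10 inner entries, 2^9 page frames); the simulated physical memory and table names are invented for the example.

      #include <stdint.h>

      #define OFFSET_BITS 12
      #define INNER_BITS  10   /* 2^10 entries per inner page table */
      #define OUTER_BITS   8   /* 2^8 inner page tables             */
      #define PAGE_SIZE   (1u << OFFSET_BITS)

      /* Simulated physical memory of 2^9 frames, plus the outer page table,
         whose entries give the frame holding each inner page table. */
      static uint32_t phys_mem[(1u << 9) * PAGE_SIZE / sizeof(uint32_t)];
      static uint16_t outer_table[1 << OUTER_BITS];

      uint32_t translate2(uint32_t vaddr) {
          uint32_t offset      = vaddr & (PAGE_SIZE - 1);
          uint32_t inner_index = (vaddr >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
          uint32_t outer_index = vaddr >> (OFFSET_BITS + INNER_BITS);

          /* The outer entry names the frame holding the inner page table;
             each 4-byte inner entry holds a 9-bit page frame number. */
          uint32_t inner_frame = outer_table[outer_index];
          uint32_t frame = phys_mem[inner_frame * (PAGE_SIZE / sizeof(uint32_t))
                                    + inner_index];

          return (frame << OFFSET_BITS) | offset;
      }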

 

 

 

 

Q2a)

      Note that after compaction, all the allocated memory would be contiguous.  In the example considered, there are 8000 (4000+1000+2000+1000) bytes of available memory in total.  Thus each of the following is a possible free list after compaction: (0, 8000), when compaction is towards the end of memory; (12000, 8000), when compaction is towards the beginning; and (4000, 8000) (the single hole running from address 4000 to 12000), when compaction is towards both the beginning and the end.
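
      The arithmetic can be checked with a short C sketch, assuming the free list implied by the numbers used in this question and in Q2c; the 20000-byte total memory size is likewise an assumption for illustration only.

      #include <stdio.h>

      /* Free list from the problem, as (start, size) pairs. */
      struct hole { unsigned start, size; };
      static struct hole free_list[] = {
          {10000, 4000}, {4000, 1000}, {1000, 2000}, {15000, 1000}
      };

      int main(void) {
          unsigned total = 0;
          for (unsigned i = 0; i < sizeof free_list / sizeof free_list[0]; i++)
              total += free_list[i].size;
          /* Compacting towards the beginning leaves a single hole of
             'total' bytes at the top of the assumed 20000-byte memory. */
          printf("single hole after compaction: (%u, %u)\n", 20000 - total, total);
          return 0;
      }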

 

 

Q2b)

      Segments are variable-sized while pages are fixed-size.  Segments are logical units usually visible to the programmer, while pages are transparent to the programmer.  Segments form a two-dimensional address space, while pages form a contiguous linear address space.  Segmentation suffers from external fragmentation, while paging suffers from internal fragmentation.

 

 

Q2c)

      First Fit: The first request of 1600 bytes would be allocated from the space at (10000, 4000), changing it to (11600, 2400).  The second request of 2000 bytes would be allocated from (11600, 2400), changing it to (13600, 400).  The third request of 1000 bytes would be allocated from (4000, 1000), deleting this entry from the free list.  The fourth request of 2000 bytes would be allocated from (1000, 2000), deleting this entry as well.

 

      Best Fit: The first request of 1600 bytes would be allocated from the space at (1000, 2000), changing it to (2600, 400).  The second request of 2000 bytes would be allocated from (10000, 4000), changing it to (12000, 2000).  The third request of 1000 bytes would be allocated from (4000, 1000), deleting this entry from the free list.  The fourth request of 2000 bytes would be allocated from (12000, 2000), deleting this entry as well.

 

      Worst Fit: The first request of 1600 bytes would be allocated from the space at (10000, 4000), changing it to (11600, 2400).  The second request of 2000 bytes would be allocated from (11600, 2400), changing it to (13600, 400).  The third request of 1000 bytes would be allocated from (1000, 2000), changing it to (2000, 1000).  The fourth request of 2000 bytes cannot be allocated without compaction because of external fragmentation.  The free space list now reads (13600, 400), (4000, 1000), (2000, 1000), and (15000, 1000).
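
      The three traces above can be reproduced with a small C sketch.  The code assumes the free list is kept in the order (10000, 4000), (4000, 1000), (1000, 2000), (15000, 1000), which is what the first-fit trace implies; a hole shrunk to size 0 simply stands in for a deleted entry.

      #include <stdio.h>

      struct hole { unsigned start, size; };

      enum policy { FIRST_FIT, BEST_FIT, WORST_FIT };

      /* Return the index of the hole chosen for a request, or -1 if none fits. */
      int choose(struct hole *h, int n, unsigned request, enum policy p) {
          int chosen = -1;
          for (int i = 0; i < n; i++) {
              if (h[i].size < request)
                  continue;
              if (p == FIRST_FIT)
                  return i;   /* first hole that fits */
              if (chosen < 0 ||
                  (p == BEST_FIT  && h[i].size < h[chosen].size) ||
                  (p == WORST_FIT && h[i].size > h[chosen].size))
                  chosen = i;
          }
          return chosen;
      }

      int main(void) {
          struct hole h[] = { {10000, 4000}, {4000, 1000}, {1000, 2000}, {15000, 1000} };
          unsigned requests[] = { 1600, 2000, 1000, 2000 };

          for (int r = 0; r < 4; r++) {
              int i = choose(h, 4, requests[r], FIRST_FIT);   /* try BEST_FIT, WORST_FIT too */
              if (i < 0) { printf("request %u: no fit\n", requests[r]); continue; }
              printf("request %u -> (%u, %u)\n", requests[r], h[i].start, h[i].size);
              h[i].start += requests[r];   /* carve the request from the front */
              h[i].size  -= requests[r];
          }
          return 0;
      }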

       

 

Q2d)

      Since the page size is 1000 bytes, the first three digits would represent the offset while the remaining higher order digits would give the page number. 

 

      Virtual Address 2200: The page number is 2 and the offset is 200.  The third entry in the page table gives the page frame number to be 2.  Thus the physical address is also 2200. (Note that the first entry corresponds to page number 0.)

 

      Virtual Address 3500: The page number is 3 and the offset is 500.  The fourth entry in the page table gives the page frame number to be 3.  Thus the physical address is also 3500.
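
      Here is a short C sketch of this decimal translation.  Only the page table entries for pages 2 and 3 (frames 2 and 3) come from the solution above; the frames shown for pages 0 and 1 are placeholders for illustration.

      #include <stdio.h>

      #define PAGE_SIZE 1000   /* decimal page size from Q2d */

      /* Hypothetical page table: entries 2 and 3 match the solution above. */
      static unsigned page_table[] = { 1, 0, 2, 3 };

      unsigned translate(unsigned vaddr) {
          unsigned page   = vaddr / PAGE_SIZE;
          unsigned offset = vaddr % PAGE_SIZE;
          return page_table[page] * PAGE_SIZE + offset;
      }

      int main(void) {
          printf("2200 -> %u\n", translate(2200));   /* prints 2200 */
          printf("3500 -> %u\n", translate(3500));   /* prints 3500 */
          return 0;
      }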

 

 

Q2e)

      See pages 317-321 of textbook.

 

Q3a)

      Segment Table Entry: 19 bits segment length + 10 bits page table address. (The page table address will be the same as the segment base address.  Only the page frame number need be specified because it is a paged system, and the segment can only start at page frame boundaries in the physical memory.)

 

 

Q3b)

      Page Table Entry: 10 bits page frame number.

 

 

Q3c)

      TLB Entry: 11 bit key + 10 bit value, where

            Key: 4 bit segment # + 7 bit page #.

            Value: 10 bit page frame number.

      (The segment number needs to be a part of the key because the TLB is used to cache all the segments’ page table entries and we need to distinguish between them.)
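
      A minimal sketch in C of such a TLB entry and a fully associative lookup, assuming a 64-entry TLB (the TLB size is not given in the problem); the struct packs the 11-bit key and 10-bit value into ordinary integer fields.

      #include <stdint.h>
      #include <stdbool.h>

      /* One TLB entry: key = (segment # << 7) | page #, value = frame #. */
      struct tlb_entry {
          uint16_t key;     /* 11 significant bits */
          uint16_t frame;   /* 10 significant bits */
          bool     valid;
      };

      #define TLB_SIZE 64   /* assumed; not specified in the problem */
      static struct tlb_entry tlb[TLB_SIZE];

      /* Fully associative lookup: returns true and sets *frame on a hit. */
      bool tlb_lookup(unsigned seg, unsigned page, uint16_t *frame) {
          uint16_t key = (uint16_t)((seg << 7) | page);
          for (int i = 0; i < TLB_SIZE; i++) {
              if (tlb[i].valid && tlb[i].key == key) {
                  *frame = tlb[i].frame;
                  return true;
              }
          }
          return false;
      }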

 

 

 

Q4a)

      The page replacement policy defined in the question is the FIFO page replacement policy.  Since the matrices are stored in row-major order, each row of a matrix is stored in 4 pages, so 4096*4 pages are needed in total to store each matrix.  In each iteration of the j loop, the values of i and j are fixed.  Thus matrix C is accessed at only one location and involves only 1 page fault.  Each access to matrix B involves a different row and hence a different page, so there would be 4096 page faults on B.  The accesses to matrix A are all from the same row, so only 4 pages of A are accessed.  However, because of the FIFO page replacement policy, A's current page becomes the oldest roughly once every 20 page faults and is then replaced.  Thus there would be approximately 200 page faults on A's pages (about 4096/20).
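
      For reference, the loop nest presumably intended by the question looks like the following C sketch; the 4-byte element size and 4096-byte page size are assumptions consistent with each row occupying 4 pages.

      #define N 4096

      int A[N][N], B[N][N], C[N][N];   /* row-major, 4 pages per row */

      void multiply(void) {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)       /* one iteration of j: i, j fixed */
                  for (int k = 0; k < N; k++)
                      /* A[i][k]: walks along row i   -> 4 pages in total
                         B[k][j]: walks down column j -> a new row, hence a new
                                  page, on every access: 4096 pages per (i, j)
                         C[i][j]: a single element    -> 1 page               */
                      C[i][j] += A[i][k] * B[k][j];
      }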

 

 

Q4b)

      Using column-major storage for the B matrix (but not for A) would improve the performance of the system considerably.  Now altogether we need to read only 4 pages of B, and hence there would be at most 9 page faults (4 for A + 4 for B + 1 for C) in each iteration of j.

 

 

Q4c)

      If only we could follow different page replacement policies for A and C than for B, we could save a lot of page faults.  We need to keep only one resident page each for A and for C, and only A's page is repeatedly replaced, leading to just 5 page faults in all (4 for A and 1 for C), while the remaining frames can be used for B's pages.  Further, by using a LIFO page replacement policy for B's frames, in which we select the youngest page for replacement, we would still incur 4096 page faults on B in one iteration of j but would save at least 18 page faults in the next.

 

      Under the condition that the pages of matrices A, B, and C cannot be distinguished, an LRU page replacement strategy would still give us some benefit.  While we would not be able to cut down on the number of page faults for B, we would certainly be able to cut down on the unnecessary page faults on A.  Thus A would produce only 4 page faults instead of the approximately 200 found above.  This policy differs from the optimal one in that it cannot save the 18 page faults in the next iteration of j, but it does avoid the unnecessary faults on A.
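
      The victim-selection rules discussed here can be summarized by a small C sketch; the per-frame timestamps are illustrative bookkeeping, not part of the problem statement.

      #include <stddef.h>

      /* Per-frame bookkeeping: when the page was loaded and last referenced. */
      struct frame { unsigned long loaded_at, last_used; };

      size_t fifo_victim(struct frame *f, size_t n) {   /* oldest load time    */
          size_t v = 0;
          for (size_t i = 1; i < n; i++)
              if (f[i].loaded_at < f[v].loaded_at) v = i;
          return v;
      }

      size_t lifo_victim(struct frame *f, size_t n) {   /* youngest load time  */
          size_t v = 0;
          for (size_t i = 1; i < n; i++)
              if (f[i].loaded_at > f[v].loaded_at) v = i;
          return v;
      }

      size_t lru_victim(struct frame *f, size_t n) {    /* least recently used */
          size_t v = 0;
          for (size_t i = 1; i < n; i++)
              if (f[i].last_used < f[v].last_used) v = i;
          return v;
      }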

 

 

Q5)

      Internal fragmentation is said to occur when a portion of memory already allocated to a process is not used in full by it, thereby preventing other processes from utilizing it.  Examples of internal fragmentation can be found in fixed-size segments, paging systems, and the buddy system.

 

      External fragmentation occurs when the allocated portions of memory leave small fragments of unallocated memory, each of which is too small to be useful to any process.  Thus processes are not able to use the available memory even though the total amount of available memory may exceed the needs of a process.  External fragmentation occurs when using variable-size segments without compaction.
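
      As a small worked example of internal fragmentation (the 4096-byte page size and 13000-byte request are assumptions for illustration only):

      #include <stdio.h>

      #define PAGE_SIZE 4096

      int main(void) {
          unsigned request = 13000;                                 /* bytes needed  */
          unsigned pages   = (request + PAGE_SIZE - 1) / PAGE_SIZE; /* round up      */
          unsigned wasted  = pages * PAGE_SIZE - request;           /* internal frag */
          printf("%u pages allocated, %u bytes wasted\n", pages, wasted);
          /* prints: 4 pages allocated, 3384 bytes wasted */
          return 0;
      }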