CS414 Prelim 1: In class, no notes. 10:15 to 11:25.
Four problems, 25 points each.
A sample crossword puzzle:
Across:
  1. Witch’s pet
  2. Blueberry _________
Down:
  1. Newton’s inspiration
a) Obtain mutual exclusion, using a semaphore, prior to writing (or erasing) each letter. Release the mutex after doing so. One semaphore per square on the puzzle.
This solution won’t work: if two people try to write the same word at the same time, their letters can intermingle, creating chaos. For example, if you want to write “stone” and I want to write “apple” in 1-down, we could end up with a mixture of letters from the two words.
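To make scheme (a) concrete, here is a minimal C sketch, assuming POSIX semaphores (each initialized to 1) and a fixed-size grid; the names grid, cell_sem and write_word are illustrative, not from the exam. The comment marks the point at which two writers can interleave their letters.

    /* Sketch of scheme (a): one semaphore per square, held only while a
     * single letter is written.  All identifiers here are hypothetical. */
    #include <semaphore.h>

    #define ROWS 15
    #define COLS 15

    char  grid[ROWS][COLS];
    sem_t cell_sem[ROWS][COLS];        /* one per square, initialized to 1 elsewhere */

    /* Write one word starting at (r,c); dr/dc are (0,1) for across, (1,0) for down. */
    void write_word(int r, int c, int dr, int dc, const char *word)
    {
        for (int i = 0; word[i] != '\0'; i++) {
            int rr = r + i * dr, cc = c + i * dc;
            sem_wait(&cell_sem[rr][cc]);   /* lock just this square        */
            grid[rr][cc] = word[i];        /* write a single letter        */
            sem_post(&cell_sem[rr][cc]);   /* release before the next one  */
            /* Another writer can now claim the next square first, so two
             * words aimed at the same slot can end up with mixed letters. */
        }
    }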
b) Same as (a), but with one semaphore for the entire puzzle.
This has exactly the same problem as (a), and for exactly the same reason: the semaphore is still released after each individual letter, so two words can still interleave.
c) Obtain mutual exclusion prior to writing (or erasing) each word, using a single semaphore associated with the first letter of that word. Release the mutex after writing the word. One semaphore per number on the puzzle. In our example, to write “apple” in 1-down, you need the semaphore for the cell containing the “a” (the cell numbered 1).
Working a word at a time helps – we won’t conflict now for the word in 1-down. But consider 1-down and 2-across in our example: I might write “stone” while you write “pie” and again we’ll get some unpredictable outcome. In general, we need to guarantee that a word is written while holding mutual exclusion relative to all other words that overlap the same spaces. So this solution is still flawed.
d) Same as (c), but with a separate semaphore for each across-word and one for each down-word, even when they start in the same cell. If a single cell is the start of both a down-word and an across-word, it has two semaphores.
This is flawed for the same reason that (c) is flawed.
e) Same as (c), but before writing or erasing a word, you obtain mutual exclusion for every word your word will overlap. For example, if 9-down crosses 3-across and 6-across, you’ll obtain mutual exclusion using the semaphores associated with all three words.
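A minimal C sketch of scheme (e), assuming POSIX semaphores and one semaphore per numbered word as in (c); the identifiers (word_sem, write_word_atomically) are hypothetical. Acquiring the semaphores in a fixed global order (ascending word id here) is an added assumption, intended to keep two writers from deadlocking while each waits for a word the other already holds.

    /* Sketch of scheme (e): take the semaphore of the word being written and
     * of every word it crosses, write the whole word, then release them.
     * All identifiers and sizes here are hypothetical. */
    #include <semaphore.h>
    #include <stdlib.h>

    #define MAX_WORDS 64

    sem_t word_sem[MAX_WORDS];     /* one per numbered word, initialized to 1 elsewhere */

    static int cmp_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* ids[0..n-1]: the word being written plus every word it overlaps. */
    void write_word_atomically(int ids[], int n, void (*do_write)(void))
    {
        qsort(ids, n, sizeof ids[0], cmp_int);   /* fixed acquisition order */
        for (int i = 0; i < n; i++)
            sem_wait(&word_sem[ids[i]]);
        do_write();                              /* the whole word goes in under the locks */
        for (int i = n - 1; i >= 0; i--)
            sem_post(&word_sem[ids[i]]);
    }

In the 9-down example, ids would hold the numbers of 9-down, 3-across and 6-across.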
2. Bakery Algorithm:
a) Process 3 is still in the critical section, yet process 0 doesn’t wait when it executes the while loop “while (number[3] and (number[3],3) < (number[0],0)) do no-op”.
In this case, number[3] will have remained set to 7, so number[0] will be set to at least 8. The while condition will therefore evaluate to true, and process 0 will wait. Thus this scenario cannot arise.
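For reference, a minimal C sketch of the standard Bakery entry and exit sections that the quoted loop comes from, with the pair test (number[j], j) < (number[i], i) written out explicitly; N and the variable names follow the usual textbook presentation, and memory-ordering issues are ignored.

    /* Sketch of the standard Bakery algorithm for N processes; the quoted
     * while loop is the inner test below with i = 0 and j = 3. */
    #define N 8

    volatile int choosing[N];
    volatile int number[N];

    void bakery_enter(int i)
    {
        choosing[i] = 1;
        int max = 0;                          /* number[i] = max(number[0..N-1]) + 1 */
        for (int j = 0; j < N; j++)
            if (number[j] > max) max = number[j];
        number[i] = max + 1;
        choosing[i] = 0;

        for (int j = 0; j < N; j++) {
            while (choosing[j])               /* wait until j is done choosing */
                ;
            while (number[j] != 0 &&          /* (number[j], j) < (number[i], i) */
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)))
                ;
        }
    }

    void bakery_exit(int i) { number[i] = 0; }

In the scenario of (a), number[3] stays at 7 while process 3 is inside, so process 0 chooses number[0] of at least 8 and keeps spinning on the j = 3 test until process 3 resets number[3] to 0 on exit.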
b) Process 3 is able to leave and re-enter the critical section three times before process 0 is able to enter it even once.
This situation can definitely arise. Nothing tells us that process 3 will step aside and let process 0 run right now – all we can assume is finite progress, not immediate progress. So the scheduler could easily allow process 3 to run for a long time (for instance, while process 0 is preempted before it has even chosen its number), letting process 3 leave and re-enter many times before process 0 finally gets in.
c) Process 0 sets number[0] to 5.
This could occur if process 3 leaves the critical section, so that all number[i] are now zero. Next, suppose that four processes try to enter one by one, the 4th of which sets its number[] entry to 4. If process 0 finally gets to execute at that point, it will pick number[0] equal to 5.
d) Process 0 sets number[0] to 7.
The same scenario described above can also lead process 0 to pick the value 7, if 6 processes enter sequentially before it manages to set its own number to max(…) + 1.
3. Consider the process-resource wait graph shown below.
a) Give a reduction sequence whereby we can show that the system is not currently deadlocked.
b) Could the next resource allocation cause the system to become deadlocked? Justify your answer – no credit for saying “yes” or “no” without explaining.
No. The next allocation must be either one unit of R1 to P1 or one unit of R1 to P5, because R2 is currently fully allocated. In either case the system remains free of deadlock: if we draw the new resource graph for each case, the same reduction sequence just given will still fully reduce it. In general, a deadlock-free resource-wait graph cannot become deadlocked by an allocation alone; an additional request must also occur.
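As an illustration of the reduction test this answer relies on, here is a generic C sketch: repeatedly find a process whose outstanding requests can be met from the currently available units, mark it reducible, and return the units it holds; the graph is deadlock-free exactly when every process can be reduced. The sizes and matrices (NPROC, NRES, alloc, req, avail) are placeholders, not the exam’s actual graph.

    /* Generic reduction check for a process-resource wait graph.
     * Returns 1 if the graph reduces completely (no deadlock), 0 otherwise.
     * All sizes and names here are illustrative placeholders. */
    #define NPROC 5
    #define NRES  2

    int fully_reducible(int alloc[NPROC][NRES],   /* units each process holds     */
                        int req[NPROC][NRES],     /* units each process waits for */
                        int avail[NRES])          /* currently free units         */
    {
        int done[NPROC] = {0};
        int finished = 0;
        while (finished < NPROC) {
            int progress = 0;
            for (int p = 0; p < NPROC; p++) {
                if (done[p]) continue;
                int ok = 1;
                for (int r = 0; r < NRES; r++)
                    if (req[p][r] > avail[r]) { ok = 0; break; }
                if (ok) {                          /* reduce the graph by process p */
                    for (int r = 0; r < NRES; r++)
                        avail[r] += alloc[p][r];   /* p can finish and release everything */
                    done[p] = 1;
                    finished++;
                    progress = 1;
                }
            }
            if (!progress)
                return 0;                          /* the remaining processes are deadlocked */
        }
        return 1;
    }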
4. Suppose that you are solving a computational application in which three kinds of operations need to be done on a large matrix, and each takes time t units. You design your application with 3 lightweight tasks (processes) and implement a monitor style of synchronization for various data structures that are shared between the tasks. For simplicity, let’s also assume that the tasks include all the computing needed to solve the problem – there is no code to execute before the three tasks start, and once they finish, the program is done.
Now, suppose that you run this program on a machine that has 4 identical CPUs and measure the running time for a series of experiments in which you vary the number of CPUs assigned to your program by the O/S.
a) Would you expect the program to run twice as fast with 2 CPUs and three times as fast with 3 CPUs? Explain.
Not really. Ideal speedups tend to ignore scheduling and synchronization delays, which can be considerable. At best, we would expect an n-fold speedup with n <= 3 CPUs, but the actual number might be much worse.
c) What would you expect to see if you tell the O/S to assign 4 CPUs to your program?
Either it won’t help, or it might even hurt, slowing things down a little. The case where things slow down is a bit obscure: the extra CPU could result in lower-quality hardware caching, since the three active tasks “fit” 3 CPUs very well, and you get the best caching behavior with 3 CPUs and each task kept on its own CPU. If we add a fourth CPU and use it, tasks migrate between CPUs, forcing all sorts of cache flushing and refilling, and that could slow things down quite a bit.