## Comp111 -- Midterm Exam Review 2 Answers

Midterm Mon Nov 6, 2006. Open book and notes.

1. Consider the resource allocation diagram:

1. (15 points) Is this situation deadlocked? Why or why not?
Answer:
No. A deadlock requires that needed resources be unavailable because they are already held. In this case, enough resources remain available that at least one process can acquire what it needs and complete, releasing its holdings for the others.
2. (15 points) Is this situation "safe" according to Dijkstra's banker's algorithm (is there a way for all processes to complete)? Why or why not?
Answer:
Translating this into matrix form:
```
        request     allocated
        R1  R2      R1  R2
P1       1   0       0   0
P2       0   1       1   0
P3       1   0       0   1
available:           1   1
```
and the total number of units of each of R1 and R2 is two. Choosing P1 (whose request can be granted from the available pool) and allowing it to complete, we obtain
```
        request     allocated
        R1  R2      R1  R2
P1       0   0       0   0
P2       0   1       1   0
P3       1   0       0   1
available:           1   1
```
Choosing P2 and allowing it to complete:
```
        request     allocated
        R1  R2      R1  R2
P1       0   0       0   0
P2       0   0       0   0
P3       1   0       0   1
available:           2   1
```
It is clear that P3 can complete, so the situation is safe.
3. (10 points) Does safety in the bankers' algorithm eliminate all possible kinds of deadlock? Why or why not?
Answer:
Deadlock occurs when two or more processes are waiting for one another in some way. Regardless of the causes of these waits, they are in essence "synchronization events" that can be modelled as shared resources. We saw in the homework that message-passing actions can be thought of as contention for a message channel or buffer, while semaphores are in themselves resources (with one unit of allocation). So in general, the banker's algorithm can stop all deadlock provided that all forms of contention are modelled as resources.
4. (10 points) Does lack of safety in the bankers' algorithm guarantee deadlock? Why or why not?
Answer:
No. An unsafe state is one that can potentially lead to deadlock, not one that will deadlock. For example, there is no accounting for the situation in which a process dies suddenly from causes unrelated to the deadlock, e.g., from signal handling. Sudden death of a process in an unsafe state can lead to a safe state.
2. In the pseudo-code on page 231 of Stallings:
```
monitor boundedbuffer;
char buffer[N];
int nextin = 0, nextout = 0, count = 0;
cond notfull, notempty;

void append(char x) {
    if (count == N) cwait(notfull);
    buffer[nextin] = x; nextin = (nextin + 1) % N; count++;
    csignal(notempty);
}

void take(char y) {   /* y is an out-parameter in Stallings' pseudo-code */
    if (count == 0) cwait(notempty);
    y = buffer[nextout]; nextout = (nextout + 1) % N; count--;
    csignal(notfull);
}

void producer() {
    char x;
    while (TRUE) { produce(x); append(x); }
}

void consumer() {
    char y;
    while (TRUE) { take(y); consume(y); }
}
```
1. (10 points) What is the difference between implementing this as a monitor and implementing it with semaphores? How would the code change in implementing it with semaphores, and why?
Answer:
The main difference is that a semaphore has state (its counter) while a monitor condition variable does not: a semaphore signal is remembered even when no process is waiting, so the programmer must manage that stored count explicitly, whereas a monitor's csignal is simply lost if no process is waiting. To implement the bounded buffer with semaphores, one replaces the condition variables with counting semaphores whose counts do the bookkeeping: a semaphore of empty slots initialized to N, a semaphore of full slots initialized to 0, and a binary semaphore initialized to 1 for the mutual exclusion that the monitor provided implicitly. The `if (count == N)` and `if (count == 0)` tests disappear, because waiting on the counting semaphores enforces them.
2. (10 points) What is the precise effect of eliminating both "if" statements from "append" and "take" so that the cwaits are called unconditionally?
Answer:
Monitor signals are not saved: if csignal occurs when no process is waiting on the condition, it has no effect. With the `if` tests removed, the first producer does cwait(notfull) even though the buffer has room, and the first consumer does cwait(notempty); neither ever reaches its csignal, so both wait forever and the system deadlocks.
3. (10 points) In the book, semaphore code often requires groups of semaphores to control one resource. Does making a single semop call to manipulate a group of semaphores execute faster than making separate semop calls for each semaphore? Why or why not?
Answer:
The key is that each system call requires a switch to kernel mode and a switch back. By grouping several semaphore operations into one semop call, one pays that mode-switch overhead once instead of once per semaphore, so grouped semop operations are intrinsically faster than the equivalent sequence of separate calls. (A grouped call is also applied atomically, which separate calls cannot guarantee.)
3. Consider the lock priority algorithm for deadlock prevention.
1. (10 points) Consider process A that locks W, X, and Y while process B locks X and Z. Show that deadlock cannot occur by enumerating the possible execution cases with time/space diagrams; show the order in which each process gets the lock on each resource.
Answer:
X is the only resource requested by both processes, so it is the only resource in contention and deadlock is impossible. There are only two cases: A locks X before B does, or B locks X before A; either way, the first locker eventually releases X and the other proceeds. The remaining locks (W, Y, Z) are not shared and thus cannot create contention.
2. (10 points) Why don't we simply force all processes to lock resources in a fixed and predetermined order?
Answer:
Processes are not written by a single individual but by a large community of people, and the natural lock order depends on each process's algorithm. It is simply too much work to re-engineer all processes to conform to a fixed lock order and -- worse -- the efficiency of many algorithms depends upon a particular order of locking resources.