Name: ________________________________ Login: _____________

Comp111 Midterm Exam
Nov 15, 2017 - open books and notes

No electronic devices of any kind allowed, including cellphones, calculators, pagers, etc. Please turn these off before beginning the exam.

Please show all work on these pages. Feel free to use the backs of pages as appropriate. Make sure that your work for each problem is clearly marked. Please place your name on each sheet.

Please note that the amount of space after each question does not indicate length requirements for an answer! If an answer is short, write it succinctly.

These pages will be graded and returned electronically, like exercises.

  1. Consider the following Banker's Algorithm tableau:
             Requested       Granted        Deficit
             R1 R2 R3        R1 R2 R3       R1 R2 R3
        P1    1  1  0    -    0  0  0   =    1  1  0
        P2    2  2  0    -    2  1  0   =    0  1  0
        P3    0  0  2    -    0  0  0   =    0  0  2

        Total available
             R1 R2 R3
              2  2  2
    1. (10 points) Prove that this situation is safe by giving a schedule for completing all processes.

      Answer:
      • First P3 gets its resources and completes. Note that P3's unmet deficit (two units of R3) can be satisfied immediately from what is available.
      • Then P2 gets its resources and completes. Note that either P2 or P3 can go first.
      • Once P2 and P3 have completed, P1 can get the one unit each of R1 and R2 that it needs.
      Thus the order is P3, P2, P1 or P2, P3, P1.

      Scoring: +10 if acceptable, +0 if not.
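      The schedule above can also be checked mechanically. Below is a minimal C sketch of the safety test with the tableau's deficits and grants hard-coded; the free vector (0, 1, 2) is the total (2, 2, 2) minus what is currently granted. The name find_safe_order is ours, not a standard API, and the greedy scan happens to find P2, P1, P3, another valid schedule: any completion order it finds is safe.

```c
#define NPROC 3
#define NRES  3

/* Deficit = Requested - Granted, copied from the tableau (P1, P2, P3). */
static int deficit[NPROC][NRES] = { {1,1,0}, {0,1,0}, {0,0,2} };
/* Resources each process currently holds. */
static int granted[NPROC][NRES] = { {0,0,0}, {2,1,0}, {0,0,0} };

/* Fills order[] with a safe completion schedule (1-based process numbers)
   and returns 1, or returns 0 if no such schedule exists. */
int find_safe_order(int avail[NRES], int order[NPROC]) {
    int done[NPROC] = {0};
    for (int filled = 0; filled < NPROC; filled++) {
        int progress = 0;
        for (int p = 0; p < NPROC && !progress; p++) {
            if (done[p]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)
                if (deficit[p][r] > avail[r]) ok = 0;
            if (ok) {                            /* p can run to completion, */
                for (int r = 0; r < NRES; r++)   /* releasing all it holds   */
                    avail[r] += granted[p][r];
                done[p] = 1;
                order[filled] = p + 1;
                progress = 1;
            }
        }
        if (!progress) return 0;    /* nobody can finish: unsafe */
    }
    return 1;
}
```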

    2. (10 points) Suppose that P3 requests one more unit of R2. Is the resulting situation safe? Why or why not?

      Answer: This is still safe. P3's deficit becomes one unit of R2 plus two units of R3, which can still be satisfied immediately, so the schedule remains the same.

      Scoring: +10 if acceptable, +0 if not.

    3. (10 points) What is unusual about this tableau -- compared to the others you have seen in class -- and what did processes have to do to cause this situation?

      Answer: To get into this state, P1 had to request 1 unit of R1 and 1 unit of R2 at the same time, in one atomic operation. If P1 had requested them one at a time, it would have blocked until the first request was granted. A row with two different resources requested but not granted can only arise from a single combined request for both.

      Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.

  2. Consider the following unusual implementation of bounded buffer:
     
    #include <semaphore.h>

    #define SIZE 50
    int begin=0; int end=0; char queue[SIZE]; 
    sem_t notempty;  // initialized to 0 via sem_init(&notempty, 0, 0)
    sem_t notfull;   // initialized to SIZE-1 via sem_init(&notfull, 0, SIZE-1)
    void enqueue(char c) {
        sem_wait(&notfull); 
        queue[end]=c; end=(end+1)%SIZE;
        sem_post(&notempty); 
    }
    char dequeue() { 
        sem_wait(&notempty);
        char out = queue[begin]; begin=(begin+1)%SIZE; 
        sem_post(&notfull); 
        return out; 
    }
    
    1. (10 points) If enqueue and dequeue are each called by unique "producer" and "consumer" threads, does this code contain a damaging race condition? If so, what can happen and what would you do to eliminate the race condition?

      Answer: This is not a problem. With only one thread calling enqueue and only one calling dequeue, the two functions write disjoint shared variables: enqueue modifies only end, dequeue modifies only begin, and the count of items is mediated safely by the semaphores. There is no critical section to protect.

      Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.
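      The single-producer/single-consumer claim can be exercised directly. The sketch below (assuming POSIX threads and unnamed semaphores; run_demo and the thread-body names are ours) reproduces the buffer above, pushes 40 characters through it between two threads, and checks FIFO delivery.

```c
#include <pthread.h>
#include <semaphore.h>

#define SIZE 50
#define N    40                       /* number of items to transfer */
static int begin = 0, end = 0;
static char queue[SIZE];
static sem_t notempty, notfull;

static void enqueue(char c) {
    sem_wait(&notfull);
    queue[end] = c; end = (end + 1) % SIZE;
    sem_post(&notempty);
}
static char dequeue(void) {
    sem_wait(&notempty);
    char out = queue[begin]; begin = (begin + 1) % SIZE;
    sem_post(&notfull);
    return out;
}

static char received[N];
static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) enqueue('a' + i % 26);
    return NULL;
}
static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) received[i] = dequeue();
    return NULL;
}

/* Returns 1 if every item arrived, in FIFO order. */
int run_demo(void) {
    sem_init(&notempty, 0, 0);
    sem_init(&notfull, 0, SIZE - 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    for (int i = 0; i < N; i++)
        if (received[i] != 'a' + i % 26) return 0;
    return 1;
}
```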

    2. (10 points) If enqueue is instead called by multiple producer threads, does this code contain a damaging race condition? If so, what can happen, and what would you do to eliminate the race condition?

      Answer: This introduces a potentially damaging race condition between the enqueues. Suppose two threads pass sem_wait(&notfull) at about the same time. Both can then read the same value of end, so both write their character into the same slot: one character is overwritten and lost, yet notempty is still posted twice, so a consumer can later read a slot that was never properly filled.

      To fix this, protect the second line of enqueue (the store into queue[end] and the index update) with a mutex lock.

      Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.
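      One way to write the fix, assuming POSIX threads: a mutex (here called endlock, our name) serializes claiming a slot and advancing the index inside enqueue. With multiple consumers, dequeue would need the same treatment around begin.

```c
#include <pthread.h>
#include <semaphore.h>

#define SIZE 50
int begin = 0; int end = 0; char queue[SIZE];
sem_t notempty;                     /* initialized to 0 */
sem_t notfull;                      /* initialized to SIZE-1 */
pthread_mutex_t endlock = PTHREAD_MUTEX_INITIALIZER;

void enqueue(char c) {
    sem_wait(&notfull);
    pthread_mutex_lock(&endlock);   /* one producer at a time claims a */
    queue[end] = c;                 /* slot and advances the index     */
    end = (end + 1) % SIZE;
    pthread_mutex_unlock(&endlock);
    sem_post(&notempty);
}
```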


  3. Consider the following two fragments of code:
    process 1        process 2
    lock(A)          lock(A)
    lock(B)          lock(B)
    lock(C)          lock(D)
    lock(D)          lock(E)
    lock(E)          lock(F)
    lock(F)          lock(C)
    lock(G)          lock(G)
    1. (10 points) Can concurrent executions of process 1 and process 2 deadlock? Why or why not?

      Answer: It can't deadlock. Both processes request lock A first, so whichever one acquires A proceeds while the other blocks before holding any locks at all. The loser must wait, holding nothing, until the winner finishes and unlocks.

      Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.

    2. (10 points) Now remove the first two lock statements from each fragment and answer the first question again. Can the new schedule deadlock? Why or why not?

      Answer: Consider the following schedule:
      process 1                process 2
      lock(C)
                               lock(D)
      lock(D)   (blocks)
                               lock(E)
                               lock(F)
                               lock(C)   (blocks)
      Thus there is a schedule that causes deadlock: process 1 holds C and waits for D, while process 2 holds D, E, and F and waits for C.

      Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.
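      A standard way to rule out such deadlocks is to impose one global lock order and have every process acquire locks only in that order. The tiny checker below (follows_global_order is our name; the order here is simply alphabetical) tests a sufficient condition, not a necessary one: as the first part shows, the original fragments cannot deadlock even though process 2 violates the order, because the common lock(A) serializes them.

```c
#include <string.h>

/* Returns 1 if the lock names in seq are acquired in strictly increasing
   (alphabetical) order. If every process passes this test against the
   same global order, no deadlock over these locks is possible. */
int follows_global_order(const char *seq) {
    size_t n = strlen(seq);
    for (size_t i = 1; i < n; i++)
        if (seq[i] <= seq[i - 1])
            return 0;               /* out-of-order acquisition */
    return 1;
}
```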

  4. (10 points) One program mallocs a million structs in the heap and then exits without freeing them. Is this a problem? Why?

    Answer: This is not a problem. When the program exits, all memory is freed. There is no reason to free something if the program will exit.

    Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.

  5. (10 points) Another program mallocs a million structs, frees them, and then keeps running. Is this a problem? Why?

    Answer: This is a problem. The million structs are "free" as far as the malloc library is concerned, but the memory remains allocated to the process: it is not returned to the operating system, so the memory map stays large until the program exits. Worse, the freed space may never be reusable if the freed chunks are smaller than the program's future requests (fragmentation).

    Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.

  6. (10 points) Since an IA64 processor doesn't represent a segment table in 64-bit mode, why does the IA64 version of linux still represent segments of processes inside the kernel?

    Answer: The representation of segments is necessary to keep track of which pages are read-only and which are read-write. Even though the processor organizes this memory by page, it is more efficient in the operating system to organize it by segment. This reduces the space needed for descriptors.

    Scoring: +10 if acceptable, +8 if minor problems, +5 if major problems, +3 if any value.

  7. (10 points extra credit: advanced) The "docker" utility in linux allows one to run a whole instance of linux as a user process. How does docker handle memory mapping, given that it is literally running its "processes" inside one actual linux process?

    Answer: What one needs to understand first is that Docker is not a full virtualization system. It is a clever way of running a kernel as a regular process, which is much less costly than true virtualization. Docker exploits everything a regular process can do to best advantage.

    Docker actually does not do any memory management itself. It relies upon the memory management already implemented for regular processes. Its internal "processes" are mapped to segments in its memory map. The regular OS memory management handles page faults!

    Scoring: +10 if acceptable, +5 if partially correct.