Name: ________________________________ Login: _____________

Comp111 Midterm Exam
Nov 10, 2015 - open books and notes

No electronic devices of any kind allowed, including cellphones, calculators, pagers, etc. Please turn these off before beginning the exam.

Please show all work on these pages. Feel free to use the backs of pages as appropriate. Make sure that your work for each problem is clearly marked. Please place your name on each sheet.

Please note that the amount of space after each question does not indicate length requirements for an answer! If an answer is short, write it succinctly.

These pages will be graded and returned electronically, like exercises.

  1. Consider the following Banker's Algorithm tableau:

         Requested    Granted    Deficit
         R1  R2       R1  R2     R1  R2
     P1   4   2   -    0   0  =   4   2
     P2   2   1        0   1      2   0
     P3   4   1        4   1      0   0

     Total available:
         R1  R2
          5   2
    1. (10 points) Suppose that P3 requests one more unit of R1. Is the resulting situation safe? Why or why not?

      Answer: Yes. The resulting tableau would be:

             Requested    Granted    Deficit
             R1  R2       R1  R2     R1  R2
         P1   4   2   -    0   0  =   4   2
         P2   2   1        0   1      2   0
         P3   5   1        4   1      1   0

         Total available:
             R1  R2
              5   2
      This still has a completion schedule P3, P2, P1. First P3 receives one more unit of R1, resulting in the tableau:

             Requested    Granted    Deficit
             R1  R2       R1  R2     R1  R2
         P1   4   2   -    0   0  =   4   2
         P2   2   1        0   1      2   0
         P3   5   1        5   1      0   0

         Total available:
             R1  R2
              5   2
      Thus P3 completes, resulting in the tableau:

             Requested    Granted    Deficit
             R1  R2       R1  R2     R1  R2
         P1   4   2   -    0   0  =   4   2
         P2   2   1        0   1      2   0

         Total available:
             R1  R2
              5   2
      At this point P1 or P2 can complete, followed by P2 or P1. Thus the tableau is safe.

      Scoring: +10 if correct, +8 if didn't give the resulting completion schedule. +0 if claimed that situation was unsafe (or equivalent statement).

    2. (10 points) Is it possible for P2 to request one more unit of R1 in this tableau situation? Why or why not?

      Answer: No. Because Requested != Granted for P2, P2 is currently blocked. So it can't request anything (yet).

      Scoring: +10 if correct, +5 if did not realize that P2 is blocked and otherwise answered correctly. +0 if answer not sensible.

      About 1/2 of the class missed this problem even though I went over an almost identical situation in the review session!

  2. (10 points) When malloc is implemented using the buddy system described in class, what does the following code print? Why?
     
    int *p = (int *) malloc(sizeof(int)); *p=1;
    int *q = (int *) malloc(sizeof(int)); *q=2;
    int *r = (int *) malloc(sizeof(int)); *r=3;
    free(p); free(q); free(r); 
    p = (int *) malloc(sizeof(int)); 
    q = (int *) malloc(sizeof(int)); 
    r = (int *) malloc(sizeof(int)); 
    printf("p=%d, q=%d, r=%d\n", *p, *q, *r); 
    


    Answer: It prints
     
    p=3, q=2, r=1
    
    because the free call links things at the head of the free list. All three of these go into the free list, so that their order is reversed. Then they are reallocated in reverse order.

    Scoring: +10 if correct, +5 if reversed. +3 if major interpretive issues. +0 if not sensible.

    A large number of people did not realize that an O(1) linked list is LIFO (last-in, first-out), not FIFO (first-in, first-out). Link at head/remove from head results in both O(1) and stack-like behavior.

  3. (10 points) malloc abandons the buddy system for memory blocks greater than one page in size, and keeps all of the larger blocks in one linked list. Why does it stop using separate linked lists for each size?

    Answer: There are two main reasons:
    1. The time/space tradeoff fails to be profitable for large memory sizes, and
    2. Memory use -- like disk files -- follows the Pareto distribution. Most allocations are small, and very few allocations are large.
    In non-mathematical terms, allocation of large memory blocks is relatively unlikely, so that the runtime advantage of the buddy system is simply not worth the memory waste necessary to implement it.


    Scoring: +10 if correct, +7 if minor issues with answer (e.g., invalid assumptions), +4 if major mistakes were made, +0 if not sensible.

  4. (10 points) In assignment 3, I claim that the code fragment:
     
    sem_init(&anthill_semaphore, 0, 3); // # of spaces for aardvarks on hill "hill" 
    sem_init(&ants_semaphore, 0, 100);  // # of ants inside hill "hill" 
    // ... and then, later ... 
    if (sem_trywait(&anthill_semaphore)==0) {
      if (sem_trywait(&ants_semaphore)==0) { 
        slurp(hill, aname); 
      } 
      sem_post(&anthill_semaphore); 
    } 
    
    works properly, but that the code fragment
     
    int anthill_count=3; // # of spaces for aardvarks on hill "hill"
    int ant_count=100;   // # of ants inside hill "hill" 
    // ... and then, later ...
    if (anthill_count > 0) { 
      --anthill_count; 
      if (ant_count > 0) { 
        --ant_count;
        slurp(hill, aname); 
      } 
      ++anthill_count; 
    } 
    
    does not work properly. Give an example in which the second code fragment does not work properly, and then explain how the first code fragment prevents the situation you describe.

    Answer: A problem arises when the second fragment is executed in parallel by two aardvarks. Suppose that anthill_count is 1. Then consider the schedule:

        Aardvark A:  if (anthill_count > 0) {
        Aardvark B:  if (anthill_count > 0) {
        Aardvark A:      --anthill_count;
        Aardvark B:      --anthill_count;

    This schedule is possible if the first aardvark is made unrunnable just after its test, which is actually reasonably likely. In this case, both aardvarks conclude that there is space remaining even though only one space exists, anthill_count drops to -1, and one of them sulks.

    The first solution forces the race to be won by one of the two aardvarks because of the atomicity of sem_trywait. The other one loses the race, and the number of positions never goes below 0.

    It is also possible in the second example for two aardvarks to go after the same lone ant in like manner.

    Scoring: +10 if perfect, +8 if minor errors, +5 if insufficient detail, +3 if any value.

  5. (10 points) There are many cases in which my examples do not clean up after themselves before process exit like the examples of programming you might have seen in COMP15. Give a few examples of this, and in each case, explain why cleanup is unnecessary just before process exit.

    Answer: The exit of a process causes several kinds of automatic cleanup.

    We learned in assignment 3 that some things are not cleaned up after process exit, including named and unnamed semaphores, shared memory segments, etc. Since I made the mistake of reporting that unnamed semaphores were cleaned up, I did not deduct for listing them, but did not credit them either.

    In general, anything that is no longer accessible to other processes is cleaned up, but many things that are accessible to other processes persist after process exit. So, though the exiting process's own file descriptors are closed, descriptors for the same file held by other processes remain open, and the kernel's open-file state persists. A file is only closed when the last descriptor open in any process is closed. This includes dup'd descriptors.

    Scoring: no one described more than three of the above six examples. So the score was +10 for three valid examples, +8 for two valid examples, +6 for one valid example, +4 for one valid example plus errors or invalid examples, +2 for any redeeming value.

  Consider the following code to implement a simple pager like more(1):

     
    // ... miscellaneous #include's here ...
    #define SIZE 256
    #define READ 0
    #define WRITE 1
    #define TRUE 1
    #define FALSE 0 
    int main(void)
    {
        int lines = 0; int p[2]; char buf[SIZE];
        pipe(p);
        if (fork()) {
            FILE *reader = fdopen(p[READ], "r");
            close(p[WRITE]); // here
            while (!feof(reader)) {
                fgets(buf, SIZE, reader); printf("%s", buf); lines++;
                if (lines == 20) {
                    printf("more?\n"); fgets(buf, SIZE, stdin); lines = 0;
                }
            }
        } else {
            close(fileno(stdout)); dup(p[WRITE]); close(p[WRITE]); close(p[READ]);
            execl("/bin/cat", "/bin/cat", "some_big_text_file", NULL);
        }
        return 0;
    }
    
    1. (10 points) How much of "some_big_text_file" is read into the pipe at a time? Why?

      Answer: Since the cat command is talking to a pipe, it block-buffers its output, so it writes one block of the file at a time. Depending upon the OS, this is between 2048 and 8192 bytes.

      I accepted any reasonable power of two, but you had to identify it as a block size, not the pipe size.

      Almost everyone answered 256. This is not what is read into the pipe by cat, but what is read out by the parent. Some people answered 20*256, which has nothing to do with pipe semantics.

      Scoring: +10 if correct, +5 if close.

    2. (10 points; advanced) If you comment out the line marked "// here", the parent hangs forever even after the child "cat" exits. Why?

      Answer: This is incredibly subtle. The parent loop exits when feof detects end-of-file, and end-of-file on a pipe occurs only when every write descriptor for the pipe is closed. The child's write descriptor closes automatically when the child exits, but if the parent leaves its own copy of the write descriptor open, the pipe is still considered open for writing, and the parent's fgets blocks forever.

      I actually discovered this behavior when crafting the problem. I couldn't resist asking you about it.

      Scoring: +10 if correct, +5 if close.

  6. (10 points) Give an example of a process or set of processes for which the Least-Recently-Used algorithm for memory frame recovery performs poorly, and explain why.

    Answer: Recall from lecture that the Least-Recently-Used flush strategy works well for linear code but rather poorly for loops spanning multiple frames. The problem is that LRU can conclude that a frame is unused just before the program loops back to it. Thus the counter-example is a program with a long loop spanning multiple frames: every time through the loop, the process must block and wait while the first frame of the loop is brought back in.

    A small number of students realized that loops containing subroutine calls perform even less well, because the subroutine consumes frames that are needed in the loop.

    Scoring: +10 if fine, +5 if relevant.

  7. (10 points) When a process "thrashes" due to swapping, due to unavailability of enough physical memory, is the time spent swapping counted toward process user time, process system time, or operating system time? Why?

    Answer: Operating system time. Remember that user time is the time spent by the process in user mode, and that is certainly not thrashing time. But system time is the time spent by the kernel executing system calls on behalf of the process, so that is not thrashing time, either. It is possible to observe an increase in system and user time due to thrashing, but this is due to timing artifacts and not because thrashing directly contributes to those times.

    A number of people answered "process system time", which is not true, because the act of swapping is not a system call.

    Scoring: +10 if correct, +5 if partly right or unsure with the correct answer not ruled out!

  8. (10 points extra credit: advanced) sbrk always zeros each allocated frame before re-allocating it as a new page. This is an apparent violation of the unwritten rule to do everything as quickly as possible. Why is the new frame not just passed to the process as is, without initialization?

    Answer: Not zeroing the re-allocated frame is an extreme security risk. Remember that the frame pool is shared by all processes, and that -- on average -- when you ask for a frame it is taken from another process's frames. If frames weren't zeroed, you could write a program to allocate frames deallocated by other processes and spy on the contents, thus spying on every running process!

    As a second (and somewhat less extreme) reason, not zeroing the frame leads to programming anomalies that are manifest only because of pre-existing frame contents, making debugging extremely difficult. For example, a program that fails to initialize a linked list element could either crash or not, depending upon the values it inherits from a reused frame. This would be a debugging nightmare. In debugging folklore, we refer to this as a "PoM problem": program behavior varies with the "Phase of the Moon," meaning that behavior varies in unpredictable ways depending upon the environment in which the program is executed, rather than upon its input. Since we can vary the input, but not the environment, such bugs can be very difficult to find. In essence, this violates the principle of isolation: processes should be unable to affect each other's behavior.

    Many people thought that this was done to clear the storage descriptors. Zeroing the whole page is a heavyweight way to accomplish that; one could simply zero the storage descriptors themselves. Since at page allocation there is only one storage descriptor for the whole page, that would be very fast.

    Scoring: +10 for first reason, +5 for anything similar to second reason.