Comp111 Midterm Exam Answers
Oct 30, 2013 - open books and notes

No electronic devices of any kind allowed, including cellphones, calculators, pagers, etc. Please turn these off and store them out of sight before beginning the exam.

Please show all work on these pages. Make sure that your work for each problem is clearly marked. These pages will be graded and returned electronically, like exercises. Feel free to use the backs of pages as needed.

  1. Consider the following resource allocation matrix:
    Requested             Granted               Deficit
         R1 R2 R3 R4           R1 R2 R3 R4           R1 R2 R3 R4
    P1    3  0  0  0   -  P1    3  0  0  0   =  P1    0  0  0  0
    P2    2  1  1  1      P2    0  1  1  1      P2    2  0  0  0
    P3    0  3  3  3      P3    0  0  0  0      P3    0  3  3  3

    Total Available
    R1 R2 R3 R4
     3  3  3  3

    Some people misunderstood the meaning of Available; the problem was graded leniently as a result.

    1. (10 points) Is this a safe situation? Why or why not?
      Answer: First P1 completes and releases its resources. Then P2 is granted P1's extra resources and completes. Finally P3 has all resources and completes. Thus there is a completion schedule and the situation is safe.
    2. (10 points) Suppose that P1 has a bug that causes it to go into an infinite loop. Does the Banker's Algorithm prevent deadlock in this case? Why?
      Answer: The Banker's Algorithm does not prevent deadlock unless all processes complete when granted their resources. In this case, P1 has the resources it requested but never completes. Its failure to complete blocks completion of P2, which in turn blocks completion of P3. Thus there is no completion schedule and the system as a whole deadlocks.
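      The completion-schedule reasoning above is exactly what the safety check computes mechanically. Below is a minimal sketch in C (my own illustration, not course code) that runs the check for the matrix in this problem; the array names and layout are assumptions of the sketch.

      #include <stdio.h>

      #define NPROC 3
      #define NRES  4

      /* Requested and Granted from the matrix above; Deficit = Requested - Granted. */
      int requested[NPROC][NRES] = { {3,0,0,0}, {2,1,1,1}, {0,3,3,3} };
      int granted  [NPROC][NRES] = { {3,0,0,0}, {0,1,1,1}, {0,0,0,0} };
      int total    [NRES]        = { 3, 3, 3, 3 };

      int main(void)
      {
          int avail[NRES], done[NPROC] = {0}, finished = 0, progress = 1;

          /* Available = Total minus everything currently granted. */
          for (int r = 0; r < NRES; r++) {
              avail[r] = total[r];
              for (int p = 0; p < NPROC; p++) avail[r] -= granted[p][r];
          }

          /* Repeatedly find a process whose remaining need (deficit) fits in
             Available, assume it completes, and release what it was granted. */
          while (progress) {
              progress = 0;
              for (int p = 0; p < NPROC; p++) {
                  int fits = 1;
                  if (done[p]) continue;
                  for (int r = 0; r < NRES; r++)
                      if (requested[p][r] - granted[p][r] > avail[r]) fits = 0;
                  if (fits) {
                      printf("P%d can complete\n", p + 1);
                      for (int r = 0; r < NRES; r++) avail[r] += granted[p][r];
                      done[p] = 1; finished++; progress = 1;
                  }
              }
          }
          puts(finished == NPROC ? "safe: a completion schedule exists"
                                 : "unsafe: no completion schedule");
          return 0;
      }

      For the numbers above it prints P1, P2, P3 in order and reports the situation safe, matching the answer to part 1.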
  2. Consider the following code:
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit res;
        res.rlim_cur = res.rlim_max = 70;   /* cap this user at 70 processes */
        setrlimit(RLIMIT_NPROC, &res);
        while (1) fork();                   /* fork until the limit is hit */
    }
    1. (10 points) What is the state of the machine while the fragment is running?
      Answer: The fork bomb stops forking when the current user has 70 processes in total. This is usually before the bomb itself has 70 processes, because processes the user is already running count against the limit. Once the limit is reached, all of the bomb's processes continue to run and keep issuing (failed) fork requests. They do not block and do not die, so they do not and cannot become zombies, either.

      Many people reported that the process and its children stop forking when there are 69 children. This is too many: the shell counts as one process against the limit, and a windowing session can account for 35 or so. The only thing one can say for certain is that the total number of processes the user owns does not exceed 70.
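      To illustrate the "failed fork requests" concretely, here is a bounded variant of the fragment (my own sketch, not the exam code): once the user's process count reaches the limit, fork() simply returns -1 with errno set to EAGAIN, and the caller neither blocks nor dies.

      #include <stdio.h>
      #include <errno.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/types.h>
      #include <sys/resource.h>

      int main(void)
      {
          struct rlimit res;
          res.rlim_cur = res.rlim_max = 70;
          setrlimit(RLIMIT_NPROC, &res);

          for (int i = 0; i < 200; i++) {       /* bounded, unlike the exam's while(1) */
              pid_t pid = fork();
              if (pid < 0) {
                  /* Nobody blocks and nobody dies; the call simply fails. */
                  fprintf(stderr, "fork %d failed: %s\n", i, strerror(errno)); /* EAGAIN */
              } else if (pid == 0) {
                  pause();                      /* children just sit, driving up the count */
              }
          }
          return 0;
      }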

    2. (10 points) If one kills all of these processes but one, exactly what happens?
      Answer: The fork bomb continues to run and resaturates the system in short order with 70 processes for this user.

      Contrary to what some people wrote, the killed processes do not all automatically become zombies. Only the dead children of the one surviving process become zombies, because that survivor never waits for them; which children those are depends upon where the survivor falls in the tree of processes. The rest of the tree does not become zombies: those parents are killed along with their children, the orphans are inherited by init, and init reaps them when they die.
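      A tiny sketch of the reparenting just described (my own example, not from the exam): a child whose parent has died is adopted by init (pid 1, or a subreaper on modern Linux), which reaps it when it exits, so it never lingers as a zombie.

      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main(void)
      {
          if (fork() == 0) {                /* child */
              sleep(1);                     /* give the parent time to exit */
              /* The orphaned child has been adopted by init (or a subreaper),
                 which will reap it when it exits. */
              printf("child's parent is now pid %d\n", (int)getppid());
              exit(0);
          }
          /* The parent exits immediately without calling wait(). */
          return 0;
      }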

    3. (10 points) In most shells, the kill command is a built-in function instead of being implemented as an executable program like, e.g., ps. What would happen if kill were an executable program and the owner of the above process tried to use the kill program to kill it?
      Answer: The problem is that running an executable kill requires creating a new process: the shell must fork and then exec the kill binary. In this case the parent shell never lowered its own process limit, so the fork succeeds and kill runs normally, provided no other limits are in effect that I did not mention. In the case of a runaway fork bomb that exhausts the process limit for the whole machine, however, one cannot create another process to run kill at all!

      Thus the answer is that, in this case, there is no difference whatsoever between an executable kill and a builtin kill: they do the same thing. The fact that 70 processes are already running does not keep the kill command from running, thanks to fair scheduling.
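      The only difference between the two forms is whether a new process must be created first. A hedged sketch follows (the names builtin_kill and external_kill are mine, and /bin/kill is an assumed path): the builtin form is a direct kill(2) call from the shell's own process, while the external form needs fork() plus exec, and that fork() is the step that fails when no more processes can be created.

      #include <stdio.h>
      #include <stdlib.h>
      #include <signal.h>
      #include <unistd.h>
      #include <sys/wait.h>

      /* Builtin-style kill: no new process; just issue the system call directly. */
      static int builtin_kill(pid_t target)
      {
          return kill(target, SIGTERM);
      }

      /* External-style kill: must fork a child and exec the kill binary.
         The fork() here is the step that fails if no more processes can be created. */
      static int external_kill(pid_t target)
      {
          char pidstr[32];
          int status;
          snprintf(pidstr, sizeof pidstr, "%d", (int)target);
          pid_t pid = fork();
          if (pid < 0) return -1;               /* could not create the helper process */
          if (pid == 0) {
              execl("/bin/kill", "kill", pidstr, (char *)NULL);
              _exit(127);
          }
          waitpid(pid, &status, 0);
          return status;
      }

      int main(int argc, char **argv)
      {
          if (argc < 2) return 1;
          pid_t target = (pid_t)atoi(argv[1]);
          int rc = (argc > 2) ? external_kill(target) : builtin_kill(target);
          if (rc != 0) perror("kill");
          return 0;
      }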

    4. (10 points) Exactly what happens when one runs:
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/resource.h>

      int main(void)
      {
          struct rlimit res;
          res.rlim_cur = res.rlim_max = 70;
          setrlimit(RLIMIT_NPROC, &res);
          while (1) if (!fork()) exit(0);   /* each child exits at once; nobody reaps it */
      }
      Compare this behavior with that of the prior example.
      Answer: Like the previous example, this drives the user's process count up to the limit of 70. Unlike the previous example, all descendants of this process are zombies, because every child exits immediately while its parent stays alive and never reaps it. Thus the process table fills with 70 entries for this user, and the ones created by this fork are all zombies, which count toward the 70-process limit until they are reaped. Again, the parent does not die; it keeps trying to fork and ignores the errors. In this case, killing the parent cleans up the whole mess: the zombies are then reaped by init.
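      For contrast, here is a hedged sketch of a variant (mine, not the exam's) in which the parent reaps its dead children with waitpid(..., WNOHANG) before each fork, so zombies never accumulate; this is essentially the cleanup that init performs for the orphans once the parent is killed.

      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/wait.h>
      #include <sys/resource.h>

      int main(void)
      {
          struct rlimit res;
          res.rlim_cur = res.rlim_max = 70;
          setrlimit(RLIMIT_NPROC, &res);

          while (1) {
              /* Reap any children that have already exited so they do not
                 linger as zombies counting against the process limit. */
              while (waitpid(-1, NULL, WNOHANG) > 0)
                  ;
              if (!fork())
                  exit(0);          /* each child still exits immediately, as before */
          }
      }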
  3. In some modern experimental operating systems, deadlock is avoided by preventing schedules of execution that cause deadlock from happening. When a deadlock is detected, the schedule of locking that caused the deadlock is recorded, and that specific schedule of locking is prevented in the future by constraining the order of locks in one or more processes so that this particular schedule never again occurs.

    Suppose that the following is executed in process X:

    lock(C); lock(A); lock(B);
    do_something();
    unlock(A); unlock(B); unlock(C); 
    and that the following is executed in another process Y:
    lock(B); lock(A); lock(C); 
    do_something_else(); 
    unlock(C); unlock(B); unlock(A); 
    1. (10 points) Give a schedule for these two processes that deadlocks.
      Answer:
      X                Y
      lock(C)
                       lock(B)
      lock(A)
                       lock(A)   <- blocks: X holds A
      lock(B)   <- blocks: Y holds B
      At this point X waits for B and Y waits for A, so neither can proceed: deadlock.
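      This interleaving can be reproduced with threads. A hedged pthreads sketch follows (my own; the sleep() calls are just a crude way to force the schedule, and the program must be compiled with -pthread). Run it and both threads hang exactly as in the table.

      #include <stdio.h>
      #include <unistd.h>
      #include <pthread.h>

      pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_t C = PTHREAD_MUTEX_INITIALIZER;

      void *X(void *arg)
      {
          pthread_mutex_lock(&C);
          pthread_mutex_lock(&A);     /* X now holds C and A */
          sleep(2);                   /* let Y take B and then block on A */
          pthread_mutex_lock(&B);     /* blocks forever: Y holds B */
          printf("X never gets here\n");
          return arg;
      }

      void *Y(void *arg)
      {
          pthread_mutex_lock(&B);     /* Y holds B */
          sleep(1);
          pthread_mutex_lock(&A);     /* blocks forever: X holds A */
          pthread_mutex_lock(&C);
          printf("Y never gets here\n");
          return arg;
      }

      int main(void)
      {
          pthread_t tx, ty;
          pthread_create(&tx, NULL, X, NULL);
          pthread_create(&ty, NULL, Y, NULL);
          pthread_join(tx, NULL);     /* never returns: the threads are deadlocked */
          pthread_join(ty, NULL);
          return 0;
      }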
    2. (10 points) Demonstrate how to prevent this schedule from occurring via lock ordering limits of the form "locking Q in X must precede locking R in Y".
      Answer: As a really silly solution, "locking B in X must precede locking B in Y". This completely serializes the processes.

      Many people got onto the wrong track in this problem. In this algorithm one is not allowed to reorder the lock statements themselves; the key is to serialize the processes' use of the locks so that they cannot race into a deadlock. People thought either that they could reorder the lock requests, or that any statement of the form "locking A in X must precede locking A in Y" prevents the deadlock by itself. It does not! Many people wrote things like

      locking A in X must precede locking A in Y
      locking B in X must precede locking B in Y
      locking C in X must precede locking C in Y
      which is not correct as an answer: only one of those three constraints -- the B constraint given above -- works properly, and it is sufficient by itself. (A sketch of one way to enforce it appears below.)
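      One way the working constraint could be enforced, sketched with a POSIX semaphore of my own choosing (the exam does not prescribe a mechanism): Y is simply not allowed to issue its lock(B) until X's lock(B) has already happened, which serializes the two processes as described above. Compile with -pthread.

      #include <stdio.h>
      #include <pthread.h>
      #include <semaphore.h>

      pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_t C = PTHREAD_MUTEX_INITIALIZER;
      sem_t X_locked_B;               /* signalled once X has acquired B */

      void *X(void *arg)
      {
          pthread_mutex_lock(&C);
          pthread_mutex_lock(&A);
          pthread_mutex_lock(&B);
          sem_post(&X_locked_B);      /* the constrained event has now happened */
          /* do_something(); */
          pthread_mutex_unlock(&A);
          pthread_mutex_unlock(&B);
          pthread_mutex_unlock(&C);
          return arg;
      }

      void *Y(void *arg)
      {
          sem_wait(&X_locked_B);      /* "locking B in X must precede locking B in Y" */
          pthread_mutex_lock(&B);
          pthread_mutex_lock(&A);
          pthread_mutex_lock(&C);
          /* do_something_else(); */
          pthread_mutex_unlock(&C);
          pthread_mutex_unlock(&B);
          pthread_mutex_unlock(&A);
          return arg;
      }

      int main(void)
      {
          pthread_t tx, ty;
          sem_init(&X_locked_B, 0, 0);
          pthread_create(&tx, NULL, X, NULL);
          pthread_create(&ty, NULL, Y, NULL);
          pthread_join(tx, NULL);
          pthread_join(ty, NULL);
          printf("both completed: no deadlock\n");
          return 0;
      }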
    3. (10 points) Why is a deadlock-free set of limits always possible to define using this scheme?
      Answer: One can always completely serialize the programs by -- for example -- making the last lock of X a prerequisite for the first lock of Y. This provides poor but deadlock-free performance. The trick -- of course -- is to be more clever than that and retain as much performance as possible without allowing deadlock.

      Many people said that one can accomplish this by always ordering locks or by lock priority. Unfortunately, that had little to do with the question! The question was whether one can avoid deadlock by ordering locks between processes, not whether one can order locks in individual processes.

    4. (10 points EXTRA CREDIT) What record-keeping in the OS is required in order to implement this deadlock prevention scheme?
      Answer: The key is that one must remember where each lock statement is located in the code, as well as the resource it locks. The address of the lock invocation -- along with the identity of the resource -- uniquely identifies the lock. Thus, a program can lock a resource more than once and this method will distinguish between the lock requests. No one in the class realized this, though, so I did not deduct points for not seeing this.

      It is also very important to record this information for the program and not for the process. Schemes that record prior deadlocks per process do not achieve the desired result: each time the program is run it produces a new process, and that process must be constrained as well. Very few people realized that the records should be kept for the program (a file on disk) rather than for the process (a memory image).

  4. (10 points) It is considered poor programming practice to write a single-threaded program that dups both the input and the output of an arbitrary child process. Why?
    Answer: While it is possible to write a single-threaded process that both reads from and writes to the same child process, doing so requires complicated I/O handling such as select() and/or non-blocking reads. Writing two threads -- one for reading and one for writing -- makes for cleaner and more understandable code, and prevents several kinds of deadlock that can occur when two processes interact with one another asynchronously.

    Several people claimed that dup is only used for redirecting input to files; we have studied many examples to the contrary using pipes. Some people thought that using pipes was different from using dup; they are part of the same scheme.
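    To make the hazard concrete, here is a hedged sketch of the two-pipe dup arrangement the question describes (my own example, using rev as a stand-in child; any filter would do). The comment marks the single-threaded write-everything-then-read-everything pattern that deadlocks once either pipe fills.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int to_child[2], from_child[2];
        char buf[256];
        ssize_t n;

        pipe(to_child);
        pipe(from_child);

        if (fork() == 0) {                       /* child: a simple filter */
            dup2(to_child[0], 0);                /* child's stdin  comes from the parent */
            dup2(from_child[1], 1);              /* child's stdout goes back to the parent */
            close(to_child[0]);   close(to_child[1]);
            close(from_child[0]); close(from_child[1]);
            execlp("rev", "rev", (char *)NULL);  /* any line-oriented filter will do */
            _exit(127);
        }

        close(to_child[0]);
        close(from_child[1]);

        /* Single-threaded pattern: write everything, then read everything.
           If the data written exceeds the pipe capacity, the parent blocks in
           write() while the child blocks writing its own output: deadlock.
           A separate reader thread, select(), or non-blocking I/O avoids this. */
        write(to_child[1], "hello\n", 6);
        close(to_child[1]);

        while ((n = read(from_child[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(from_child[0]);
        wait(NULL);
        return 0;
    }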