Comp111 Final Exam Answers
Monday Dec 21, 2015 - open books and notes

Any and all printed and written materials are allowed, including books, course notes, and handwritten notes.

No electronic devices of any kind are allowed, including cellphones, calculators, pagers, laptops, etc. Please turn these off before beginning the exam.

Please show all work on these pages. Feel free to use the back of each page as needed. Make sure that your work for each problem is clearly marked, especially on the backs of pages. These pages will be graded and returned electronically, like exercises, and the results will be made available online.

Please note that point values have no correspondence with difficulty. We suggest that you solve the problems that you best understand first.

Please also note that the space I gave you to answer is the space I would need, and not necessarily the space you will need. There is no deduction for more verbose answers (though succinct and comprehensive answers are very much appreciated by your instructor!).

  1. What behaviors outside the disk paging system are dependent upon what the paging system does? Why?

    Answer: The paging system serves as a caching system for the filesystem driver, and caches all aspects of the filesystem, including the super block, bit arrays for allocated inodes and blocks, and the inodes and blocks themselves. Due to the principle of locality, needed blocks tend to stay in cache, so that the paging system dramatically improves the performance of the filesystem driver.

    Caveats and comments: The paging system has nothing to do with process swapping. It deals with disk files.

    Scoring: +10 if acceptable; +8 if on the proper track but incomplete; +6 if minor errors; +4 if major errors; +2 if any redeeming value.

  2. What are the disadvantages of building a filesystem without knowledge boundaries between the paging system and the filesystem driver?

    Answer: Both systems become more complex to write as a single unit. The paging system is a relatively simple tool that creates a cache of blocks. Embedding that function into the filesystem driver makes the whole system more complex to write. Maintaining knowledge boundaries makes the whole system a bit slower, but the code is easier to change and update.

    Another valid point is that the modularity of the two systems allows one to reuse the same pager with multiple filesystem drivers, etc. Putting the code into the filesystem driver means one must duplicate it for each kind of filesystem.

    Caveats and comments: Many people thought that the resulting system without knowledge boundaries would be slower or exhibit poorer performance. The exact opposite is true: the best justification for removing knowledge boundaries is that the resulting system can run faster and be better optimized, e.g., by utilizing definite knowledge of (rather than assumptions about) the contents of the actual page cache in allocating inodes and blocks. There is thus a tradeoff between modularity and performance.

    (I just read a Ph.D. thesis that does exactly that with knowledge boundaries in network drivers; the thesis is to remove some of the knowledge boundaries and make the network run faster as a result.)

    Other major misconceptions included that:

    Scoring: +10 if acceptable; +8 if minor errors; +5 if major errors (including claiming performance degradation or one of the other major misconceptions above!); +2 if any redeeming value.

  3. What bad things can happen if a heavily used service lacks admission control (long term scheduling)?

    Answer: The system can be overwhelmed with requests and performance can suffer -- catastrophically. Faced with too many requests -- or, equivalently, too much demand for existing resources like memory -- the system can thrash trying to service all requests simultaneously. Further, the system can be imbalanced, and thus service times can increase without bound, leading to inability to meet deadlines.

    Caveats and comments: Most people wrote something acceptable or equivalent to the above.

    Scoring: +10 if acceptable; +8 if minor omissions or misstatements; +5 if a valid but weak reason was given.

  4. A few years ago, I overheard an interesting conversation in the lounge, in which a student claimed to have hacked Provide and was able to change grades. At the time, the protections for all directories in Provide were group grade111, drwxrws--x and the actual grading files, two directories down, were owned by the student and in group grade111, with protection -rw-rw----.

    How was it possible to change grading files? What did the student need to know in order to accomplish this?

    (This has been fixed!)

    Answer: Since the directories are drwxrws--x, anyone can open a file in the directory if its name is known. The student had "--x" permission on the directories, which means that one can open a contained file by name, but lacking "r" permission, one cannot ls the directory to determine file or directory names. The files themselves have protection -rw-rw---- and are owned by the student, and the student has access to all directories (as other="--x"), so if the student knows the complete path to an existing grading results file, it can be opened and changed!

    Currently the protections on directories are drwxrws---, which prevents this. Even though the grading files are owned by the student, the student has no ability to get to them because intervening directory protections prevent this. However, the grading assistants -- with group "grade111" privilege -- can get to them.

    Most of the code for provide is a matter of cleverly manipulating the linux protection system.

    Caveats and comments: The "s" in the directory protection word indicates group inheritance, which simply enforces the idea that all contained files and directories are in group "grade111". The student in question was not a member of that group, so that the fact that group membership is inherited is not relevant to the problem. It is also not a typo.

    Contrary to what some people seemed to believe, the only thing the student could do was change pre-existing files. The student could not delete a grading file or add one, because the directory permission other="--x" does not give sufficient permission: the student would need permission "-w-" on a directory to create or remove a file in it.

    The student was quite bright to have figured this out, but not so bright to be discussing this in front of the author of provide! I did not bring the student up on disciplinary charges, because the student didn't actually change grades, and knowing how to do so is not a disciplinary offense, any more than knowing how to rob someone would make one guilty of robbery!

    Scoring: +10 if acceptable; +8 if minor omissions; +5 if major errors.

  5. Consider the code:
    main() { printf("hi"); fork(); write(fileno(stdout), "ho", 2); } 
    What does this print on stdout? Why?

    Answer: The first "hi" is buffered before the fork and remains buffered after the fork in both copies. The second "ho" is unbuffered and written immediately in both processes; this does not flush the stdout buffer! Then the two processes exit and the "hi"s are flushed from the buffers. Thus each side of the fork writes a sum total of "hohi" to stdout, where the individual writes of "ho" and "hi" are atomic but the pair "hohi" is not. Thus, depending upon scheduling, the result can be "hohihohi" or "hohohihi".

    Caveats and comments: Only one person got this completely correct! Many people forgot that fork() duplicates the parent's stdio buffer (so the buffered "hi" exists in both processes) and that write() bypasses stdio buffering entirely.

    Scoring: +10 if acceptable; +8 if didn't realize there are two outcomes; +5 if misunderstood one of buffering or fork behavior (resulting in "hihohiho", "hohi", or "hihoho"); +2 if a less sensible answer ("hiho" or similar).

  6. There are actually many forms of operating system virtualization. One of the newer forms that is quite useful for software development is the "Docker" system for application virtualization. A linux user can run several autonomous copies of the same linux OS in "Docker containers" that are owned by that user and share many of their resources with the linux "Host OS" that runs the containers as ordinary linux processes. Thus, one can easily and efficiently simulate a multi-host linux system on a single host.

    (I am a very enthusiastic user of this tool.)

    Each container can use a filesystem that is actually a directory owned by the user on the Host OS. How can a Container OS use a directory on the disk of the host OS as its filesystem? What modifications to the Container OS are necessary compared to the Host OS? Hint: the Host OS is not modified; this is one of the reasons Docker is so useful!

    Answer: The filesystem driver in the docker container does not contact the raw disk driver, but instead, contacts the filesystem driver of the host OS and treats the chosen directory as its '/'. Since it is the filesystem driver that defines what '/' means, this is trivial. The guest filesystem driver also maps "root" privilege in the container to regular user privilege in the host OS. Thus, the directory is actually owned by the user but treated in the guest as if it is owned by 'root', as is required of a linux instance.

    This is the only reasonable way to do this; anything else is much more complex.

    FYI: What Docker does is actually considerably more clever than this; the guest filesystem driver utilizes parts of the regular linux filesystem and overlays them on the user directory to save space. Thus, whatever it needs from the regular filesystem is borrowed from the host OS. Only the changes from the Host OS are reflected in the user directory. But this is the job of the guest filesystem driver and is nothing more than an enhancement of the basic strategy.

    Caveats and comments: The filesystem driver in the container -- running inside a regular user process in an unmodified OS -- simply cannot contact the raw disk driver at all. It does not have the privilege.

    Many people said approximately what the container should do, but stopped short of the real modification -- of the filesystem driver -- that is needed in order to do it. This was not enough detail.

    Several other misconceptions I encountered:

    Scoring: +10 if acceptable; +8 if would work but less practical than guest driver modification; +5 if minor misconceptions or omissions; +2 if major misconceptions.

  7. A web application receives requests from the outside world, requests and reads answers from a database, and sends results back out to requesters.
    1. Suppose that the web application is in balance. If the web application receives an average of 100 requests/minute, and takes an average of 2 seconds to answer each request, what is the average number of requests in system?

      Answer:

      • W = average of 2 seconds;
      • λ=(100 requests/minute) * (1 minute/60 seconds) = average of 100/60 requests/second;
      • L = λ W = (100/60 requests/second) * (2 seconds) = average of 10/3 requests in system.

      Caveats and comments: The most common mistake was to fail to convert units to be commensurate. One cannot utilize L = λ W unless λ is in units of form arrivals/X and W is in units of X, where X is a time unit: seconds, minutes, hours, days, etc. It doesn't matter how one makes units commensurate. In this particular case, above, I had W in seconds and thus chose to express λ in arrivals/second. Had I expressed both in arrivals/minute and minutes, respectively, the answer would have been exactly the same.

      I think that the easiest way to remember Little's law units is the following mnemonic form:

      (jobs in system) = (arrivals / second) * (seconds in system).

      For some reason I cannot understand, a small number of people thought the system was an M/M/1 queueing system. It is not M/M/1 because there are no statistical distribution assumptions given.

      Scoring: +10 if acceptable; +5 if units were not converted or if there was some other major misunderstanding.

    2. Supposing the same balance situation as the above, and that a database request takes an average of 1/2 second, how many database requests are pending at a time, on average? Why? Hint: you might want to draw a picture of the chain of processing, including receipt of query, database query, and post-processing after the query. Use this picture to reason about the answer.

      Answer: Consider the system as a chain of actions
        λ                   λ                     λ                    λ
      ----- preprocessing ----- database lookup ----- postprocessing -----
    Since the system is in balance, the arrival rates are all the same. Thus one can draw a box around the database lookup alone and apply Little's law to that box: L = λ W = (100/60 requests/second) * (1/2 second) = 5/6 of a request pending, on average. Obviously, the requests in system for a subcomponent must be less than or equal to the total requests in system for the component.

    Caveats and comments: Since the total time in system is 2 seconds, the time for the database request cannot exceed 2 seconds in any case, but the requests in system can indeed be different; the only thing that must remain the same is the arrival rate and exit rate to/from each box in the system. By Little's law, the average requests in system for any subsystem are going to be less than the average total requests in system.

    Scoring: +10 if acceptable; +7 if obtained the answer and then went on to make errors; +5 if units were not converted (see answer to previous problem for detailed discussion of units).

  8. Many web services run on linux instances (e.g., in Docker!) with a different process scheduler than the default. How is the scheduler different, and why? Hint: there are no regular users who log into the system; just the web services.

    Answer: Web service requests have no interactive component so that the concept of fairness is irrelevant. One needs to maximize throughput. Hence one needs to minimize context switches. Thus the scheduler can be a pure batch scheduler that does not interrupt processing of a request unless it blocks for some reason. The result maximizes throughput.

    Caveats and comments: While it is true that a web server is I/O bound, it is not user I/O; it is disk I/O! Fairness only applies to user I/O.

    Scoring: +10 for batch, run-until-blocked; +8 for any answer close to batch but not quite mentioning it by name; +5 for round-robin, O(1), CFS, and other approximately "fair" schedulers; +2 for more outlandish answers, e.g., long-term scheduling or real-time scheduling.

  9. I had a programming problem last Friday (for real!) concerning the following code:
    void record_request(char *request) { 
        if (! request_added(request)) { 
            add_request(request);   /* helper name reconstructed; the original adds the request here */
        } 
    } 
    When executed in multiple threads, often duplicate requests were created even though the code obviously seems to "prevent" that. Exactly what went wrong, and how would you fix this?

    (This was inside a user library, and not in my code!:)

    Answer: This is a classic check/write race, in which the following schedule creates the problem:
    First thread                        Second thread
    ------------                        -------------
    if (! request_added(request))
                                        if (! request_added(request))
    (both checks find the request absent, so both threads then add it, creating the duplicate)
    The fix is to surround the whole block with a mutex lock:

    void record_request(char *request) { 
        pthread_mutex_lock(& request_lock); 
        if (! request_added(request)) { 
            add_request(request);   /* helper name reconstructed; adds the request */
        } 
        pthread_mutex_unlock(& request_lock); 
    } 

    Caveats and comments: More or less everyone figured this out, which is gratifying.

    It is very amusing to me that one student actually identified the library method in question, which is the get_or_create method in the Django python framework! The student had the same problem! To avoid changing the library method, I actually surrounded the call to the method with a lock, as follows:

    with transaction.atomic():
        myNewRecord = MyClass.objects.get_or_create(...record contents...) 
    This is the Django way to lock a critical section, and is equivalent to surrounding the call with a mutex lock (but has other features I didn't use). For more details on transaction.atomic() and its relatives and many uses, stay tuned for COMP115: Databases.

    Scoring: +10 if acceptable; +8 if vague answer; +5 if remedy doesn't work.

  10. (10 points EXTRA CREDIT) There are two different concepts in database systems: atomicity and serialization. Atomicity is just what we learned about here; requests are atomic with respect to one another. Serialization refers to the fact that when multiple requests arrive, they are never done in parallel; they are always done one by one. Are these equivalent or different concepts, and why?

    Answer: Serialization is a mechanism for making otherwise non-atomic requests behave somewhat like they're atomic. This is what surrounding critical sections with the same mutex lock and unlock actually does.

    Serialized operations need not be atomic; they are atomic only if:

    1. The serialized operations are not interruptible by signals (due to signal masking) and
    2. Every possible operation that can affect the structure in question is serialized with respect to the same lock.

    Likewise, atomic operations need not be serialized (though they are always serializable at some cost). Consider, e.g., writes to different files. These can occur in parallel, because there is no conflict between them. The writes are still atomic, but serialization of the non-conflicting writes would impede parallel execution without adding anything valuable.

    Caveats and comments: In Database theory, serialization and atomicity are equivalent, because of an extenuating assumption: isolation. Isolation means that the system is free from effects other than those arising from requests. Thus, the above conditions are satisfied. In operating systems -- without the isolation assumption -- serialization and atomicity remain slightly different concepts.

    I accepted the statement that atomic implies serialized, though the correct statement is that atomic implies serializable; actually serializing is not always advantageous from a performance standpoint, for the reasons discussed above.

    Scoring: +10 if acceptable; +5 if finer distinctions are not made.