Name: ________________________________ Login: _____________
Comp111 Final Exam
Dec 19, 2012 - open books and notes
No electronic devices of any kind are allowed, including cellphones,
calculators, pagers, etc. Please turn these off before beginning.
Please show all work on these pages. Feel free to use the back of
each page as needed. Make sure that your work for each problem is
clearly marked, especially on the backs of pages. These pages will be
graded and returned electronically, like exercises, and the results
will be made available online.
A web service accepts requests at an average rate of 100 requests/second.
The response time is an average of 1/10 second per request.
- (10 points) How many requests are queued at a time, on average?
Please show your work.
Answer: According to Little's Law, L = λW = 100*(1/10) = 10 requests in queue.
Suppose that you leave everything else constant, and double the
speed of the CPU on the computer.
How does the answer to the first part of this problem change?
Is this answer an upper bound, a lower bound,
or an exact estimate of average queue size? Why?
Answer: As a naive estimate (whose inaccuracy will be explained below),
the processing time halves. If all else is held constant, this means that
it will take 1/20 of a second per request. Thus L = λ W = 100*(1/20) = 5
requests in queue. However, this estimate is not exact: requests can
still conflict with one another over shared resources, which lengthens
the true time in system. Because conflicts can only increase W beyond
the naive 1/20 second, the estimate is a lower bound on average queue size.
- Suppose that you start over from the first part of the problem, and
reject 1/2 of the requests through admission control. What is the average
response time? Is this an upper bound, a lower bound, or an exact estimate?
Answer: The effective arrival rate is 50 requests/second.
But we have only measured the time in system for double that rate,
so Little's law does not apply verbatim. Making the rash estimate
that the time in system halves for 1/2 the load, it is approximately
1/20 second. But this is an over-estimate, because there is less opportunity
for resource conflicts at the lower load.
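The estimates above can be sketched numerically. This is a plain restatement
of Little's Law (L = λW) for the first two parts, using exact fractions to
avoid floating-point noise; it adds nothing beyond the arithmetic already
shown:

```python
from fractions import Fraction

def queue_length(arrival_rate, time_in_system):
    """Little's Law: average number in system L = lambda * W."""
    return arrival_rate * time_in_system

# Part 1: lambda = 100 req/s, W = 1/10 s  ->  L = 10 requests
part1 = queue_length(100, Fraction(1, 10))

# Part 2 (naive, CPU twice as fast): W = 1/20 s  ->  L = 5 requests,
# a lower bound because resource conflicts are ignored
part2 = queue_length(100, Fraction(1, 20))

print(part1, part2)  # 10 5
```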
The most expensive disk drives now have built-in flash journals that
buffer data before writing it to the disk.
- (10 points) What shortcomings of writing the journal directly to disk does
a flash journal address?
Answer: There is no seek time involved in writing the flash journal, and no
rotational delay, so it is (on average) faster to write to the flash than
directly to the disk.
- (10 points) Does this flash require a virtualization layer similar to
those used to implement disk-based filesystems on flash?
Why or why not?
Answer: Because a journal is a sequential store, the flash is
automatically load-leveled by a round-robin writing strategy. No virtualization
layer is necessary.
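A toy simulation (not part of the exam; block and append counts are
arbitrary) illustrates why round-robin appends level wear on their own:

```python
def journal_wear(num_blocks, num_appends):
    """Append a sequential journal round-robin over flash blocks
    and count how many writes each block receives."""
    wear = [0] * num_blocks
    for i in range(num_appends):
        wear[i % num_blocks] += 1  # next block in circular order
    return wear

# 1000 sequential appends over 10 blocks: every block is written
# exactly 100 times, so no block wears out early.
print(journal_wear(10, 1000))  # [100, 100, ..., 100]
```

Because the journal only ever appends, no remapping table is needed to
achieve this; a general-purpose filesystem, with in-place updates, would
need one.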
- (10 points) Should the journal hash table be written to flash or disk? Why?
Answer: The most straightforward answer is that writing the hash table to
flash -- at sufficient frequency to maintain integrity -- increases wear on
the flash, and that the disk is a better choice. However, for persistence,
it is a good idea to alternate the blocks on disk to which the hash table
is written, so that a power failure will not corrupt the only accurate copy.
- Questions about operating system function:
(10 points) Why is memory for processes allocated in pages rather than in smaller units?
Answer: With smaller allocation units, the descriptors for allocated
storage would be far more numerous and would themselves take up too much
memory.
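A back-of-envelope calculation (memory and page sizes assumed, not from the
exam) makes the point concrete:

```python
MEMORY = 4 * 2**30   # 4 GiB of physical memory (assumed)
PAGE = 4 * 2**10     # 4 KiB pages (a common page size)

page_descriptors = MEMORY // PAGE   # one per page: about a million
byte_descriptors = MEMORY           # one per byte: about four billion

print(page_descriptors, byte_descriptors)  # 1048576 4294967296
```

Even a few bytes per descriptor would make byte-granularity bookkeeping
larger than the memory being managed.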
- (10 points)
In scheduling, what is the purpose of organizing schedules into
fixed-size "epochs"? Which kinds of scheduling need epochs, and
which kinds do not?
Answer: The "epoch" concept is only relevant to scheduling that is in some
way "fair": the epoch is the unit of time over which fairness is assured.
Schedulers that make no fairness guarantee have no need for epochs.
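As a sketch of the idea (the weights and epoch length are hypothetical, not
from the course), a proportional-share scheduler can divide each fixed-size
epoch among runnable processes according to their weights, so that fairness
holds within every epoch:

```python
def plan_epoch(weights, epoch_ms=100):
    """Give each process CPU time proportional to its weight
    within one fixed-size epoch."""
    total = sum(weights.values())
    return {pid: epoch_ms * w / total for pid, w in weights.items()}

# Two weight-1 processes and one weight-2 process in a 100 ms epoch:
print(plan_epoch({"A": 1, "B": 1, "C": 2}))
# {'A': 25.0, 'B': 25.0, 'C': 50.0}
```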
- (20 points) At the beginning of the course, I indicated that
operating systems try to utilize algorithms that run in constant time,
i.e., the runtime does not vary with problem size or number of
entities being managed. Give several examples of how running in
constant time influences operating system design, and then at least
one example where we violated the principle in order to achieve
better OS functionality and/or features.
Answer: Examples of constant time constraints include:
- The implementation of malloc: trades space for time, and implements
constant-time malloc() and free().
- The O(1) scheduler: keeps simple queues.
- The implementation of a pipe as a fixed-size ring buffer.
Enqueue and dequeue are constant-time operations.
- Hashing of sparse spaces.
- Caching of dense spaces.
- ... and many others ...
Examples in which we have violated that assumption include:
- The Banker's Algorithm: worst case is O(n) for one change but -- on
average -- takes constant time for one change.
- Completely Fair Scheduling: O(log n) for one change, where n is the
number of processes.
- Lock prioritization: potential for O(n) delay where n is the number of
locks to be acquired.
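The fixed-size ring buffer used to implement a pipe can be sketched in a few
lines (the class name and capacity are illustrative). Both operations update
a constant number of fields no matter how large the buffer is:

```python
class RingBuffer:
    """A pipe as a fixed-size ring buffer with O(1) enqueue/dequeue."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the next item to dequeue
        self.count = 0  # number of items currently buffered

    def enqueue(self, item):
        if self.count == self.capacity:
            raise BufferError("pipe full; writer would block")
        self.buf[(self.head + self.count) % self.capacity] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise BufferError("pipe empty; reader would block")
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item
```

In a real pipe, "would block" means the writer or reader sleeps until the
other side makes room or supplies data; the buffer itself never grows.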