Comp111: Operating Systems
Classroom Exercise 13 Answers
Scheduling and Little's laws
Fall 2017
In class we have studied the basic model of scheduling as well
as "Little's laws" that govern all scheduling. Recall Little's law,
which is that for a system in steady state, L = λ W, where
 L is the average number of "requests in system" at a time.
 λ is the average arrival (and exit) rate.
 W is the average total time in system for a thing (entrance to exit).
 There are 10 processes in the run queue of a processor, all at equal priority, we apply round-robin scheduling, and the scheduling slice is 100ms.
 How long does it take for a process to obtain a slice in which to run?
Answer: Using Little's law: the scheduling slice is 100ms, so slices arrive and exit at a rate of λ = 10 slices per second. Meanwhile, L = 10, so that W = L / λ = 10 / 10 seconds = 1 second.
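The arithmetic above can be checked directly (a sketch using the 10 processes and 100ms slice from the problem statement):

```python
# Little's law check for the round-robin example:
# 10 processes, 100 ms slice => lambda = 10 slices/second, L = 10.
L = 10             # processes in the run queue
slice_s = 0.100    # scheduling slice in seconds
lam = 1 / slice_s  # slices completed per second (arrival rate = exit rate)
W = L / lam        # average time for a process to cycle back to the CPU
print(W)           # 1.0 second
```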
 Suppose that all 10 processes are compute-bound. Is the round-robin scheduler fair? Why or why not?
Answer: If all are compute-bound, then no process ever blocks for I/O, and thus every process gets its whole slice. This is a fair situation.
 Now suppose that 9 of 10 processes are compute-bound and one process (emacs) is I/O-bound and receives one character per second of user typing, which it processes for 10ms before blocking again to wait for the next. Is this unfair? Is this an undesirable situation for the user of the I/O-bound process?
Answer: Obviously, amongst the 9 processes that are consuming their whole slices, there is fairness. The tenth process consistently consumes only 10ms of its 100ms slice = 10%, which is highly unfair. However, the unfairness doesn't matter, because the tenth process doesn't actually need the cycles it is not getting.
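A back-of-the-envelope check that emacs still gets what it needs (a sketch; it assumes emacs runs at most its 10ms burst per trip around the queue while the other nine consume full slices):

```python
# 9 compute-bound processes each use a full 100 ms slice;
# emacs runs 10 ms and then blocks on I/O.
compute_slice = 0.100
emacs_burst = 0.010
epoch = 9 * compute_slice + emacs_burst  # seconds per trip around the queue
emacs_share = emacs_burst / epoch        # fraction of CPU emacs receives
needed_share = emacs_burst / 1.0         # emacs needs 10 ms per second of typing
print(emacs_share >= needed_share)       # emacs gets at least the cycles it needs
```

So even though emacs uses only a tenth of each slice, round-robin still delivers more CPU than its workload requires.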

In a cloud computing scenario, you have a web server that is in balance, in the sense that the arrival rate for requests equals the average exit (completion) rate for requests.
 If an average of 10 requests arrive at the server per second and
there are an average of 20 requests pending in the server, what is the average time
to service one request?
Answer: L = 20, λ = 10, W = L / λ = 20 / 10 seconds = 2 seconds.
 How much time does each request spend waiting in the queue before it is serviced?
Answer: We put a box around the queue in the queue diagram, as done in lecture. For this box, L = 19 (one of the 20 requests is in service rather than in the queue), λ = 10, and W = L / λ = 19/10 seconds = 1.9 seconds.
 What is the actual processing time used per request?
Answer: By the same argument, putting a box around the server alone gives L = 1 and λ = 10, so the processing time is W = 1/10 second. (Equivalently, 2 seconds total minus 1.9 seconds in queue.)
 Based upon the answers above, if we want the average total response time to be at most 5 seconds, how many requests can we accept per second?
Answer: This is a bit tricky. We have computed that the actual
processing time is 1/10 second, so we can also conclude that the total response time
for a request is W = L * 1/10 second = L/10 seconds. Thus L/10 = 5, or L = 50.
Now we have to figure out the rate λ matching this L.
We know W = 5 and L = 50, so λ = L / W = 50 / 5 = 10 requests per second.
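The capacity computation can be written out step by step (a sketch; the 1/10-second service time is carried over from the previous answer):

```python
# How many requests per second can we accept if average response
# time must stay at or below 5 seconds?
service_time = 0.1  # seconds of processing per request
W_max = 5.0         # target maximum average response time
# Response time is (requests in system) * service_time, so:
L_max = W_max / service_time  # 50 requests in system at most
lam_max = L_max / W_max       # Little's law: lambda = L / W
print(lam_max)                # 10 requests per second
```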
 The load average is the average number of processes in the run queue. Suppose we have a load average of 20 on a single core processor, we utilize round-robin scheduling, and the slice for each process is 100ms. How much computation does each process do per second?
Answer: L = 20, and each slice is 100ms, so one round-robin epoch (one full cycle through the run queue) takes W = 20 * 100 ms = 2 seconds. Each process gets W / L = 2/20 = 1/10 second of CPU per epoch. Thus each process gets 1/10 second per epoch / 2 seconds per epoch = 1/20 second of computation per second.
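The per-process computation rate follows the same pattern (a sketch using the load average and slice from the question):

```python
# Load average 20, round-robin, 100 ms slice per process.
load_avg = 20    # processes in the run queue
slice_s = 0.100  # slice each process gets per epoch
epoch = load_avg * slice_s      # 2.0 seconds to cycle through the queue
per_second = slice_s / epoch    # CPU time per wall-clock second
print(per_second)               # 0.05 = 1/20 second per second
```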