### Comp111: Operating Systems Classroom Exercise 16 Scheduling and Little's laws Fall 2018

#### Group members

- group member 1: ____________________________ login: ______________
- group member 2: ____________________________ login: ______________
- group member 3: ____________________________ login: ______________
- group member 4: ____________________________ login: ______________
- group member 5: ____________________________ login: ______________
- group member 6: ____________________________ login: ______________
- group member 7: ____________________________ login: ______________
- group member 8: ____________________________ login: ______________

In class we have studied the basic model of scheduling as well as "Little's laws" that govern all scheduling. Recall Little's law, which is that for a system in steady state, L = λ W, where

• L is the average number of "requests in system" at a time.
• λ is the average arrival (and exit) rate.
• W is the average total time-in-system for a request (entrance-to-exit).
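As a quick numeric illustration of the law (the arrival rate and time-in-system below are made-up numbers, not drawn from the questions):

```python
# Little's law: L = lambda * W, for a system in steady state.
# Illustrative numbers only (assumptions, not from the worksheet):
arrival_rate = 10.0    # lambda: requests per second
time_in_system = 2.0   # W: average seconds per request, entrance to exit

avg_in_system = arrival_rate * time_in_system  # L: average requests in system
print(avg_in_system)  # 20.0
```

Any two of the three quantities determine the third, which is what makes the law useful for the exercises below.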
1. Suppose there are 10 processes in the run queue of a processor, all at equal priority, we apply round-robin scheduling, and the scheduling slice is 100ms.
1. How long does it take for a process to obtain a slice in which to run?

2. Suppose that all 10 processes are compute-bound. Is the round-robin scheduler fair? Why or why not?

3. Now suppose that 9 of the 10 processes are compute-bound and one process (emacs) is I/O bound: it receives one character of user typing per second, processes it for 10ms, and then blocks again waiting for the next. Is this unfair? Is this an undesirable situation for the user of the I/O bound process?
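The timing in question 1 can be sanity-checked with a minimal sketch of a round-robin cycle, assuming no process blocks (the all-compute-bound case):

```python
# Round-robin sketch: 10 always-ready processes, 100 ms slice each.
# Between two of its own slices, a process waits while the other 9 run.
num_processes = 10
slice_ms = 100

wait_between_slices_ms = (num_processes - 1) * slice_ms  # waiting time per cycle
cycle_ms = num_processes * slice_ms                      # start-to-start period
print(wait_between_slices_ms, cycle_ms)  # 900 1000
```

When a process blocks (as the I/O-bound emacs does), it leaves the run queue, so the cycle seen by the remaining processes shortens; the arithmetic above is the worst case.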
2. In a cloud computing scenario, you have a web server that is in balance, in the sense that the arrival rate for requests is equal to the average exit (completion) rate for requests.
1. If an average of 10 requests arrive at the server per second and there are an average of 20 requests pending in the server, what is the average time to service one request?

2. How much time does each request spend waiting in the queue before it is serviced?

3. What is the actual processing time used per request?

4. Based upon the answers above, if we want the average total response time to be at most 5 seconds, how many requests can we accept per second?
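Calculations of this kind can be scripted directly from Little's law. The helpers below are a sketch; the numbers passed in are illustrative assumptions, not the worksheet's figures:

```python
# Little's law helpers for a server in balance (arrival rate = completion rate).

def time_in_system(avg_in_system, arrival_rate):
    """W = L / lambda: average total time-in-system per request (seconds)."""
    return avg_in_system / arrival_rate

def max_arrival_rate(avg_in_system, target_time):
    """lambda = L / W: largest arrival rate keeping time-in-system <= target."""
    return avg_in_system / target_time

# Illustrative numbers (assumptions): 15 requests pending, 5 arrivals/second.
print(time_in_system(15, 5.0))    # 3.0 seconds per request
print(max_arrival_rate(15, 5.0))  # 3.0 requests per second for a 5 s target
```

Note that total time-in-system includes both queueing time and actual processing time, which is why questions 2 and 3 ask for them separately.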

3. The load average is the average number of processes in the run queue. Suppose we have a load average of 20 on a single core processor, we utilize round-robin scheduling, and the slice for each process is 100ms. How much computation does each process do per second?
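One way to sanity-check an answer to question 3 is to compute the share of each wall-clock second a process receives under round-robin, assuming all 20 processes stay runnable:

```python
# Round-robin CPU share sketch: load average 20, 100 ms slices,
# single core, all processes always runnable (an assumption).
load_avg = 20
slice_ms = 100

cycle_ms = load_avg * slice_ms                   # 2000 ms between a process's slices
cpu_ms_per_second = slice_ms * 1000 / cycle_ms   # CPU ms per wall-clock second
print(cpu_ms_per_second)  # 50.0
```

Equivalently, each process gets a 1/20 share of the core, i.e. 50 ms of computation per second.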