Comp150CPA: Clouds and Power-Aware Computing
Classroom Exercise 7
Load and response time
Spring 2011

group member 1: ____________________________ login: ______________

group member 2: ____________________________ login: ______________

group member 3: ____________________________ login: ______________

group member 4: ____________________________ login: ______________

group member 5: ____________________________ login: ______________

In class we have studied the linkage between request service time and the concept of load average. Let's explore that issue in more detail.

  1. Suppose that in 10 time intervals, there are two processes running. Process 1 computes during intervals 3 and 4, while process 2 starts at interval 4 and computes during intervals 5, 6, and 7. What is the load average over 10 intervals?
    Answer: There are a total of 5 cycles spent computing and 1 cycle spent ready, for 6 busy cycles over 10 intervals.
         1   2   3   4   5   6   7   8   9   10
    P1 +---+---+CCC+CCC+---+---+---+---+---+---+
    P2 +---+---+---+RRR+CCC+CCC+CCC+---+---+---+
    so the total load average is 6/10 = 0.6.
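    The arithmetic above can be checked with a short script (a sketch: each process's schedule is encoded as one state string per process, using 'C', 'R', and '-' exactly as in the diagram):

```python
# Load average = (computing + ready process-intervals) / number of intervals.
# Each string is one process's state per interval, matching the diagram:
# 'C' = computing, 'R' = ready, '-' = waiting.
def load_average(schedule):
    intervals = len(schedule[0])
    busy = sum(row.count("C") + row.count("R") for row in schedule)
    return busy / intervals

schedule = [
    "--CC------",   # P1 computes intervals 3 and 4
    "---RCCC---",   # P2 is ready at 4, computes 5-7
]
print(load_average(schedule))  # -> 0.6
```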
  2. Suppose you have 10 time intervals on a single-core machine wherein three processes are running. Assume that during any time interval, a single process is either computing, ready to run, or waiting for something else. Give a time-space diagram with three processes on the Y axis and 10 time intervals on the X axis that depicts a situation with a load average of 1.5 over the 10 time intervals.
    Answer: We want the total number of computing or ready cycles to be 15, out of a total of 30, so anything like:
         1   2   3   4   5   6   7   8   9   10
    P1 +CCC+CCC+CCC+CCC+CCC+---+---+---+---+---+
    P2 +---+---+RRR+RRR+RRR+CCC+---+---+---+---+
    P3 +---+---+---+---+RRR+RRR+CCC+CCC+CCC+CCC+
    will suffice, where C=Computing, R=Ready, -=Waiting. Note that Ready must be followed by Computing; there is no transition from Ready back to Waiting without Computing first. Note also that only one process can be "Computing" at a time.
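    One way to verify that a candidate diagram obeys the rules is a quick checker (a sketch; the schedule below is one valid choice out of many, not the only answer):

```python
# Check a candidate single-core schedule: at most one process computing per
# interval, Ready never falls back to Waiting, and report the load average.
# 'C' = computing, 'R' = ready, '-' = waiting.
def check(schedule):
    intervals = len(schedule[0])
    for t in range(intervals):
        assert sum(row[t] == "C" for row in schedule) <= 1, "one core only"
    for row in schedule:
        for a, b in zip(row, row[1:]):
            assert not (a == "R" and b == "-"), "Ready must lead to Computing"
    busy = sum(row.count("C") + row.count("R") for row in schedule)
    return busy / intervals

# One schedule with 15 busy cycles over 10 intervals (an illustrative choice):
schedule = [
    "CCCCC-----",   # P1
    "--RRRC----",   # P2
    "----RRCCCC",   # P3
]
print(check(schedule))  # -> 1.5
```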
  3. Depict your schedule from problem 2 again, but this time, simulate what would happen if you replaced your 1-core machine with a 2-core machine; in other words, two processes can be "computing" at a time. What is the new load average?
    Answer: The key issue is that things that are runnable just run, which shifts computation to the left (back in time). Using the schedule from problem 2:
         1   2   3   4   5   6   7   8   9   10
    P1 +CCC+CCC+CCC+CCC+CCC+---+---+---+---+---+
    P2 +---+---+CCC+---+---+---+---+---+---+---+
    P3 +---+---+---+---+CCC+CCC+CCC+CCC+---+---+
    The new load average = 10/10 = 1.0. Note that it is not precisely 1/2 of the original load average because of the small sample.
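    The rescheduling can also be simulated. The sketch below assumes a workload consistent with the answers here (the arrival intervals and cycles of work per process are illustrative) and a simple earliest-listed-first scheduler:

```python
# Minimal scheduler simulation: at each interval, every arrived process with
# work remaining is "busy" (ready or computing), and up to `cores` of them
# actually compute.  Earlier-listed processes get priority (a simplification).
def simulate(procs, cores, intervals=10):
    work = {p: w for p, (start, w) in procs.items()}
    busy = 0
    for t in range(1, intervals + 1):
        runnable = [p for p, (start, w) in procs.items()
                    if start <= t and work[p] > 0]
        busy += len(runnable)
        for p in runnable[:cores]:
            work[p] -= 1
    return busy / intervals

# (arrival interval, cycles of work) -- illustrative numbers
procs = {"P1": (1, 5), "P2": (3, 1), "P3": (5, 4)}
print(simulate(procs, cores=1))  # -> 1.5 (the single-core load average)
print(simulate(procs, cores=2))  # -> 1.0 (the 2-core load average)
```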
  4. Depict your schedule from problem 2 again, but this time, depict what would happen running on a virtual server with 50% of the physical server (half the time, another operating system instance is running). What is the load average now?
    Answer: The start times and the times at which things become runnable stay the same, but the computing times stretch out: each cycle of computing now takes two cycles of real time. Using the schedule from problem 2:
         1   2   3   4   5   6   7   8   9   10
    P1 +CCC+CCC+CCC+CCC+CCC+CCC+CCC+CCC+CCC+CCC+
    P2 +---+---+RRR+RRR+RRR+RRR+RRR+RRR+RRR+RRR+
    P3 +---+---+---+---+RRR+RRR+RRR+RRR+RRR+RRR+
    In this case, the first process's computation gets stretched out to fill all 10 cycles, with the result that the rest of the original computation doesn't fit into the 10 cycles at all. The load average (from this instance's point of view only) = 24/10 = 2.4. Note that it is not precisely twice the original because of the small sample.
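    The same simulation idea extends to the 50% virtual server: each interval of CPU advances a computing process by only half a cycle of work (again with illustrative arrival times and work amounts):

```python
# As before, but computing advances work by `speed` cycles per interval;
# speed=0.5 models a virtual server holding 50% of the physical CPU.
# Earlier-listed processes get priority (a simplification).
def simulate(procs, cores=1, speed=1.0, intervals=10):
    work = {p: w for p, (start, w) in procs.items()}
    busy = 0
    for t in range(1, intervals + 1):
        runnable = [p for p, (start, w) in procs.items()
                    if start <= t and work[p] > 0]
        busy += len(runnable)
        for p in runnable[:cores]:
            work[p] -= speed
    return busy / intervals

procs = {"P1": (1, 5), "P2": (3, 1), "P3": (5, 4)}  # (arrival, work)
print(simulate(procs, speed=1.0))  # -> 1.5 on the full physical machine
print(simulate(procs, speed=0.5))  # -> 2.4 from this instance's point of view
```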
  5. (Advanced) In class we noted that load L is "roughly" proportional to service time W. The reason we did not claim exact proportionality is that operating systems do other things -- what we might call "overhead" -- besides responding to service requests. Adding overhead to the picture, what is the real relationship between L and W?
    Answer: Our "approximation" was that W = waiting time = L/λ, where λ is a constant of proportionality (in queueing terms, the rate at which requests arrive). But that equation assumed that the whole of any time interval T is spent performing service.

    For a specific interval of time T, split the total time as T = t_overhead + t_service, where t_overhead is the amount of time spent serving the operating system and t_service is the total time spent serving clients. Then t_service/T is the proportion of time spent working on providing service, while t_overhead/T is the proportion of time spent on operating system issues.

    Then note that this is exactly the same situation as having multiple virtual servers, counting the operating system itself as a "virtual service". The equation W = L/λ only applies to the amount of time spent on our service. During overhead time, all instances of our service have to wait, whether they were computing, ready, or idle. Overhead is "blind" to what we happen to be doing, so it increases all times equally.

    Thus the wait time W must be adjusted for overhead: only the fraction t_service/T of each interval advances our service, so every wait stretches by the factor T/t_service. On average, W = (L/λ)/(t_service/T) = (L/λ)·(T/(T − t_overhead)).
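    As a numeric sanity check (all numbers below are illustrative, not measurements): dividing the ideal wait L/λ by the fraction of time actually spent on service shows how overhead inflates W:

```python
# Overhead-adjusted wait time (illustrative numbers).
L = 2.0            # load average
lam = 4.0          # lambda: constant of proportionality (arrival rate)
T = 1.0            # length of the measurement interval
t_overhead = 0.2   # time the OS spends on overhead during T
t_service = T - t_overhead

W_ideal = L / lam                    # no-overhead approximation: 0.5
W = (L / lam) / (t_service / T)      # adjusted for overhead: 0.625
print(W_ideal, W)
```

    Note that as t_overhead goes to 0, the adjusted W collapses back to L/λ, as it should.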