Comp111: Operating Systems
Classroom Exercise 7 answers
Fall 2017

In class we have discussed the basic pattern of thread execution, including shared memory. The exercise refers to the code:

int x = 0;
void *threaded_routine(void *v) {
    const int *n = (int *) v;
    int i;
    for (i = 0; i < 10; i++) {
        int y = x;      /* read shared x into a local copy */
        y++;            /* increment the local copy */
        sleep(1);       /* window in which other threads read the stale x */
        x = y;          /* write back, clobbering any concurrent updates */
        printf("%d: x=%d\n", *n, x);
    }
    return NULL;
}

  1. What happens when we execute 10 instances of this code in parallel, in shared memory threads?
    Answer: The same thing as the class example: the final value of x is 10 even though 100 increments are made. Each thread copies x into its own stack variable y, sleeps, and writes back; since every thread reads x before any other thread writes it back, the concurrent increments overwrite one another and only one survives per round.
  2. What happens if we instead surround the whole routine with pthread_mutex_lock and pthread_mutex_unlock before executing 10 instances?
    Answer: The output is 100, but only after roughly 100 seconds: the lock serializes the threads, so the ten one-second sleeps in each of the ten threads happen one after another.
  3. What would happen if we removed the sleep call and executed 10 instances?
    Answer: The result becomes nondeterministic: the threads race to read and write x. An increment survives only if no other thread overwrites x between a thread's read and its write; when a thread loses that race, its increment is lost.
  4. Suppose that the sleep has been removed. Where would you put the pthread_mutex_lock and pthread_mutex_unlock so that all increments would be preserved while completing multiple threads as quickly as possible?
    Answer: Put them around the read-increment-write sequence for x (the line int y=x; y++; x=y;). The critical section is then as short as possible: no increment is lost, and threads serialize only for the brief update itself.
  5. (Advanced) I claim that in using locks, one thread running alone will never call the kernel even though it uses pthread_mutex_lock and pthread_mutex_unlock. Why?
    Answer: If there is only one thread, there will never be a reason for the thread to block. It can lock the mutex in user space and unlock it in user space. The only reason to call the kernel is to block a thread.
  6. (Advanced) Locking mechanisms can be strong or weak. A strong locking mechanism always resumes locked threads in the order in which they blocked. A weak locking mechanism does not. Why is strong locking more expensive to implement?
    Answer: Strong locking requires that we keep an order of arrival of threads at the lock. Since we have to call the kernel to wake a thread from being blocked, this queue is usually stored in the kernel itself and the kernel handles sequencing.

    By contrast, weak locking doesn't require keeping any kind of a list; the threads can repeatedly poll for when the lock is available. Thus, from the point of view of the kernel, they never block! This wastes CPU while waiting, which is why pure polling is only appropriate when locks are held very briefly.