Lecture 4: Adaptive scheduling; threads

Adaptive multilevel queue scheduler

FCFS is simple, but it treats I/O-bound and CPU-bound processes the same, leading to poor responsiveness and long waiting times.

SJF is provably optimal on these fronts, but it requires predicting the future, which is impossible in general.

An adaptive multi-level queue is a hybrid approach. We estimate the length of a process's next CPU burst from its past behavior, and we run CPU-bound tasks with long quanta and I/O-bound tasks at high priority.

To do this, we maintain a collection of queues ranging in priority from highest to lowest. Each job is run with a quantum. If it consumes its quantum, then it looks like a CPU-bound job, so we decrease its priority by moving it to a lower priority queue: our goal is to quickly service user-facing (I/O-bound) processes. If it does I/O before its quantum expires, it is switched to the waiting state; when it returns from the waiting state, we place it in a higher priority queue so that the input can be serviced quickly (an easy implementation is to always add it to the highest priority queue, another possibility is to move it up by one priority level).
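As a sketch of this queue-management policy (the class name, quantum values, and interface here are invented for illustration, not a real kernel API):

```python
from collections import deque

class MLFQScheduler:
    """Sketch of an adaptive multilevel feedback queue.
    Jobs are opaque objects; the kernel would call on_quantum_expired
    or on_io_return depending on how each burst ended."""

    def __init__(self, num_levels=3, base_quantum=10):
        self.queues = [deque() for _ in range(num_levels)]  # index 0 = highest priority
        self.base_quantum = base_quantum

    def quantum(self, level):
        # lower-priority (CPU-bound) levels get longer quanta
        return self.base_quantum * (2 ** level)

    def add(self, job, level=0):
        self.queues[level].append(job)

    def pick_next(self):
        # always run from the highest-priority non-empty queue
        for level, q in enumerate(self.queues):
            if q:
                return level, q.popleft()
        return None, None

    def on_quantum_expired(self, job, level):
        # job looks CPU-bound: demote it one level (clamp at the bottom)
        self.queues[min(level + 1, len(self.queues) - 1)].append(job)

    def on_io_return(self, job):
        # job looks I/O-bound: easy policy, re-enter at highest priority
        self.queues[0].append(job)
```

The demotion-on-expiry and promotion-on-I/O rules are exactly the adaptation described above; the exponentially growing quanta are one common (but not the only) choice.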

Realtime scheduling

We also briefly discussed real time scheduling. Real time schedulers allow processes to request scheduling guarantees, such as a CPU burst of 10 ms sometime within the next 100 ms. In order to provide these guarantees, the scheduler must perform admission control: it needs the ability to deny requests for resources, and to kill or deschedule processes that attempt to use more resources than requested.
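A minimal sketch of such an admission test (this simplifies real admission control; here each request is assumed to be a (burst, period) pair meaning "burst ms of CPU in every period ms window"):

```python
def admit(current_requests, new_request):
    """Admit the new request only if total CPU utilization
    (sum of burst/period over all admitted requests) stays <= 100%.
    If admission is denied, the requester gets no guarantee."""
    total = sum(burst / period for burst, period in current_requests)
    new_burst, new_period = new_request
    return total + new_burst / new_period <= 1.0
```

A request for 10 ms out of every 100 ms contributes 10% utilization; the scheduler denies any request that would push the total past what the CPU can actually deliver.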

Multiprocessor/multicore scheduling

We have discussed scheduling a single processor. There are a few extra considerations when building a scheduler for a multiprocessor or multicore machine, such as processor affinity (a process runs faster on a core whose caches it has recently warmed) and load balancing across cores.

Threads

Multiple processes allow programmers to "do multiple things at the same time", but communication between processes is difficult: IPC is possible, but it requires system calls and may involve copying large amounts of data between processes.

Threads are like processes, but they share an address space. Multiple threads within a single process can communicate by simply reading and writing to shared variables in memory.

To create a thread, a programmer simply makes a system call to the kernel to fork a new thread (similar to how one forks a new process). The kernel creates a new Thread Control Block (TCB) to store the new thread's state. The thread control block shares a PCB with the parent thread.

The per-thread state of the computation (registers, ready/running/waiting status) is stored in the TCB, while the shared process-level information (VM configuration, permissions) is stored in the shared PCB.

This design is referred to as kernel level threading (or simply "kernel threads"), because the kernel is responsible for managing the TCBs. An alternative design is user-level threading, in which processes manage their own threading, and switch between threads using normal jump instructions inside an application-level scheduler. The Async library used in recent offerings of CS3110 is an example of a user-level threading library.

In order to support user-level threading, the kernel must provide a way for applications to request I/O without being transitioned to the waiting state. This is referred to as non-blocking or asynchronous I/O.
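For example, with a POSIX pipe in Python, a non-blocking read returns control immediately instead of transitioning the caller to the waiting state, which is what lets a user-level scheduler switch to another user-level thread:

```python
import os

# Create a pipe and put its read end in non-blocking mode.
r, w = os.pipe()
os.set_blocking(r, False)

# Nothing has been written yet. A blocking read would suspend the
# whole process; a non-blocking read raises BlockingIOError right
# away, so a user-level scheduler could run a different thread here.
try:
    os.read(r, 64)
    got_data = True
except BlockingIOError:
    got_data = False

os.write(w, b"hello")
data = os.read(r, 64)   # data is now available, so the read succeeds
```

Without this kernel support, one user-level thread doing I/O would block every thread in the process, since the kernel only sees a single schedulable entity.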

Message passing and shared memory

Communication between processes is usually accomplished by some combination of message passing and shared memory.

With message passing, one process waits for another to explicitly send it a message of some kind. One specific example is fork/join concurrency, where one master process (or thread) forks several helper (or worker or slave) threads, each doing a part of a computation. The master then waits until all of the helpers have finished, and then combines the results.
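A fork/join sketch using Python threads (the even slicing of the input is just one possible way to divide the work):

```python
import threading

def fork_join_sum(data, num_workers=4):
    """Fork/join: the master forks workers, each sums one slice,
    then the master joins them all and combines the partial sums."""
    results = [0] * num_workers
    chunk = (len(data) + num_workers - 1) // num_workers

    def worker(i):
        results[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()        # fork the helpers
    for t in threads:
        t.join()         # wait until all helpers have finished
    return sum(results)  # combine the results
```

Note that each worker writes only its own slot of `results`, so the threads never race with each other; the join is the only synchronization the master needs.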

With shared memory, processes interact by reading and writing the same variables in memory. Writing concurrent systems with shared memory is much harder than using message passing, but it can be more efficient. Moreover, it is the mechanism that the hardware provides, so we need to understand it to implement message passing services.

The Milk problem

We spent the remainder of class working on the following problem. Suppose we wish to write code for two threads ensuring that, after both threads have finished executing, a common resource has been acquired once and only once.

For example, the threads may represent roommates who both wish to use some milk from a shared fridge. If the milk is gone, one of them should run to the store and purchase it, but we should avoid having both roommates purchase milk at the same time.

The tools at our disposal so far: threads share memory, so they can load and store values to shared variables. We can think of this as a shared notepad that the roommates can both read and write on.

Evaluation criteria

Whenever solving synchronization problems, we must consider three criteria:

  1. Safety: nothing bad happens. Here, when the threads have finished, the milk has been bought once and only once.
  2. Liveness: something good eventually happens. Here, both threads eventually finish executing.
  3. Fairness: the burden is shared. Neither thread should do all of the waiting or all of the work.

These criteria are easy to satisfy independently; it is difficult to satisfy all of them together. For example, the following code is safe but not live:
safe but not live

Shared state: (none)

Thread one code:
1: while true:
2:   do nothing
Thread two code:
3: while true:
4:   do nothing
The following is live but not safe:
live but not safe
Shared state: (none)
Thread one code:
1: do nothing
Thread two code:
2: do nothing
The following code is safe and live but not fair:
live and safe but not fair
Shared state:
has_milk = False
Thread one code:
1: while not has_milk:
2:   do nothing
Thread two code:
3: buy_milk()
4: has_milk = True
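This third example can be run directly with Python threads; `buy_milk` is simulated by a counter so that the outcome can be checked (the spin loop is thread one doing all of the waiting, which is why the example is unfair):

```python
import threading

has_milk = False
purchases = 0          # counts calls to buy_milk, to check safety

def buy_milk():
    global purchases
    purchases += 1

def thread_one():
    while not has_milk:   # spin: thread one does all the waiting
        pass

def thread_two():
    global has_milk
    buy_milk()
    has_milk = True       # lets thread one out of its loop

t1 = threading.Thread(target=thread_one)
t2 = threading.Thread(target=thread_two)
t1.start(); t2.start()
t1.join(); t2.join()
```

Both threads finish (live), milk is bought exactly once (safe), but thread two always does the buying and thread one always spins (not fair).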

First attempts

Perhaps the most obvious thing to try is the following:

First attempt (not safe)
Shared state:
has_milk = False
Thread one code:
1: if not has_milk:
2:   buy_milk()
3:   has_milk = True
Thread two code: (same)
4: if not has_milk:
5:   buy_milk()
6:   has_milk = True

This code is not safe, because the following interleaving can occur:
  1. thread 1 executes line 1, discovers that has_milk is false, continues to line 2.
  2. a context switch to thread 2 occurs
  3. thread 2 executes line 4, discovers that has_milk is false, so continues to execute lines 5 and 6.
  4. a context switch occurs, returning to thread one, which is about to execute line 2. Thread one executes lines 2 and 3.

The milk has been bought twice, violating safety.
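This bad interleaving can be replayed deterministically, without real threads, by executing the steps in the stated order (`buy_milk` is again simulated by a counter):

```python
has_milk = False
purchases = 0  # counts how many times milk was bought

# step 1: thread 1 executes line 1 and decides to buy
t1_saw_milk = has_milk          # False

# steps 2-3: context switch; thread 2 runs lines 4-6 to completion
if not has_milk:
    purchases += 1              # thread 2 buys milk
    has_milk = True

# step 4: thread 1 resumes at line 2 with its stale decision
if not t1_saw_milk:
    purchases += 1              # thread 1 buys milk again
    has_milk = True
```

The stale value captured in `t1_saw_milk` plays the role of thread one's paused program state: the check and the purchase are separated, so the check can go out of date.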

One idea that was proposed is a "lock variable" that prevents one thread from proceeding while the other thread is busy:

second attempt (still not safe)
Shared state:
has_milk     = False
someone_busy = False
Thread one code:
1: while someone_busy:
2:   do nothing
3: someone_busy = True
4: if not has_milk:
5:   buy_milk()
6:   has_milk = True
7: someone_busy = False
Thread two code: (same)
11: while someone_busy:
12:   do nothing
13: someone_busy = True
14: if not has_milk:
15:   buy_milk()
16:   has_milk = True
17: someone_busy = False

The intent is that only one thread can be executing between lines 3 and 7 at a time, because the other thread will notice that there is already someone in the critical section and spin in the loop on lines 1 and 2 (or 11 and 12).

Unfortunately this code is still not safe, because a context switch can occur after a thread finishes line 1 but before it executes line 3. Specifically:

  1. thread one executes line 1. someone_busy is false, so it proceeds to line 3
  2. a context switch occurs; thread 2 is scheduled
  3. thread two executes line 11. someone_busy is still false, so it proceeds to execute lines 13 and 14; has_milk is false, so it is about to execute line 15.
  4. a context switch occurs. Thread one (which was paused at line 3) executes lines 3 and 4. has_milk is still false, so it also executes lines 5, 6, and 7.
  5. a context switch occurs, returning to thread two (which was paused at line 15). It executes lines 15, 16, and 17.

Again, milk has been bought twice, violating safety.

A third proposal was to use an operating-system level lock to do the synchronization for us, perhaps by descheduling the other process:

a third attempt (defines away the problem)
Shared state:
has_milk     = False
Thread one code:
1: system_call_to_force_thread_2_to_wait()
2: if not has_milk:
3:   buy_milk()
4:   has_milk = True
5: system_call_to_wake_up_thread_2()
Thread two code: (symmetric)
11: system_call_to_force_thread_1_to_wait()
12: if not has_milk:
13:   buy_milk()
14:   has_milk = True
15: system_call_to_wake_up_thread_1()

However, since this is 4410, we can't just assume that our operating system magically works. If we think about how we would implement this, the system call handler for system_call_to_force_thread_to_wait must solve a similar synchronization problem: access to the shared ready and waiting queues and to the TCBs needs to be carefully coordinated. On a single-processor machine this can be done by disabling interrupts or by programming the ready and waiting queues very carefully, but on a multiprocessor machine we are left with a problem equivalent to the original one.

I've asked you to think about the milk problem; we'll give a working solution tomorrow.