## OSTEP [Virtualization] Chapt. 07-11 - Scheduling Algorithms

2019-04-16 20:53 CST
2019-08-20 21:27 CST
CC BY-NC 4.0

## 7.1 Workload Assumptions

1. Each job runs for the same amount of time;
2. All jobs arrive at the same time;
3. Once started, each job runs to completion;
4. All jobs only use the CPU (they perform no I/O);
5. The run-time of each job is known.

## 7.2 Scheduling Metrics

Turnaround time is a performance metric, not a fairness metric.

## 7.3 First In, First Out (FIFO)

FIFO is also known as First Come, First Served (FCFS). It has some advantages: it is simple, easy to implement, and works well under our assumptions.

## 7.7 Round Robin

The idea of Round Robin (RR) scheduling: run a job for a while (a time slice, a.k.a. scheduling quantum), then switch to the next job in the queue. For this reason, RR is also called time-slicing.

##### TIP: Amortization Can Reduce Costs

The general technique of amortization is commonly used in systems when there is a ﬁxed cost to some operation. By incurring that cost less often (i.e., by performing the operation fewer times), the total cost to the system is reduced. For example, if the time slice is set to 10 ms, and the context-switch cost is 1 ms, roughly 10% of time is spent context switching and is thus wasted. If we want to amortize this cost, we can increase the time slice, e.g., to 100 ms. In this case, less than 1% of time is spent context switching, and thus the cost of time-slicing has been amortized.

## 7.8 Incorporating I/O

##### TIP: Overlap Enables Higher Utilization

When possible, overlap operations to maximize the utilization of systems. Overlap is useful in many different domains, including when performing disk I/O or sending messages to remote machines; in either case, starting the operation and then switching to other work is a good idea, and improves the overall utilization and efﬁciency of the system.

##### TIP: Learn From History

The multi-level feedback queue is an excellent example of a system that learns from the past to predict the future. Such approaches are common in operating systems (and many other places in Computer Science, including hardware branch predictors and caching algorithms). Such approaches work when jobs have phases of behavior and are thus predictable; of course, one must be careful with such techniques, as they can easily be wrong and drive a system to make worse decisions than they would have with no knowledge at all.

## 8.1 MLFQ: Basic Rules

MLFQ has a number of distinct queues, each with a different priority level. At any given time, a job that is ready to run sits in exactly one queue. MLFQ uses priorities to decide which job to run: it picks a job with the highest priority (i.e., in the highest queue).

MLFQ's basic rules:

• Rule 1: If Priority(A) > Priority(B), A runs;
• Rule 2: If Priority(A) = Priority(B), A and B run in round-robin.

MLFQ sets priorities based on the behavior it observes from each process. For example, a process that repeatedly gives up the CPU while waiting for keyboard input keeps a high priority, whereas a process that computes on the CPU for long stretches has its priority lowered. In this way, MLFQ tries to learn from a process's history to predict its future behavior.

## 8.2 Attempt #1: How To Change Priority

• Rule 3: When a job enters the system, it is placed at the highest priority.
• Rule 4a: If a job uses up an entire time slice while running, its priority is reduced;
• Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.

MLFQ first assumes a job is short and gives it the highest priority; if the assumption proves wrong, the priority drops step by step, until MLFQ discovers the job is long-running and it ends up at the lowest priority. In this way, MLFQ approximates SJF.

##### TIP: Scheduling Must Be Secure From Attack

You might think that a scheduling policy, whether inside the OS itself (as discussed herein), or in a broader context (e.g., in a distributed storage system’s I/O request handling), is not a security concern, but in increasingly many cases, it is exactly that. Consider the modern datacenter, in which users from around the world share CPUs, memories, networks, and storage systems; without care in policy design and enforcement, a single user may be able to adversely harm others and gain advantage for itself. Thus, scheduling policy forms an important part of the security of a system, and should be carefully constructed.

## 8.3 Attempt #2: The Priority Boost

• Rule 5: After some time period $S$, move all the jobs in the system to the topmost queue.

John Ousterhout, a well-regarded systems researcher, used to call such values in systems voo-doo constants, because they seemed to require some form of black magic to set them correctly.

## 8.4 Attempt #3: Better Accounting

• Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).

## 9.1 Basic Concept: Tickets Represent Your Share

##### TIP: Use Randomness

One of the most beautiful aspects of lottery scheduling is its use of randomness. When you have to make a decision, using such a randomized approach is often a robust and simple way of doing so.

Random approaches have at least three advantages over more traditional decisions. First, random often avoids strange corner-case behaviors that a more traditional algorithm may have trouble handling. For example, consider the LRU replacement policy (studied in more detail in a future chapter on virtual memory); while often a good replacement algorithm, LRU attains worst-case performance for some cyclic-sequential workloads. Random, on the other hand, has no such worst case.

Second, random also is lightweight, requiring little state to track alternatives. In a traditional fair-share scheduling algorithm, tracking how much CPU each process has received requires per-process accounting, which must be updated after running each process. Doing so randomly necessitates only the most minimal of per-process state (e.g., the number of tickets each has).

Finally, random can be quite fast. As long as generating a random number is quick, making the decision is also, and thus random can be used in a number of places where speed is required. Of course, the faster the need, the more random tends towards pseudo-random.

## 9.6 Why Not Deterministic?

Stride scheduling is a deterministic fair-share alternative to lottery scheduling: each job gets a stride inversely proportional to its ticket count and a running pass value, and the scheduler always runs the job with the lowest pass, incrementing it by the stride after each quantum:

```c
curr = remove_min(queue);   // pick process with min pass
schedule(curr);             // run for quantum
curr->pass += curr->stride; // update pass using stride
insert(queue, curr);        // return curr to queue
```


## 9.7 The Linux Completely Fair Scheduler (CFS)

CFS achieves fair scheduling with high efficiency and scalability. That said, it is way too sophisticated for me (it even uses a red-black tree internally, which killed me on the spot), so see Wikipedia to learn more.