# Concurrent Objects

## Sequential vs Concurrent

Sequential:

• Object needs meaningful state only between method calls
• Each method described in isolation
• Can add new methods without affecting older methods

Concurrent:

• Method call is not an event, but an interval
• Method calls can overlap, so the object might never be "between" method calls (i.e. there may be no point in time at which no call is in progress)
• Must characterize all possible interactions with concurrent calls
• Everything can potentially interact with everything else

## Linearizability

• Each method should "take effect" instantaneously between invocation and response events
• Object is correct if this "sequential" behaviour is correct
• A linearizable object: an object all of whose possible executions are linearizable

In some cases, linearization point depends on the execution.

(In the figure above, the read cannot return y, so that execution certainly cannot be linearized.)
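As a sketch, a lock-based FIFO queue (the class name `LockQueue` is my own) makes the linearization point easy to identify: each method "takes effect" at the update performed inside the critical section.

```java
import java.util.ArrayDeque;
import java.util.concurrent.locks.ReentrantLock;

// A minimal lock-based FIFO queue. Every method does its work inside the
// critical section, so the linearization point of each call is the update
// performed while the lock is held.
class LockQueue<T> {
    private final ArrayDeque<T> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();

    public void enq(T x) {
        lock.lock();
        try {
            items.addLast(x);                    // linearization point of enq
        } finally {
            lock.unlock();
        }
    }

    public T deq() {
        lock.lock();
        try {
            if (items.isEmpty())
                throw new IllegalStateException("empty");
            return items.removeFirst();          // linearization point of deq
        } finally {
            lock.unlock();
        }
    }
}
```

Because the lock serializes all calls, every concurrent execution is equivalent to the sequential history in which calls occur in lock-acquisition order.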

## Formal Model of Executions

Split method calls into two events:

• Invocation
  • thread, method name and arguments
  • e.g. `A q.enq(x)`
• Response
  • result or exception
  • `A q.enq(x)` returns `A q:void`
  • `A q.deq()` returns `A q:x`
  • `B q.deq()` throws `B q:empty`

A sequential history $$H$$ is legal if: for every object $$x$$, $$H \vert x$$ is in the sequential specification for $$x$$ (i.e. is legal).

• Given history $$H$$ and method executions $$m_0$$ and $$m_1$$ in $$H$$
• We say $$m_0 \rightarrow_H m_1$$ if $$m_0$$ precedes $$m_1$$ (i.e. the response event of $$m_0$$ precedes invocation event of $$m_1$$)
• The relation $$\rightarrow_H$$ is
  • a partial order in general
  • a total order if $$H$$ is sequential
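These definitions can be encoded as data. In the toy model below (class names `MethodCall` and `History` are my own, and the integer timestamps are an assumption), each call is an interval on a global timeline, and $$m_0 \rightarrow_H m_1$$ holds exactly when $$m_0$$'s response precedes $$m_1$$'s invocation.

```java
import java.util.List;

// Toy model of a history: each method call is an interval [inv, resp]
// on a global timeline.
class MethodCall {
    final String thread, object, name;
    final long inv, resp;   // timestamps of the invocation and response events
    MethodCall(String thread, String object, String name, long inv, long resp) {
        this.thread = thread; this.object = object; this.name = name;
        this.inv = inv; this.resp = resp;
    }
}

class History {
    // m0 ->_H m1 iff m0's response precedes m1's invocation.
    static boolean precedes(MethodCall m0, MethodCall m1) {
        return m0.resp < m1.inv;
    }

    // ->_H is a partial order in general; it is total (every pair of
    // distinct calls is ordered) exactly when no two calls overlap,
    // i.e. when the history is sequential.
    static boolean isSequential(List<MethodCall> h) {
        for (MethodCall a : h)
            for (MethodCall b : h)
                if (a != b && !precedes(a, b) && !precedes(b, a))
                    return false;
        return true;
    }
}
```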

History $$H$$ is linearizable if it can be extended to $$G$$ by

• Appending zero or more responses to pending invocations
• Discarding other pending invocations

so that $$G$$ is equivalent to

• a legal sequential history $$S$$
• where $$\rightarrow_G \subseteq \rightarrow_S$$, i.e. $$S$$ respects the "real-time order" of $$G$$

Composability Theorem: History $$H$$ is linearizable if and only if for every object $$x$$, $$H \vert x$$ is linearizable.

### Proof Strategy

Identify one atomic step where method "happens" (linearization point):

• Critical section?
• Machine instruction?

Does not always work: might need to define several different points for a given method.
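For instance, in a CAS-based retry loop the linearization point is the *successful* `compareAndSet`, so which instruction execution serves as the linearization point depends on the execution (how many retries occurred). A minimal sketch (the class name `CasCounter` is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A lock-free counter. The increment "happens" at the successful
// compareAndSet; failed attempts are not linearization points.
class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int old = value.get();
            if (value.compareAndSet(old, old + 1)) // linearization point (on success)
                return old + 1;
            // CAS failed: some other thread's CAS succeeded in between; retry
        }
    }

    public int get() {
        return value.get();    // linearization point: the atomic read
    }
}
```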

## Sequential Consistency

History $$H$$ is sequentially consistent if it can be extended to $$G$$ by

• Appending zero or more responses to pending invocations
• Discarding other pending invocations

so that $$G$$ is equivalent to

• a legal sequential history $$S$$

In sequential consistency, there is no need to preserve real-time order.

• Cannot re-order operations done by the same thread
• Can re-order non-overlapping operations done by different threads

Sequential consistency is often used to describe multiprocessor memory architectures.

E.g. in the figure below, the operations can be reordered so that the red thread's operation takes effect before the blue thread's, making the whole execution sequentially consistent.

Theorem: Sequential consistency is not compositional: even if $$H \vert x$$ is sequentially consistent for every object $$x$$, $$H$$ itself need not be.
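The standard example of a history that is sequentially consistent but not linearizable is: A calls `q.enq(x)` and it completes, then B calls `q.enq(y)` and it completes, then A calls `q.deq()` and gets y. The brute-force check below (the class `ScVsLin` and its encoding are my own toy construction) enumerates all orderings of the three calls: a legal FIFO order exists when only per-thread order must be preserved (SC), but not when real-time order must also be preserved (linearizability).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Call indices: 0 = A: enq(x), 1 = B: enq(y), 2 = A: deq() -> y,
// with real-time order 0 before 1 before 2 (the calls do not overlap).
class ScVsLin {
    // Is this ordering a legal behaviour of a FIFO queue?
    static boolean legalFifo(int[] order) {
        Deque<String> q = new ArrayDeque<>();
        for (int c : order) {
            if (c == 0) q.addLast("x");
            else if (c == 1) q.addLast("y");
            else if (!"y".equals(q.pollFirst())) return false; // deq must return y
        }
        return true;
    }

    static boolean respectsThreadOrder(int[] o) {   // A's calls: 0 before 2
        return indexOf(o, 0) < indexOf(o, 2);
    }

    static boolean respectsRealTime(int[] o) {      // 0 before 1 before 2
        return indexOf(o, 0) < indexOf(o, 1) && indexOf(o, 1) < indexOf(o, 2);
    }

    static int indexOf(int[] o, int c) {
        for (int i = 0; i < o.length; i++) if (o[i] == c) return i;
        return -1;
    }

    // Does a legal sequential order exist under the given constraints?
    static boolean existsOrder(boolean requireRealTime) {
        int[][] perms = {{0,1,2},{0,2,1},{1,0,2},{1,2,0},{2,0,1},{2,1,0}};
        for (int[] p : perms)
            if (legalFifo(p) && respectsThreadOrder(p)
                    && (!requireRealTime || respectsRealTime(p)))
                return true;
        return false;
    }
}
```

The only legal order is enq(y), enq(x), deq() → y, which reorders the two non-overlapping enqueues done by different threads; this is allowed under SC but forbidden under linearizability.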

## Synchronization

On modern multiprocessors, processors do not read and write directly to memory. Instead, each processor reads and writes directly to a cache.

When we want a write to actually reach memory so that threads can synchronize, we must announce it explicitly and pay the price:

• Memory barrier instruction
  • Flushes pending writes from write buffers and caches
  • Brings caches up to date
  • Expensive
• Volatile variables (e.g. Java `volatile` fields)
  • Keep the variable's value up to date across threads
  • Inhibit reordering, hoisting out of loops and other optimizations
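A minimal sketch of the Java `volatile` guarantee (the class name `Handoff` is my own): the plain write to `data` cannot be reordered past the volatile write to `ready`, so a reader that observes `ready == true` is guaranteed to see the data written before it.

```java
// Flag-based handoff using a volatile field for visibility and ordering.
class Handoff {
    private int data = 0;                   // plain (non-volatile) field
    private volatile boolean ready = false; // volatile: writes before this
                                            // store are visible after a load

    void produce(int value) {
        data = value;   // ordinary write ...
        ready = true;   // ... may not be reordered after this volatile write
    }

    // Returns -1 if the data is not ready yet.
    int consume() {
        if (!ready) return -1;  // volatile read
        return data;            // guaranteed to see the write in produce()
    }
}
```

Without `volatile` on `ready`, the compiler or hardware could reorder the two writes, and a concurrent reader could see `ready == true` but stale `data`.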

## Progress

Progress conditions:

• Deadlock-free: some thread trying to acquire the lock eventually succeeds.
• Starvation-free: every thread trying to acquire the lock eventually succeeds.
• Lock-free: some thread calling a method eventually returns.
• Wait-free: every thread calling a method eventually returns.
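To illustrate the gap between lock-free and wait-free, here is a Treiber-style lock-free stack (a standard construction; this particular sketch is my own): on every round of contention some thread's `compareAndSet` succeeds, so the system as a whole makes progress (lock-free), but any individual thread can in principle lose the race and retry forever (not wait-free).

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free (but not wait-free) stack: push/pop retry until their CAS
// succeeds; a failed CAS means some other thread's operation succeeded.
class TreiberStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T x) {
        Node<T> node = new Node<>(x);
        while (true) {
            Node<T> old = top.get();
            node.next = old;
            if (top.compareAndSet(old, node)) return;  // linearization point
        }
    }

    public T pop() {
        while (true) {
            Node<T> old = top.get();
            if (old == null) return null;              // empty stack
            if (top.compareAndSet(old, old.next)) return old.value;
        }
    }
}
```

By contrast, a lock-based stack is at best starvation-free (it depends on the lock), and a wait-free one would need a bounded number of steps per call regardless of what other threads do.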