Formal Semantics of Programming Languages

6 - Separation Logic

2019-12-06 10:00 CST
2019-12-20 11:13 CST
CC BY-NC 4.0

$\global\def\sepimp{\mathrel{-\mkern-6mu*}}$ Separation Logic: extension of Hoare Logic for reasoning about pointers

  • Low-level programming language
    • Extension of simple imperative language
    • Commands for allocating, accessing, mutating, and deallocating data structures
    • Dangling pointer faults
  • Program specification and proof
    • Extension of Hoare Logic
    • Separating conjunction ($*$) and implication ($\sepimp$)
  • Inductive definitions over abstract structures

Programming Language

New commands:

  • allocation $\textbf{cons}$
  • deallocation $\textbf{dispose}$
  • dereference $x := [e]$
  • mutation $[e] := e$

New states:

  • Atoms: subset of Integers, disjoint from Addresses $$\textbf{nil} \in \text{Atoms}$$
  • Addresses: subset of Integers
  • Stores: a mapping from a finite set $V$ of variables to values
  • Heaps: a finite partial mapping from Addresses to values

Notes:

  • Expressions depend only upon the store.
    • No side effects or nontermination.
    • $\text{cons}$ and $[-]$ are part of commands, not of expressions.
  • Allocation is nondeterministic: it may allocate at different addresses on different executions.

Memory faults: can be caused by out-of-range lookup or deallocation.

Operational Semantics

  • $s$: store (variable -> value)
  • $h$: heap (address -> value)

$${h(\llbracket e \rrbracket_{intexp} s) = n \over (x:=[e], (s,h)) \longrightarrow (\textbf{skip}, (s \lbrace x \leadsto n \rbrace, h))}$$

$${\llbracket e \rrbracket_{intexp} s \notin \text{dom}(h) \over (x:=[e], (s,h)) \longrightarrow \textbf{abort}}$$

$${\llbracket e \rrbracket_{intexp} s = l \ \ \ \ l \in \text{dom}(h) \over ([e] := e^\prime, (s,h)) \longrightarrow (\textbf{skip}, (s, h\lbrace l \leadsto \llbracket e^\prime \rrbracket_{intexp} s \rbrace))}$$

$${\llbracket e \rrbracket_{intexp} s \notin \text{dom}(h) \over ([e] := e^\prime, (s,h)) \longrightarrow \textbf{abort}}$$

$${\llbracket e_1 \rrbracket_{intexp} s = n_1 \ \ \ \ \llbracket e_2 \rrbracket_{intexp} s = n_2 \ \ \ \ \lbrace l, l+1 \rbrace \cap \text{dom}(h) = \emptyset \over (x := \textbf{cons}(e_1,e_2), (s,h)) \longrightarrow (\textbf{skip}, (s\lbrace x \leadsto l \rbrace, h\lbrace l \leadsto n_1, l+1 \leadsto n_2 \rbrace))}$$
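The rules above can be animated by a small interpreter. Below is a minimal sketch in Python (not part of the formal development): the state is a pair of dicts, expression evaluation $\llbracket e \rrbracket_{intexp} s$ is assumed to have been done by the caller, and $\textbf{cons}$ picks the smallest free address even though the semantics allows any fresh one.

```python
ABORT = "abort"

def lookup(x, addr, s, h):
    """x := [e], where addr is the value of e in store s."""
    if addr not in h:
        return ABORT                       # out-of-range lookup: memory fault
    return ({**s, x: h[addr]}, h)

def mutate(addr, val, s, h):
    """[e] := e', where addr is the value of e and val the value of e'."""
    if addr not in h:
        return ABORT                       # memory fault
    return (s, {**h, addr: val})

def cons2(x, v1, v2, s, h):
    """x := cons(e1, e2): allocate two fresh consecutive cells."""
    l = 1
    while l in h or l + 1 in h:            # any fresh l is allowed by the semantics;
        l += 1                             # this sketch simply takes the smallest
    return ({**s, x: l}, {**h, l: v1, l + 1: v2})

def dispose(addr, s, h):
    """dispose e, where addr is the value of e."""
    if addr not in h:
        return ABORT                       # deallocating a dangling pointer: fault
    return (s, {a: v for a, v in h.items() if a != addr})

# e.g. cons2("x", 3, 7, {}, {})  ==>  ({'x': 1}, {1: 3, 2: 7})
```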

Assertions

Standard predicate calculus:

$$\wedge \ \ \ \ \vee \ \ \ \ \neg \ \ \ \ \Rightarrow \ \ \ \ \forall \ \ \ \ \exists$$

plus:

  • $\textbf{emp}$: empty heap
  • $e \mapsto e^\prime$: singleton heap (contains one cell)
  • $p_1 * p_2$: separating conjunction (heap can be split into two disjoint parts)
  • $p_1 \sepimp p_2$: separating implication (if the heap is extended with a disjoint part in which $p_1$ holds, then $p_2$ holds for the extended heap)

Abbreviations:

  • $e \mapsto - \overset{\Delta}{=} \exists x^\prime.\ e \mapsto x^\prime$ where $x^\prime$ not free in $e$
  • $e \hookrightarrow e' \overset{\Delta}{=} e \mapsto e^\prime * \textbf{true}$
  • $e \mapsto e_1, \dots, e_n \overset{\Delta}{=} e \mapsto e_1 * \cdots * (e+n-1) \mapsto e_n$
  • $e \hookrightarrow e_1, \dots, e_n \overset{\Delta}{=} e \hookrightarrow e_1 * \cdots * (e+n-1) \hookrightarrow e_n$, which is equivalent to $(e \mapsto e_1, \dots, e_n) * \textbf{true}$

Examples of Separating Conjunction:

  1. $x \mapsto 3, y$ asserts that $[ x ]=3$, $[x+1]=y$
  2. $y \mapsto 3, x$ asserts that $[ y ]=3$, $[y+1]=x$

Examples of Conjunctions

  1. $(x \mapsto 3, y) * (y \mapsto 3, x)$ asserts that (1) and (2) hold for separate parts of the heap
  2. $(x \mapsto 3, y) \wedge (y \mapsto 3, x)$ asserts that (1) and (2) hold for the same heap, which happens only if $x = y$
  3. $(x \hookrightarrow 3, y) \wedge (y \hookrightarrow 3, x)$ asserts that either of the two assertions above (the $*$ or the $\wedge$ version) may hold, and that the heap may contain additional cells

Example of Separating Implication:

  • Suppose $p$ holds for
    • Store: $x : \alpha, \dots$
    • Heap: $[\alpha] = 3, [\alpha+1] = 4, \dots$
  • Then $(x \mapsto 3, 4) \sepimp p$ holds for
    • Store: $x : \alpha, \dots$
    • Heap: the rest of the heap above, i.e. with cells $\alpha$ and $\alpha+1$ removed (why?)
  • and $(x \mapsto 1, 2) * ((x \mapsto 3, 4) \sepimp p)$ holds for
    • Store: $x : \alpha, \dots$
    • Heap: $[\alpha] = 1, [\alpha+1] = 2, \dots$
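A standard reading of the last example (it anticipates the backward-reasoning mutation rule given later): a state satisfying $(x \mapsto 1, 2) * ((x \mapsto 3, 4) \sepimp p)$ is one that can be turned into a state satisfying $p$ by storing $3$ and $4$ into cells $x$ and $x+1$. As a sketch, writing $;$ for sequencing,

$$\lbrace (x \mapsto 1, 2) * ((x \mapsto 3, 4) \sepimp p) \rbrace \ [x]:=3;\ [x+1]:=4 \ \lbrace p \rbrace.$$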

Functions

  • $[x_1 : y_1 | \dots | x_n : y_n]$: function with domain $\lbrace x_1, \dots, x_n \rbrace$ that maps each $x_i$ to $y_i$.
  • $[f | x_1 : y_1 | \dots | x_n : y_n]$: the function whose domain is the union of the domain of $f$ and $\lbrace x_1, \dots, x_n \rbrace$, mapping each $x_i$ to $y_i$ and every other element $x$ of the domain of $f$ to $f\ x$.

For heaps:

  • $h_0 \perp h_1$: $h_0$ and $h_1$ have disjoint domains
  • $h_0 \cdot h_1$: the union of heaps with disjoint domains

The Meaning of Assertions

  • $s$: store
  • $h$: heap
  • $p$: assertions whose free variables belong to the domain of $s$

$s, h \vDash p$ indicates that the state $s, h$ satisfies $p$, or $p$ is true in $s, h$, or $p$ holds in $s, h$.

  • $s, h \vDash b$ iff $\llbracket b \rrbracket_{boolexp} s = \textbf{true}$
  • $s, h \vDash \neg p$ iff $s, h \vDash p$ is false
  • $s, h \vDash p_0 \wedge p_1$ iff $s, h \vDash p_0$ and $s, h \vDash p_1$ (similarly for $\vee, \Rightarrow, \dots$)
  • $s, h \vDash \forall v.\ p$ iff $\forall x \in \mathbb{Z}.\ [s | v:x], h \vDash p$
  • $s, h \vDash \exists v.\ p$ iff $\exists x \in \mathbb{Z}.\ [s | v:x], h \vDash p$
  • $s, h \vDash \textbf{emp}$ iff $\text{dom }h = \lbrace \rbrace$
  • $s, h \vDash e \mapsto e^\prime$ iff $\text{dom }h = \lbrace \llbracket e \rrbracket_{exp} s \rbrace$ and $h(\llbracket e \rrbracket_{exp} s) = \llbracket e^\prime \rrbracket_{exp} s$
  • $s, h \vDash p_0 * p_1$ iff $\exists h_0, h_1.\ h_0 \perp h_1$ and $h_0 \cdot h_1 = h$ and $s, h_0 \vDash p_0$ and $s, h_1 \vDash p_1$
  • $s, h \vDash p_0 \sepimp p_1$ iff $\forall h^\prime.\ (h^\prime \perp h \text{ and } s, h^\prime \vDash p_0)$ implies $s, h \cdot h^\prime \vDash p_1$

When $s, h \vDash p$ holds for all states $s, h$, we say that $p$ is valid.

When $s, h \vDash p$ holds for some state $s, h$, we say that $p$ is satisfiable.

Examples:

  • $s, h \vDash x \mapsto y$ iff $\text{dom } h = \lbrace s\ x \rbrace$ and $h(s\ x) = s\ y$
  • $s, h \vDash x \mapsto -$ iff $\text{dom } h = \lbrace s\ x \rbrace$
  • $s, h \vDash x \hookrightarrow y$ iff $s\ x \in \text{dom } h$ and $h(s\ x) = s\ y$
  • $s, h \vDash x \hookrightarrow -$ iff $s\ x \in \text{dom } h$
  • $s, h \vDash x \mapsto y, z$ iff $h = [s\ x : s\ y | s\ x + 1 : s\ z]$
  • $s, h \vDash x \mapsto -, -$ iff $\text{dom } h = \lbrace s\ x, s\ x + 1 \rbrace$
  • $s, h \vDash x \hookrightarrow y, z$ iff $h \supseteq [s\ x : s\ y | s\ x + 1 : s\ z]$
  • $s, h \vDash x \hookrightarrow -, -$ iff $\text{dom } h \supseteq \lbrace s\ x, s\ x + 1 \rbrace$
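These clauses can be checked mechanically on small finite models. The sketch below (Python) represents assertions as predicates on a store/heap pair; the quantification over heap extensions in the $\sepimp$ clause is restricted to a small, arbitrarily chosen universe of addresses and values, so it only approximates the real clause.

```python
from itertools import combinations, product

ADDRS = range(1, 4)   # bounded address universe (an assumption, for the -* clause)
VALS = range(0, 4)    # bounded value universe

def splits(h):
    """All ways of writing h as h0 . h1 with disjoint domains."""
    keys = list(h)
    for r in range(len(keys) + 1):
        for ks in combinations(keys, r):
            yield {k: h[k] for k in ks}, {k: v for k, v in h.items() if k not in ks}

def emp(s, h):
    return h == {}

def pto(e, e1):       # e |-> e': the singleton heap
    return lambda s, h: h == {e(s): e1(s)}

def star(p0, p1):     # separating conjunction: some split satisfies both parts
    return lambda s, h: any(p0(s, h0) and p1(s, h1) for h0, h1 in splits(h))

def sepimp(p0, p1):   # separating implication, over the bounded universe only
    def holds(s, h):
        free = [a for a in ADDRS if a not in h]
        for r in range(len(free) + 1):
            for addrs in combinations(free, r):
                for vals in product(VALS, repeat=r):
                    hp = dict(zip(addrs, vals))
                    if p0(s, hp) and not p1(s, {**h, **hp}):
                        return False
        return True
    return holds

# Examples matching the list above, with s x = 1 and s y = 3:
s = {"x": 1, "y": 3}
x_to_y = pto(lambda s: s["x"], lambda s: s["y"])
true = lambda s, h: True
print(x_to_y(s, {1: 3}))                          # True:  x |-> y
print(x_to_y(s, {1: 3, 2: 0}))                    # False: the domain is too big
print(star(x_to_y, true)(s, {1: 3, 2: 0}))        # True:  x "hookrightarrow" y
print(sepimp(x_to_y, star(x_to_y, true))(s, {}))  # True
```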

Inference Rules for Assertions

Inference Rules ($\mathcal{P}_1, \dots, \mathcal{P}_n$ are the premises and $\mathcal{C}$ is the conclusion): $${\mathcal{P}_1 \ \ \ \ \dots \ \ \ \ \mathcal{P}_n \over \mathcal{C}}$$

Note on soundness:

$\dfrac{p}{q}$ is sound iff for all instances, if $p$ is valid, then $q$ is valid.

$\dfrac{}{p \Rightarrow q}$ is sound iff for all instances, $p \Rightarrow q$ is valid.

Inference rules for $*$ and $\sepimp$:

  • $p_0 * p_1 \Leftrightarrow p_1 * p_0$
  • $(p_0 * p_1) * p_2 \Leftrightarrow p_0 * (p_1 * p_2)$
  • $p * \textbf{emp} \Leftrightarrow p$
  • $(p_0 \vee p_1) * q \Leftrightarrow (p_0 * q) \vee (p_1 * q)$
  • $(p_0 \wedge p_1) * q {\ \color{red}\Rightarrow\ } (p_0 * q) \wedge (p_1 * q)$
  • $(\exists x.\ p_0) * p_1 \Leftrightarrow \exists x.\ (p_0 * p_1)$ where $x$ not free in $p_1$
  • $(\forall x.\ p_0) * p_1 {\ \color{red}\Rightarrow\ } \forall x.\ (p_0 * p_1)$ where $x$ not free in $p_1$
  • Monotonicity: $${p_0 \Rightarrow p_1 \ \ \ \ q_0 \Rightarrow q_1 \over p_0 * q_0 \Rightarrow p_1 * q_1}$$
  • Currying: $${p_0 * p_1 \Rightarrow p_2 \over p_0 \Rightarrow (p_1 \sepimp p_2)}$$
  • Decurrying: $${p_0 \Rightarrow (p_1 \sepimp p_2) \over p_0 * p_1 \Rightarrow p_2}$$

Two Unsound Axiom Schemata:

  • Contraction: $p \Rightarrow p * p$ (e.g. $p: x \mapsto 1$)
  • Weakening: $p * q \Rightarrow p$ (e.g. $p: x \mapsto 1$ and $q: y \mapsto 2$)
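Concrete countermodels show why these fail. For contraction with $p : x \mapsto 1$, take $s = [x : \alpha]$ and $h = [\alpha : 1]$: then $s, h \vDash x \mapsto 1$, but $(x \mapsto 1) * (x \mapsto 1)$ fails, since any split $h = h_0 \cdot h_1$ puts the single cell $\alpha$ into only one of $h_0, h_1$. For weakening with $p : x \mapsto 1$ and $q : y \mapsto 2$, take $s = [x : \alpha | y : \beta]$ with $\alpha \neq \beta$ and $h = [\alpha : 1 | \beta : 2]$: then $s, h \vDash p * q$, but $s, h \nvDash p$, since the domain of $h$ is not exactly $\lbrace \alpha \rbrace$.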

Some Axiom Schemata for $\mapsto$:

  • $(e_1 \mapsto e_1^\prime) \wedge (e_2 \mapsto e_2^\prime) \Leftrightarrow (e_1 \mapsto e_1^\prime) \wedge (e_1 = e_2) \wedge (e_1^\prime = e_2^\prime)$
  • $(e_1 \hookrightarrow e_1^\prime) * (e_2 \hookrightarrow e_2^\prime) {\ \color{red}\Rightarrow\ } e_1 \neq e_2$
  • $\textbf{emp} \Leftrightarrow \forall x.\ \neg(x \hookrightarrow -)$
  • $(e \hookrightarrow e^\prime) \wedge p {\ \color{red}\Rightarrow\ } (e \mapsto e^\prime) * ((e \mapsto e^\prime) \sepimp p)$

Classes of Assertions

1. Pure Assertions

An assertion $p$ is pure iff, for all stores $s$ and all heaps $h$ and $h^\prime$, $$s, h \vDash p \text{ iff } s, h^\prime \vDash p.$$

A sufficient syntactic criterion: an assertion is pure if it does not contain $\textbf{emp}$, $\mapsto$, or $\hookrightarrow$.

Axiom Schemata for Purity:

  • $p_0 \wedge p_1 \Rightarrow p_0 * p_1$ when $p_0$ or $p_1$ is pure
  • $p_0 * p_1 \Rightarrow p_0 \wedge p_1$ when $p_0$ and $p_1$ are pure
  • $(p \wedge q) * r \Leftrightarrow (p * r) \wedge q$ when $q$ is pure
  • $(p_0 \sepimp p_1) \Rightarrow (p_0 \Rightarrow p_1)$ when $p_0$ is pure
  • $(p_0 \Rightarrow p_1) \Rightarrow (p_0 \sepimp p_1)$ when $p_0$ and $p_1$ are pure

2. Strictly Exact Assertions (Yang)

An assertion $p$ is strictly exact iff, for all stores $s$ and all heaps $h$ and $h^\prime$, $$s, h \vDash p \text{ and } s, h^\prime \vDash p \text{ implies } h = h^\prime.$$

Examples of Strictly Exact Assertions:

  • $\textbf{emp}$
  • $e \mapsto e^\prime$
  • $p * q$ where $p$ and $q$ are strictly exact
  • $p \wedge q$ where $p$ or $q$ is strictly exact
  • $p$ where $p \Rightarrow q$ is valid and $q$ is strictly exact

Proposition: When $q$ is strictly exact, $$((q * \textbf{true}) \wedge p) \Rightarrow (q * (q \sepimp p))$$ is valid. (This leads to the final axiom schema for $\mapsto$, i.e. $(e \hookrightarrow e^\prime) \wedge p \Rightarrow (e \mapsto e^\prime) * ((e \mapsto e^\prime) \sepimp p)$.)
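A proof sketch of the proposition: suppose $s, h \vDash (q * \textbf{true}) \wedge p$. Then there is an $h_0 \subseteq h$ with $s, h_0 \vDash q$, and $s, h \vDash p$. To see that $s, h - h_0 \vDash q \sepimp p$, take any $h^\prime \perp (h - h_0)$ with $s, h^\prime \vDash q$; strict exactness gives $h^\prime = h_0$, so $(h - h_0) \cdot h^\prime = h$ and $s, h \vDash p$. Hence $h_0$ and $h - h_0$ witness $s, h \vDash q * (q \sepimp p)$. The final axiom schema follows by taking $q$ to be the strictly exact $e \mapsto e^\prime$ and using $e \hookrightarrow e^\prime \Leftrightarrow (e \mapsto e^\prime) * \textbf{true}$.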

3. Precise Assertions

An assertion $q$ is precise iff for all $s$ and $h$, there is at most one $h' \subseteq h$ such that $$s, h' \vDash q.$$

Examples of Precise Assertions:

  • all strictly exact assertions
  • $e \mapsto -$
  • $p * q$ where $p$ and $q$ are precise
  • $p \wedge q$ where $p$ or $q$ is precise
  • $p$ where $p \Rightarrow q$ is valid and $q$ is precise

Proposition: When $q$ is precise, $$(p_0 * q) \wedge (p_1 * q) \Rightarrow (p_0 \wedge p_1) * q$$ is valid. When $q$ is precise and $x$ is not free in $q$, $$\forall x.\ (p * q) \Rightarrow (\forall x.\ p) * q$$ is valid.
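A small countermodel showing that the first schema needs preciseness: take the imprecise $q : (x \mapsto 1) \vee (y \mapsto 1)$, $p_0 : x \mapsto 1$, $p_1 : y \mapsto 1$, the store $s = [x : \alpha | y : \beta]$ with $\alpha \neq \beta$, and the heap $h = [\alpha : 1 | \beta : 1]$. Then $s, h \vDash (p_0 * q) \wedge (p_1 * q)$ (split the heap either way around), but $p_0 \wedge p_1$ is unsatisfiable, so $s, h \nvDash (p_0 \wedge p_1) * q$.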

4. Intuitionistic Assertions

An assertion $i$ is intuitionistic iff, for all stores $s$ and heaps $h$ and $h^\prime$, $$(h \subseteq h^\prime \text{ and } s, h \vDash i) \text{ implies } s, h^\prime \vDash i.$$

Assume $i$ and $i^\prime$ are intuitionistic assertions, and $p$ is any assertion.

The following assertions are intuitionistic:

  • any pure assertion
  • $p * i$
  • $p \sepimp i$
  • $i \sepimp p$
  • $i \wedge i^\prime$
  • $i \vee i^\prime$
  • $\forall v.\ i$
  • $\exists v.\ i$

Special cases:

  • $p * \textbf{true}$
  • $\textbf{true} \sepimp p$
  • $e \hookrightarrow e^\prime$

The following inference rules are sound:

  • $(i * i^\prime) \Rightarrow (i \wedge i^\prime)$
  • $(i * p) \Rightarrow i$
  • $i \Rightarrow (p \sepimp i)$
  • $\dfrac{p \Rightarrow i}{(p * \textbf{true}) \Rightarrow i}$
  • $\dfrac{i \Rightarrow p}{i \Rightarrow (\textbf{true} \sepimp p)}$

Specifications and Inference Rules

Specifications

  • Partial correctness: $\lbrace p \rbrace \ c \ \lbrace q \rbrace$
  • Total correctness: $[p] \ c \ [q]$

Differences from Hoare Logic:

  • Specifications are universally quantified implicitly over both stores and heaps,
  • Specifications are universally quantified implicitly over all possible executions,
  • Any execution (from a state satisfying $p$) that gives a memory fault falsifies both partial and total specifications.

Hoare’s Inference Rules

  • Strengthening Precedent $${p \Rightarrow q \ \ \ \ \lbrace q \rbrace \ c \ \lbrace r \rbrace \over \lbrace p \rbrace \ c \ \lbrace r \rbrace}$$
  • Weakening Consequent $${\lbrace p \rbrace \ c \ \lbrace q \rbrace \ \ \ \ q \Rightarrow r \over \lbrace p \rbrace \ c \ \lbrace r \rbrace}$$

Rule of Constancy (unsound)

$${\lbrace p \rbrace \ c \ \lbrace q \rbrace \over \lbrace p \wedge r \rbrace \ c \ \lbrace q \wedge r \rbrace}$$ is unsound, e.g. $${\lbrace x \mapsto - \rbrace \ [x]:=4 \ \lbrace x \mapsto 4 \rbrace \over \lbrace x \mapsto - \wedge y \mapsto 3 \rbrace \ [x]:=4 \ \lbrace x \mapsto 4 \wedge y \mapsto 3 \rbrace}$$ does not hold when $x = y$.

The Frame Rule

$${\lbrace p \rbrace \ c \ \lbrace q \rbrace \over \lbrace p * r \rbrace \ c \ \lbrace q * r \rbrace}$$ where no variable occurring free in $r$ is modified by $c$.

Reasoning and specification are confined to the cells that the program actually accesses; the value of any other cell automatically remains unchanged.
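For instance, framing $y \mapsto 3$ onto the local specification of a mutation gives $${\lbrace x \mapsto - \rbrace \ [x]:=4 \ \lbrace x \mapsto 4 \rbrace \over \lbrace (x \mapsto -) * (y \mapsto 3) \rbrace \ [x]:=4 \ \lbrace (x \mapsto 4) * (y \mapsto 3) \rbrace}$$ which, unlike the rule-of-constancy instance above, is sound: the side condition holds because $[x]:=4$ modifies no variables, and the $*$ forces $y$ to denote a cell different from $x$.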

Local reasoning:

  • The set of variables and heap cells that may actually be used by a command is called its footprint.
  • If $\lbrace p \rbrace \ c \ \lbrace q \rbrace$ is valid, then $p$ will assert that the heap contains all the cells in the footprint of $c$.
  • If $p$ asserts that the heap contains only cells in the footprint of $c$, then $\lbrace p \rbrace \ c \ \lbrace q \rbrace$ is a local specification.
  • If $c^\prime$ contains $c$, it may have a larger footprint described by $p * r$.

Soundness of the Frame Rule

Consider $\textbf{dispose }x$ and the candidate specification $$\lbrace \textbf{emp} \rbrace \ \textbf{dispose }x \ \lbrace \textbf{emp} \rbrace.$$

If this premise were accepted, the frame rule would give $$\lbrace \textbf{emp} * x \mapsto 10 \rbrace \ \textbf{dispose }x \ \lbrace \textbf{emp} * x \mapsto 10 \rbrace$$

and thus $$\lbrace x \mapsto 10 \rbrace \ \textbf{dispose }x \ \lbrace x \mapsto 10 \rbrace,$$

which is patently false. The premise is ruled out precisely because memory faults falsify specifications: $\textbf{dispose }x$ aborts on the empty heap, so $\lbrace \textbf{emp} \rbrace \ \textbf{dispose }x \ \lbrace \textbf{emp} \rbrace$ is not valid. Soundness of the frame rule rests on this fault-avoiding interpretation together with the following properties of the language.

We define:

  • If, starting in the state $s, h$, no execution of a command $c$ aborts, then $c$ is safe at $s, h$.
  • If, starting in the state $s, h$, every execution of $c$ terminates without aborting, then $c$ must terminate normally at $s, h$.

Then the programming language satisfies Safety Monotonicity:

  • If $\hat h \subseteq h$ and $c$ is safe at $s, h - \hat h$, then $c$ is safe at $s, h$.
  • If $\hat h \subseteq h$ and $c$ must terminate normally at $s, h - \hat h$, then $c$ must terminate normally at $s, h$.

The programming language also satisfies the Frame Property:

  • If $\hat h \subseteq h$, $c$ is safe at $s, h - \hat h$, and some execution of $c$ starting at $s, h$ terminates normally in the state $s^\prime, h^\prime$, then $\hat h \subseteq h^\prime$ and some execution of $c$ starting at $s, h - \hat h$ terminates normally in the state $s^\prime, h^\prime - \hat h$.

Proposition: If the programming language satisfies safety monotonicity and the frame property, then the frame rule is sound for both partial and total correctness.

Locality = Safety Monotonicity + the Frame Property
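A proof sketch for the partial-correctness case: suppose $\lbrace p \rbrace \ c \ \lbrace q \rbrace$ is valid and $s, h \vDash p * r$, say $h = h_p \cdot h_r$ with $s, h_p \vDash p$ and $s, h_r \vDash r$. Validity rules out faults, so $c$ is safe at $s, h_p$, and by safety monotonicity $c$ is safe at $s, h$. If an execution from $s, h$ terminates normally in $s^\prime, h^\prime$, the frame property (with $\hat h = h_r$) gives $h_r \subseteq h^\prime$ and an execution from $s, h_p$ terminating in $s^\prime, h^\prime - h_r$; validity then gives $s^\prime, h^\prime - h_r \vDash q$. Since $c$ modifies no variable free in $r$, we still have $s^\prime, h_r \vDash r$, so $s^\prime, h^\prime \vDash q * r$.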

Inference Rules for Mutation

The local form (MUL): $${\over \lbrace e \mapsto - \rbrace \ [e]:=e^\prime \ \lbrace e \mapsto e^\prime \rbrace}$$

The global form (MUG): $${\over \lbrace (e \mapsto -) * r \rbrace \ [e]:=e^\prime \ \lbrace (e \mapsto e^\prime) * r \rbrace}$$

  • MUG can be derived from MUL using the frame rule.
  • If we set $r$ to $\textbf{emp}$, MUG becomes MUL.

The backward-reasoning form (MUBR): $${\over \lbrace (e \mapsto -) * ((e \mapsto e^\prime) \sepimp p) \rbrace \ [e]:=e^\prime \ \lbrace p \rbrace}$$
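MUBR can be derived from MUG: instantiate $r$ with $(e \mapsto e^\prime) \sepimp p$, giving $$\lbrace (e \mapsto -) * ((e \mapsto e^\prime) \sepimp p) \rbrace \ [e]:=e^\prime \ \lbrace (e \mapsto e^\prime) * ((e \mapsto e^\prime) \sepimp p) \rbrace,$$ then weaken the consequent with the valid implication $(e \mapsto e^\prime) * ((e \mapsto e^\prime) \sepimp p) \Rightarrow p$ (decurrying applied to the trivial $((e \mapsto e^\prime) \sepimp p) \Rightarrow ((e \mapsto e^\prime) \sepimp p)$, plus commutativity of $*$).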

Inference Rules for Deallocation

The local form (DISL): $${\over \lbrace e \mapsto - \rbrace \ \textbf{dispose } e \ \lbrace \textbf{emp} \rbrace}$$

The global (and backward-reasoning) form (DISBR): $${\over \lbrace (e \mapsto -) * r \rbrace \ \textbf{dispose } e \ \lbrace r \rbrace}$$

Inference Rules for Nonoverwriting Allocation

Abbreviate $e_1, e_2, \dots, e_n$ by $\overline{e}$.

The local form (CONSNOL): $${\over \lbrace \textbf{emp} \rbrace \ v:=\textbf{cons}(\overline{e}) \ \lbrace v \mapsto \overline{e} \rbrace}$$ where $v \notin \text{FV}(\overline{e})$.

The global form (CONSNOG): $${\over \lbrace r \rbrace \ v:=\textbf{cons}(\overline{e}) \ \lbrace (v \mapsto \overline{e}) * r \rbrace}$$ where $v \notin \text{FV}(\overline{e}) \cup \text{FV}(r)$.
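A typical use of CONSNOG is consing a new element onto a list (a sketch that anticipates the $\text{list}$ predicate defined at the end of these notes): $$\lbrace \text{list } \alpha\ y \rbrace \ x:=\textbf{cons}(a, y) \ \lbrace (x \mapsto a, y) * (\text{list } \alpha\ y) \rbrace$$ provided $x$ is distinct from $a$ and $y$ and does not occur in $\alpha$; the postcondition implies $\text{list } (a \cdot \alpha)\ x$ by taking $j := y$ in the definition of $\text{list}$.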

Inference Rules for General Allocation

The local form (CONSL): $${\over \lbrace v = v^\prime \wedge \textbf{emp} \rbrace \ v:= \textbf{cons}(\overline{e}) \ \lbrace v \mapsto \overline{e}\left[{v^\prime \over v}\right] \rbrace}$$ where $v^\prime$ is distinct from $v$.

The global form (CONSG): $${\over \lbrace r \rbrace \ v:=\textbf{cons}(\overline{e}) \ \lbrace \exists v^\prime.\ (v \mapsto \overline{e}\left[{v^\prime \over v}\right]) * r\left[{v^\prime \over v}\right] \rbrace}$$ where $v^\prime$ is distinct from $v$, $v^\prime \notin \text{FV}(\overline{e}) \cup \text{FV}(r)$.

The backward-reasoning form (CONSBR): $${\over \lbrace \forall v^{\prime\prime}.\ (v^{\prime\prime} \mapsto \overline{e}) \sepimp p\left[{v^{\prime\prime} \over v}\right] \rbrace \ v:=\textbf{cons}(\overline{e}) \ \lbrace p \rbrace}$$ where $v^{\prime\prime}$ is distinct from $v$ and $v^{\prime\prime} \notin \text{FV}(\overline{e}) \cup \text{FV}(p)$.

Unlike the mutation rules, setting $r$ to $\textbf{emp}$ in CONSG does not yield CONSL: for example, it gives only $$\lbrace \textbf{emp} \rbrace \ i:=\textbf{cons}(3,i) \ \lbrace \exists j.\ i \mapsto 3, j \rbrace,$$ in which $j$ is not tied to the original value of $i$, whereas CONSL records that value via $v^\prime$.

Inference Rules for Nonoverwriting Lookup

The local form (LKNOL): $${\over \lbrace e \mapsto v^{\prime\prime} \rbrace \ v:=[e] \ \lbrace v = v^{\prime\prime} \wedge (e \mapsto v) \rbrace }$$ where $v \notin \text{FV}(e)$.

The global form (LKNOG): $${\over \lbrace \exists v^{\prime\prime}.\ (e \mapsto v^{\prime\prime}) * p\left[{v^{\prime\prime} \over v}\right] \rbrace \ v:=[e] \ \lbrace (e \mapsto v) * p \rbrace}$$ where $v \notin \text{FV}(e)$, $v^{\prime\prime} \notin \text{FV}(e) \cup (\text{FV}(p) - \lbrace v \rbrace)$.

Inference Rules for General Lookup

The local form (LKL): $${\over \lbrace v = v^\prime \wedge (e \mapsto v^{\prime\prime}) \rbrace \ v:=[e] \ \lbrace v = v^{\prime\prime} \wedge (e\left[{v^\prime \over v}\right] \mapsto v)\rbrace}$$ where $v$, $v^\prime$ and $v^{\prime\prime}$ are distinct.

The global form (LKG): $${\over \lbrace \exists v^{\prime\prime}.\ (e \mapsto v^{\prime\prime}) * r\left[{v \over v^\prime}\right] \rbrace \ v:=[e] \ \lbrace \exists v^\prime.\ (e\left[{v^\prime \over v}\right] \mapsto v) * r\left[{v \over v^{\prime\prime}}\right]\rbrace}$$ where $v$, $v^\prime$ and $v^{\prime\prime}$ are distinct, $v^\prime, v^{\prime\prime} \notin \text{FV}(e)$, $v \notin \text{FV}(r)$.

The backward-reasoning forms (LKBR1 and LKBR2): $${\over \lbrace \exists v^{\prime\prime}.\ (e \mapsto v^{\prime\prime}) * ((e \mapsto v^{\prime\prime}) \sepimp p\left[{v^{\prime\prime} \over v}\right]) \rbrace \ v:=[e] \ \lbrace p \rbrace}$$ where $v^{\prime\prime} \notin \text{FV}(e) \cup (\text{FV}(p) - \lbrace v \rbrace)$.

$${\over \lbrace \exists v^{\prime\prime}.\ (e \hookrightarrow v^{\prime\prime}) \wedge p\left[{v^{\prime\prime} \over v}\right] \rbrace \ v:=[e] \ \lbrace p \rbrace}$$ where $v^{\prime\prime} \notin \text{FV}(e) \cup (\text{FV}(p) - \lbrace v \rbrace)$.
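LKBR2 can be derived from LKBR1: by the last axiom schema for $\mapsto$ above, $$(e \hookrightarrow v^{\prime\prime}) \wedge p\left[{v^{\prime\prime} \over v}\right] \Rightarrow (e \mapsto v^{\prime\prime}) * ((e \mapsto v^{\prime\prime}) \sepimp p\left[{v^{\prime\prime} \over v}\right]),$$ so (distributing the $\exists v^{\prime\prime}$) the precondition of LKBR2 implies the precondition of LKBR1, and strengthening the precedent gives LKBR2.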

Lists and List Segments

Notations:

  • $\alpha, \beta$ are sequences
  • $\epsilon$: the empty sequence
  • $[a]$: the single-element sequence containing $a$
  • $\alpha \cdot \beta$: the composition of $\alpha$ followed by $\beta$
  • $\alpha^\dag$: the reflection of $\alpha$
  • $\#\alpha$: the length of $\alpha$
  • $\alpha_i$: the $i$-th component of $\alpha$

Singly-linked Lists

$\text{list } \alpha\ i$ is defined as (by induction on the length of the sequence $\alpha$):

$$\text{list } \epsilon\ i \overset{\Delta}{=} \textbf{emp} \wedge (i = \textbf{nil})$$

$$\text{list } (a \cdot \alpha)\ i \overset{\Delta}{=} \exists j.\ (i \mapsto a, j) * (\text{list } \alpha\ j)$$
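For instance, unfolding the definition on a two-element sequence gives $$\text{list } (a \cdot b \cdot \epsilon)\ i \Leftrightarrow \exists j.\ (i \mapsto a, j) * \bigl(\exists k.\ (j \mapsto b, k) * (\textbf{emp} \wedge k = \textbf{nil})\bigr) \Leftrightarrow \exists j.\ (i \mapsto a, j) * (j \mapsto b, \textbf{nil}),$$ i.e. two disjoint two-cell records, the first linking to the second and the second terminated by $\textbf{nil}$.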

Singly-linked List Segments

$$\text{lseg}\ \epsilon\ (i,j) \overset{\Delta}{=} \textbf{emp} \wedge (i = j)$$

$$\text{lseg}\ (a\cdot\alpha)\ (i, k) \overset{\Delta}{=} \exists j.\ (i \mapsto a, j) * (\text{lseg}\ \alpha\ (j, k))$$

Emptiness Conditions

For lists: $$\text{list } \alpha\ i \Rightarrow (i = \textbf{nil} \Leftrightarrow \alpha = \epsilon)$$

For list segments: $$\text{lseg } \alpha \ (i,j) \Rightarrow (i = \textbf{nil} \Rightarrow (\alpha = \epsilon \wedge j = \textbf{nil}))$$ $$\text{lseg } \alpha \ (i,j) \Rightarrow (i \neq j \Rightarrow \alpha \neq \epsilon)$$

Nontouching List Segments

Consider $\text{lseg } (a_1 \cdots a_n) (i_0, i_n)$.

$i_n$ is not constrained and may equal any of $i_0$ to $i_{n-1}$, in which case the list segment is **touching**.

Types of list segments:

  • nontouching
  • cyclic
    • touching
    • overlapping (forbidden by $*$)

Nontouching list segments can be defined inductively as $$\text{ntlseg } \epsilon\ (i,j) \overset{\Delta}{=} \textbf{emp} \wedge (i=j)$$ $$\text{ntlseg } (a \cdot \alpha)\ (i,k) \overset{\Delta}{=} (i \neq k) \wedge (i+1 \neq k) \wedge \bigl(\exists j.\ (i \mapsto a,j) * (\text{ntlseg } \alpha\ (j,k))\bigr)$$

Or equivalently: $$\text{ntlseg } \alpha\ (i,j) \overset{\Delta}{=} \text{lseg } \alpha (i, j) \wedge \neg(j \hookrightarrow -)$$

The advantage of a nontouching list segment: $$\text{ntlseg } \alpha\ (i,j) \Rightarrow (\alpha = \epsilon \Leftrightarrow i = j)$$

Preciseness of List Assertions

Precise assertions:

  • $\text{list } \alpha\ i$
  • $\text{lseg } \alpha\ (i, j)$
  • $\text{ntlseg } \alpha\ (i, j)$
  • $\exists \alpha.\ \text{list }\alpha\ i$
  • $\exists \alpha.\ \text{ntlseg }\alpha\ (i, j)$

Nonprecise assertion: $\exists \alpha.\ \text{lseg }\alpha\ (i,j)$

Doubly-linked List Segments

$$\text{dlseg } \epsilon\ (i, i^\prime, j, j^\prime) \overset{\Delta}{=} \textbf{emp} \wedge (i=j) \wedge (i^\prime = j^\prime)$$

$$\text{dlseg } (a \cdot \alpha)\ (i, i^\prime, k, k^\prime) \overset{\Delta}{=} \exists j.\ (i \mapsto a, j, i^\prime) * (\text{dlseg } \alpha\ (j, i, k, k^\prime))$$
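Unfolding the definition on a one-element sequence shows how the four parameters fit together: $$\text{dlseg } (a \cdot \epsilon)\ (i, i^\prime, k, k^\prime) \Leftrightarrow \exists j.\ (i \mapsto a, j, i^\prime) * (\textbf{emp} \wedge (j = k) \wedge (i = k^\prime)) \Leftrightarrow (i \mapsto a, k, i^\prime) \wedge (i = k^\prime),$$ i.e. a single three-cell record at $i$ whose forward link is $k$ and backward link is $i^\prime$, and which is itself the last node of the segment ($k^\prime = i$).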