Problem Solving 4-09: Algebraic Coding

Category: Problem Solving. Published 2019-04-20 15:24:56, updated 2019-04-27 23:27:39.

TJ-AATA Chapter 8: Algebraic Coding Theory

8.1 Error-Detecting and Correcting Codes

Typically, messages consist of binary $m$-tuples. These messages are encoded into codewords, consisting of binary $n$-tuples, by a device called an encoder. The message is transmitted and then decoded.

An error occurs if there is a change in one or more bits in the codeword. A decoding scheme is a method that either converts an arbitrarily received $n$-tuple into a meaningful decoded message or gives an error message for that $n$-tuple.

Parity-Check Codes

DC2019-6

Maximum-Likelihood Decoding

We will assume that transmission errors are rare and that, when they do occur, they occur independently in each bit; that is, if $p$ is the probability of an error in one bit and $q$ is the probability of an error in a different bit, then the probability of errors occurring in both of these bits at the same time is $pq$. We will also assume that a received $n$-tuple is decoded into a codeword that is closest to it; that is, we assume that the receiver uses maximum-likelihood decoding.

A binary-symmetric channel is a model that consists of a transmitter capable of sending a binary signal, either a 0 or a 1, together with a receiver.

Theorem 8.7 If a binary $n$-tuple $(x_1, \dots, x_n)$ is transmitted across a binary symmetric channel with probability $p$ that no error will occur in each coordinate, then the probability that there are errors in exactly $k$ coordinates is $$\binom{n}{k} q^k p^{n - k},$$ where $q = 1 - p$.
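A quick numerical check of Theorem 8.7 for a hypothetical channel (the choice $p = 0.99$ and $n = 6$ is only for illustration):

```python
from math import comb

def prob_k_errors(n: int, k: int, p: float) -> float:
    """Probability of errors in exactly k of n coordinates on a binary
    symmetric channel, where p is the probability that a single bit is
    transmitted correctly and q = 1 - p is the probability of an error."""
    q = 1 - p
    return comb(n, k) * q**k * p**(n - k)

# Hypothetical channel: each bit arrives intact with probability 0.99.
print(prob_k_errors(6, 0, 0.99))   # ~0.9415, no errors in a 6-tuple
print(prob_k_errors(6, 1, 0.99))   # ~0.0571, exactly one error
```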

Block Codes

A code is an $(n, m)$-block code if the information that is to be coded can be divided into blocks of $m$ binary digits, each of which can be encoded into $n$ binary digits. More specifically, an $(n, m)$-block code consists of an encoding function $$E: \mathbb{Z}_2^m \rightarrow \mathbb{Z}_2^n$$ and a decoding function $$D: \mathbb{Z}_2^n \rightarrow \mathbb{Z}_2^m.$$

A codeword is any element in the image of $E$. We also require that $E$ be one-to-one so that two information blocks will not be encoded into the same codeword. If our code is to be error-correcting, then $D$ must be onto.
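As a concrete illustration (not from the text), the even parity-check code that appends a single check bit to an $m$-bit message is an $(m + 1, m)$-block code; a minimal sketch of its $E$ and $D$:

```python
def encode(message: list[int]) -> list[int]:
    """E: Z_2^m -> Z_2^(m+1); append an even parity-check bit."""
    return message + [sum(message) % 2]

def decode(word: list[int]) -> list[int]:
    """D: Z_2^(m+1) -> Z_2^m; words of odd weight signal an error."""
    if sum(word) % 2 != 0:
        raise ValueError("parity error detected")
    return word[:-1]

codeword = encode([1, 0, 1])       # [1, 0, 1, 0]
print(decode(codeword))            # [1, 0, 1]
try:
    decode([1, 1, 1, 0])           # a single bit flip has odd weight
except ValueError as err:
    print(err)                     # parity error detected
```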

Let $\boldsymbol{x} = (x_1, \dots, x_n)$ and $\boldsymbol{y} = (y_1, \dots, y_n)$ be binary $n$-tuples. The Hamming distance or distance, $d(\boldsymbol{x}, \boldsymbol{y})$, is the number of bits in which $\boldsymbol{x}$ and $\boldsymbol{y}$ differ. The distance between two codewords is the minimum number of transmission errors required to change one codeword into the other. The minimum distance for a code, $d_{\min}$, is the minimum of all distances $d(\boldsymbol{x}, \boldsymbol{y})$, where $\boldsymbol{x}$ and $\boldsymbol{y}$ are distinct codewords. The weight, $w(\boldsymbol{x})$, of a binary codeword $\boldsymbol{x}$ is the number of 1s in $\boldsymbol{x}$. Clearly, $w(\boldsymbol{x}) = d(\boldsymbol{x}, \boldsymbol{0})$.
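A minimal sketch of these two definitions (the function names are ad hoc):

```python
def distance(x: list[int], y: list[int]) -> int:
    """Hamming distance: the number of bits in which x and y differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def weight(x: list[int]) -> int:
    """Weight: the number of 1s in x, i.e. d(x, 0)."""
    return sum(x)

print(distance([1, 1, 0, 1], [1, 0, 0, 0]))   # 2
print(weight([1, 1, 0, 1]))                   # 3 = d(x, 0)
```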

Proposition 8.11 Let $\boldsymbol{x}$, $\boldsymbol{y}$ and $\boldsymbol{z}$ be binary $n$-tuples. Then

  1. $w(\boldsymbol{x}) = d(\boldsymbol{x}, \boldsymbol{0})$;
  2. $d(\boldsymbol{x}, \boldsymbol{y}) \geqslant 0$;
  3. $d(\boldsymbol{x}, \boldsymbol{y}) = 0$ exactly when $\boldsymbol{x} = \boldsymbol{y}$;
  4. $d(\boldsymbol{x}, \boldsymbol{y}) = d(\boldsymbol{y}, \boldsymbol{x})$;
  5. $d(\boldsymbol{x}, \boldsymbol{y}) \leqslant d(\boldsymbol{x}, \boldsymbol{z}) + d(\boldsymbol{z}, \boldsymbol{y})$.

Theorem 8.13 Let $C$ be a code with $d_{\min} = 2n + 1$. Then $C$ can correct any $n$ or fewer errors. Furthermore, any $2n$ or fewer errors can be detected in $C$.
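For example, the triple-repetition code $\lbrace 000, 111 \rbrace$ has $d_{\min} = 3$ (the case $n = 1$ of the theorem), so it corrects any single error under maximum-likelihood decoding; a small sketch:

```python
CODE = [(0, 0, 0), (1, 1, 1)]   # triple-repetition code, d_min = 3

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def ml_decode(received):
    """Maximum-likelihood decoding: return the closest codeword."""
    return min(CODE, key=lambda c: hamming(c, received))

print(ml_decode((1, 0, 1)))   # (1, 1, 1): one error, corrected
print(ml_decode((0, 1, 0)))   # (0, 0, 0): one error, corrected
```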

8.2 Linear Codes

A group code is a code that is also a subgroup of $\mathbb{Z}_2^n$.

Lemma 8.17 Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be binary $n$-tuples. Then $w(\boldsymbol{x} + \boldsymbol{y}) = d(\boldsymbol{x}, \boldsymbol{y})$.

Theorem 8.18 Let $d_{\min}$ be the minimum distance for a group code $C$. Then $d_{\min}$ is the minimum of all the nonzero weights of the nonzero codewords in $C$. That is, $$d_{\min} = \min \left\lbrace w(\boldsymbol{x}): \boldsymbol{x} \neq \boldsymbol{0} \right\rbrace .$$
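Theorem 8.18 turns the pairwise search for $d_{\min}$ into a single pass over codeword weights. The sketch below checks this agreement on a small hypothetical group code in $\mathbb{Z}_2^5$:

```python
from itertools import combinations

# A hypothetical group code in Z_2^5 (one can check it is closed
# under coordinate-wise addition mod 2).
C = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

d_pairwise = min(hamming(x, y) for x, y in combinations(C, 2))
d_weights = min(sum(x) for x in C if any(x))   # minimum nonzero weight
print(d_pairwise, d_weights)                   # 3 3
```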

Linear Codes

Define the inner product of two binary $n$-tuples to be $$\boldsymbol{x} \cdot \boldsymbol{y} = x_1y_1 + \cdots + x_ny_n.$$
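Since the entries are bits, the sum is taken in $\mathbb{Z}_2$; a one-line sketch:

```python
def inner(x, y):
    """Binary inner product x . y = x1*y1 + ... + xn*yn, computed in Z_2."""
    return sum(a * b for a, b in zip(x, y)) % 2

print(inner((1, 1, 0, 1), (1, 0, 1, 1)))   # 1 + 0 + 0 + 1 = 0 in Z_2
```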

Let $\mathbb{M}_{m \times n}(\mathbb{Z}_2)$ denote the set of all $m \times n$ matrices with entries in $\mathbb{Z}_2$. Define the null space of a matrix $H \in \mathbb{M}_{m \times n}(\mathbb{Z}_2)$ to be the set of all binary $n$-tuples $\boldsymbol{x}$ such that $H\boldsymbol{x} = \boldsymbol{0}$. We denote the null space of a matrix $H$ by $\text{Null}(H)$.

Theorem 8.21 Let $H$ be in $\mathbb{M}_{m \times n}(\mathbb{Z}_2)$. Then the null space of $H$ is a group code.

A code is a linear code if it is determined by the null space of some matrix $H \in \mathbb{M}_{m \times n}(\mathbb{Z}_2)$.
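$\text{Null}(H)$ can be computed by brute force, testing $H\boldsymbol{x} = \boldsymbol{0}$ for every $\boldsymbol{x} \in \mathbb{Z}_2^n$; a sketch with a hypothetical $2 \times 4$ matrix:

```python
from itertools import product

def null_space(H):
    """All binary n-tuples x with Hx = 0 over Z_2, by brute force."""
    m, n = len(H), len(H[0])
    return [x for x in product((0, 1), repeat=n)
            if all(sum(H[i][j] * x[j] for j in range(n)) % 2 == 0
                   for i in range(m))]

# Hypothetical 2 x 4 parity-check matrix.
H = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
print(null_space(H))   # 4 codewords, closed under addition mod 2
```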

8.3 Parity-Check and Generator Matrices

Suppose that $H$ is an $m \times n$ matrix with entries in $\mathbb{Z}_2$ and $n > m$. If the last $m$ columns of the matrix form the $m \times m$ identity matrix $I_m$, then the matrix is a canonical parity-check matrix. More specifically, $H = (A | I_m)$. With each canonical parity-check matrix we can associate an $n \times (n - m)$ standard generator matrix $$G = \left( \dfrac{I_{n-m}}{A} \right).$$

An $\boldsymbol{x}$ satisfying $G\boldsymbol{x} = \boldsymbol{y}$ exists if and only if $H\boldsymbol{y} = \boldsymbol{0}$. Given a message block $\boldsymbol{x}$ to be encoded, the matrix $G$ will allow us to quickly encode it into a linear codeword $\boldsymbol{y}$.

Theorem 8.25 If $H \in \mathbb{M}_{m \times n}(\mathbb{Z}_2)$ is a canonical parity-check matrix, then $\text{Null}(H)$ consists of all $\boldsymbol{x} \in \mathbb{Z}_2^n$ whose first $n-m$ bits are arbitrary but whose last $m$ bits are determined by $H\boldsymbol{x} = \boldsymbol{0}$. Each of the last $m$ bits serves as an even parity check bit for some of the first $n-m$ bits. Hence, $H$ gives rise to an $(n, n-m)$-block code.

In light of the theorem, the first $n-m$ bits in $\boldsymbol{x}$ are called information bits and the last $m$ bits are called check bits.
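A sketch of encoding with a hypothetical canonical parity-check matrix $H = (A \mid I_3)$ with $n = 6$, $m = 3$: the first three bits of $G\boldsymbol{x}$ are the information bits, the last three are the check bits, and the result lands in $\text{Null}(H)$.

```python
# Hypothetical A, giving H = (A | I_3) with n = 6 and m = 3.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
H = [row + [1 if i == r else 0 for i in range(3)] for r, row in enumerate(A)]

# Standard generator matrix G = (I_3 over A), a 6 x 3 matrix.
G = [[1 if i == c else 0 for c in range(3)] for i in range(3)] + A

def mat_vec(M, x):
    """Matrix-vector product over Z_2."""
    return [sum(m * xi for m, xi in zip(row, x)) % 2 for row in M]

message = [1, 0, 1]                # information bits
codeword = mat_vec(G, message)     # information bits followed by check bits
print(codeword)                    # [1, 0, 1, 1, 0, 1]
print(mat_vec(H, codeword))        # [0, 0, 0]: the codeword is in Null(H)
```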

Theorem 8.26 Suppose that $G$ is an $n \times k$ standard generator matrix. Then $C = \left\lbrace \boldsymbol{y}: G\boldsymbol{x} = \boldsymbol{y} \text{ for } \boldsymbol{x} \in \mathbb{Z}_2^k \right\rbrace$ is an $(n, k)$-block code. More specifically, $C$ is a group code.

Lemma 8.27 Let $H = (A | I_m)$ be an $m \times n$ canonical parity-check matrix and $G = \left( \dfrac{I_{n-m}}{A} \right)$ be the corresponding $n \times (n - m)$ standard generator matrix. Then $HG = \boldsymbol{0}$.

Theorem 8.28 Let $H = (A | I_m)$ be an $m \times n$ canonical parity-check matrix and let $G = \left( \dfrac{I_{n-m}}{A} \right)$ be the $n \times (n - m)$ standard generator matrix associated with $H$. Let $C$ be the code generated by $G$. Then $\boldsymbol{y}$ is in $C$ if and only if $H\boldsymbol{y} = \boldsymbol{0}$. In particular, $C$ is a linear code with canonical parity-check matrix $H$.
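Lemma 8.27 follows from block multiplication: $HG = A I_{n-m} + I_m A = A + A = \boldsymbol{0}$ over $\mathbb{Z}_2$. A self-contained numerical check with the hypothetical $H$ and $G$ from the previous sketch:

```python
# The hypothetical H = (A | I_3) and G = (I_3 over A) from the sketch above.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
H = [row + [1 if i == r else 0 for i in range(3)] for r, row in enumerate(A)]
G = [[1 if i == c else 0 for c in range(3)] for i in range(3)] + A

# HG over Z_2 is the 3 x 3 zero matrix (Lemma 8.27).
HG = [[sum(H[i][k] * G[k][j] for k in range(6)) % 2 for j in range(3)]
      for i in range(3)]
print(HG)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```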

Proposition 8.30 Let $\boldsymbol{e}_i$ be the binary $n$-tuple with a 1 in the $i$th coordinate and 0's elsewhere and suppose that $H \in \mathbb{M}_{m \times n}(\mathbb{Z}_2)$. Then $H\boldsymbol{e}_i$ is the $i$th column of the matrix $H$.

Theorem 8.31 Let $H$ be an $m \times n$ binary matrix. Then the null space of $H$ is a single error-detecting code if and only if no column of $H$ consists entirely of zeros.

Theorem 8.34 Let $H$ be a binary matrix. The null space of $H$ is a single error-correcting code if and only if $H$ does not contain any zero columns and no two columns of $H$ are identical.
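Theorems 8.31 and 8.34 reduce to a purely column-wise test on $H$; a sketch with a hypothetical matrix:

```python
def columns(H):
    return [tuple(row[j] for row in H) for j in range(len(H[0]))]

def detects_single_errors(H):
    """Theorem 8.31: no column of H consists entirely of zeros."""
    return all(any(col) for col in columns(H))

def corrects_single_errors(H):
    """Theorem 8.34: no zero column and no two identical columns."""
    cols = columns(H)
    return all(any(col) for col in cols) and len(set(cols)) == len(cols)

# Hypothetical H with six distinct nonzero columns.
H = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]
print(detects_single_errors(H), corrects_single_errors(H))   # True True
```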

8.4 Efficient Decoding

If $H$ is an $m \times n$ matrix and $\boldsymbol{x} \in \mathbb{Z}_2^n$, then we say that the syndrome of $\boldsymbol{x}$ is $H\boldsymbol{x}$.

Proposition 8.36 Let the $m \times n$ binary matrix $H$ determine a linear code and let $\boldsymbol{x}$ be the received $n$-tuple. Write $\boldsymbol{x}$ as $\boldsymbol{x} = \boldsymbol{c} + \boldsymbol{e}$, where $\boldsymbol{c}$ is the transmitted codeword and $\boldsymbol{e}$ is the transmission error. Then the syndrome $H\boldsymbol{x}$ of the received codeword is also the syndrome of the error $\boldsymbol{e}$.

Theorem 8.37 Let $H \in \mathbb{M}_{m \times n}(\mathbb{Z}_2)$ and suppose that the linear code corresponding to $H$ is single error-correcting. Let $\boldsymbol{r}$ be a received $n$-tuple that was transmitted with at most one error. If the syndrome of $\boldsymbol{r}$ is $\boldsymbol{0}$, then no error has occurred; otherwise, if the syndrome of $\boldsymbol{r}$ is equal to some column of $H$, say the $i$th column, then the error has occurred in the $i$th bit.
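A sketch of this single error-correcting procedure, reusing the hypothetical $H$ from the column test above (all names are illustrative):

```python
def mat_vec(M, x):
    """Matrix-vector product over Z_2 (the syndrome when M is H)."""
    return [sum(m * xi for m, xi in zip(row, x)) % 2 for row in M]

def correct_single_error(H, r):
    """Theorem 8.37: syndrome 0 means no error; a syndrome equal to
    column i of H means the error is in bit i, so flip it."""
    s = mat_vec(H, r)
    if not any(s):
        return r
    for i in range(len(r)):
        if [row[i] for row in H] == s:
            return r[:i] + [(r[i] + 1) % 2] + r[i + 1:]
    raise ValueError("more than one error must have occurred")

# Hypothetical single error-correcting code (distinct nonzero columns).
H = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]
sent = [1, 0, 1, 1, 0, 1]                    # a codeword: H(sent) = 0
received = [1, 0, 0, 1, 0, 1]                # third bit flipped in transit
print(correct_single_error(H, received))     # [1, 0, 1, 1, 0, 1]
```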

Coset Decoding

Coset or standard decoding uses the cosets of $C$ in $\mathbb{Z}_2^n$ to implement maximum-likelihood decoding. Suppose that $C$ is an $(n, m)$-linear code. A coset of $C$ in $\mathbb{Z}_2^n$ is written in the form $\boldsymbol{x} + C$, where $\boldsymbol{x} \in \mathbb{Z}_2^n$. By Lagrange's Theorem, there are $2^{n-m}$ distinct cosets of $C$ in $\mathbb{Z}_2^n$.

Suppose that $\boldsymbol{x}$ was the original codeword sent and that $\boldsymbol{r}$ is the $n$-tuple received. If $\boldsymbol{e}$ is the transmission error, then $\boldsymbol{r} = \boldsymbol{e} + \boldsymbol{x}$. However, this is exactly the statement that $\boldsymbol{r}$ is an element in the coset $\boldsymbol{e} + C$. In maximum-likelihood decoding we expect the error $\boldsymbol{e}$ to be as small as possible; that is, $\boldsymbol{e}$ will have the least weight. An $n$-tuple of least weight in a coset is called a coset leader. Once we have determined a coset leader for each coset, the decoding process becomes a task of calculating $\boldsymbol{r} + \boldsymbol{e}$ to obtain $\boldsymbol{x}$.

We can make a table that designates a coset leader corresponding to each syndrome. Such a list is called a decoding table.

Proposition 8.43 Let $C$ be an $(n, k)$-linear code given by the matrix $H$ and suppose that $\boldsymbol{x}$ and $\boldsymbol{y}$ are in $\mathbb{Z}_2^n$. Then $\boldsymbol{x}$ and $\boldsymbol{y}$ are in the same coset of $C$ if and only if $H\boldsymbol{x} = H\boldsymbol{y}$. That is, two $n$-tuples are in the same coset if and only if their syndromes are the same.
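Putting Proposition 8.43 to work: group all of $\mathbb{Z}_2^n$ by syndrome, keep a least-weight representative of each coset as its leader, and decode $\boldsymbol{r}$ as $\boldsymbol{r} + \boldsymbol{e}$. A brute-force sketch with the same hypothetical $H$ as in the earlier sketches:

```python
from itertools import product

def mat_vec(M, x):
    """Syndrome of x: the matrix-vector product Mx over Z_2."""
    return tuple(sum(m * xi for m, xi in zip(row, x)) % 2 for row in M)

def decoding_table(H):
    """Map each syndrome to a coset leader, i.e. an n-tuple of least
    weight in the corresponding coset (brute force over Z_2^n)."""
    n = len(H[0])
    table = {}
    for x in product((0, 1), repeat=n):
        s = mat_vec(H, x)
        if s not in table or sum(x) < sum(table[s]):
            table[s] = x
    return table

# Hypothetical parity-check matrix used in the earlier sketches.
H = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]
table = decoding_table(H)

r = (1, 0, 0, 1, 0, 1)                 # received 6-tuple
e = table[mat_vec(H, r)]               # coset leader sharing r's syndrome
x = tuple((ri + ei) % 2 for ri, ei in zip(r, e))
print(e, x)   # (0, 0, 1, 0, 0, 0) (1, 0, 1, 1, 0, 1)
```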
