Problem Solving IV

12 Basic Concepts of Approximation Algorithms

2019-05-20 21:11 CST
2019-06-06 23:55 CST
CC BY-NC 4.0

Section 4.2 of Chapter 4 (Approximation Algorithms), Algorithmics for Hard Problems, Juraj Hromkovič, Springer.

4.2.1 Concept of Approximation Algorithms

Definition 4.2.1.1 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ be an optimization problem, and let $A$ be a consistent algorithm for $U$. For every $x \in L_I$, the relative error $\varepsilon_A(x)$ of $A$ on $x$ is defined as $$\varepsilon_A(x) = \dfrac{\vert cost(A(x)) - Opt_U(x) \vert}{Opt_U(x)}.$$ For any $n \in \mathbb{N}$, we define the relative error of $A$ as $$\varepsilon_A(n) = \max \{ \varepsilon_A(x) | x \in L_I \cap (\Sigma_I)^n \}.$$ For every $x \in L_I$, the approximation ratio $R_A(x)$ of $A$ on $x$ is defined as $$R_A(x) = \max \left\lbrace \dfrac{cost(A(x))}{Opt_U(x)}, \dfrac{Opt_U(x)}{cost(A(x))} \right\rbrace.$$ For any $n \in \mathbb{N}$, we define the approximation ratio of $A$ as $$R_A(n) = \max \{ R_A(x) | x \in L_I \cap (\Sigma_I)^n \}.$$
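For example, if $U$ is a minimization problem and, on some input $x$, an algorithm $A$ returns a solution with $cost(A(x)) = 120$ while $Opt_U(x) = 100$, then $\varepsilon_A(x) = \vert 120 - 100 \vert / 100 = 0.2$ and $R_A(x) = \max \{ 120/100, 100/120 \} = 1.2$.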

For any real $\delta > 1$, we say that $A$ is a $\delta$-approximation algorithm for $U$ if $R_A(x) \leqslant \delta$ for every $x \in L_I$.
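As a concrete instance of a $\delta$-approximation algorithm, here is a minimal Python sketch of the classical maximal-matching $2$-approximation for minimum vertex cover (a standard textbook example, not taken from this section; the function name and input format are illustrative). Every vertex cover must contain at least one endpoint of each edge picked by the loop, and the picked edges are pairwise disjoint, so the returned cover has size at most $2 \cdot Opt_U(x)$, i.e., $R_A(x) \leqslant 2$.

```python
def vertex_cover_2_approx(edges):
    """Greedy maximal-matching 2-approximation for minimum vertex cover.

    The picked edges form a matching; any cover needs one endpoint per
    matched edge, hence |cover| <= 2 * Opt.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

# Example: a path on 4 vertices; an optimal cover is {1, 2} of size 2.
print(vertex_cover_2_approx([(0, 1), (1, 2), (2, 3)]))  # {0, 1, 2, 3}
```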

For any function $f: \mathbb{N} \rightarrow \mathbb{R}^+$, we say that $A$ is an $f(n)$-approximation algorithm for $U$ if $R_A(n) \leqslant f(n)$ for every $n \in \mathbb{N}$.

We observe that $$R_A(x) = \dfrac{cost(A(x))}{Opt_U(x)} = 1 + \varepsilon_A(x),$$ if $U$ is a minimization problem, and that $$R_A(x) = \dfrac{Opt_U(x)}{cost(A(x))},$$ if $U$ is a maximization problem.
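Consequently, for a maximization problem we have $\varepsilon_A(x) = 1 - cost(A(x))/Opt_U(x)$, so the two measures are related by $$R_A(x) = \dfrac{1}{1 - \varepsilon_A(x)}.$$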

Definition 4.2.1.6 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ be an optimization problem. An algorithm $A$ is called a polynomial-time approximation scheme (PTAS) for $U$ if, for every input pair $(x, \varepsilon) \in L_I \times \mathbb{R}^+$, $A$ computes a feasible solution $A(x)$ with relative error at most $\varepsilon$, and $Time_A(x, \varepsilon^{-1})$ can be bounded by a function that is polynomial in $\vert x \vert$. If $Time_A(x, \varepsilon^{-1})$ can be bounded by a function that is polynomial in both $\vert x \vert$ and $\varepsilon^{-1}$, then we say that $A$ is a fully polynomial-time approximation scheme (FPTAS) for $U$.

In the typical case, one has to pay for a decrease in the relative error with an increase in time complexity. FPTASs are especially convenient because their time complexity grows only polynomially with $\varepsilon^{-1}$.
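To make the notion concrete, below is a minimal Python sketch of the classical profit-scaling FPTAS for the 0/1 knapsack problem (a standard example, not taken from this section; the function name and the dictionary-based dynamic program are illustrative choices, assuming positive profits and weights). Rounding profits down to multiples of $K = \varepsilon \cdot p_{max} / n$ loses at most $K$ per item, hence at most $n K = \varepsilon \cdot p_{max} \leqslant \varepsilon \cdot Opt_U(x)$ in total, while the dynamic program over scaled profits runs in time $O(n^3 / \varepsilon)$, polynomial in both $n$ and $\varepsilon^{-1}$.

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Profit-scaling FPTAS sketch for 0/1 knapsack.

    Returns an index set with profit >= (1 - eps) * Opt in time
    O(n^3 / eps); assumes positive profits and weights.
    """
    n = len(profits)
    K = eps * max(profits) / n               # scaling factor
    scaled = [int(p // K) for p in profits]  # rounded-down profits

    # dp[q] = (minimum weight achieving scaled profit q, chosen items)
    dp = {0: (0, ())}
    for i in range(n):
        # iterate over a snapshot so each item is used at most once
        for q, (w, items) in list(dp.items()):
            nq, nw = q + scaled[i], w + weights[i]
            if nw <= capacity and (nq not in dp or nw < dp[nq][0]):
                dp[nq] = (nw, items + (i,))

    best_q = max(dp)                         # largest reachable scaled profit
    return dp[best_q][1]

# Example with eps = 0.1: returns (1, 2), the optimal set of profit 220.
print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1))
```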

4.2.2 Classification of Optimization Problems

  • NPO(I): Contains every optimization problem from NPO for which there exists an FPTAS.
  • NPO(II): Contains every optimization problem from NPO that has a PTAS.
  • NPO(III): Contains every optimization problem $U \in \text{NPO}$ such that
    1. there is a polynomial-time $\delta$-approximation algorithm for some $\delta > 1$, and
    2. there is no polynomial-time $d$-approximation algorithm for $U$ for some $d$ with $1 < d < \delta$, i.e., there is no PTAS for $U$.
  • NPO(IV): Contains every optimization problem $U \in \text{NPO}$ such that
    1. there is a polynomial-time $f(n)$-approximation algorithm for $U$ for some $f: \mathbb{N} \rightarrow \mathbb{R}^+$, where $f$ is bounded by a polylogarithmic function, and
    2. under some reasonable assumption like $\text{P} \neq \text{NP}$, there does not exist any polynomial-time $\delta$-approximation algorithm for $U$ for any $\delta \in \mathbb{R}^{>1}$.
  • NPO(V): Contains every $U \in \text{NPO}$ such that if there exists a polynomial-time $f(n)$-approximation algorithm for $U$, then (under some reasonable assumption like $\text{P} \neq \text{NP}$) $f(n)$ is not bounded by any polylogarithmic function.
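Standard representatives of these classes (well-known facts, not listed in this section): the knapsack problem lies in NPO(I); makespan scheduling on identical machines lies in NPO(II), since it has a PTAS but, being strongly NP-hard, no FPTAS unless $\text{P} = \text{NP}$; the metric TSP lies in NPO(III), with Christofides' $1.5$-approximation but no PTAS unless $\text{P} = \text{NP}$; the minimum set cover problem lies in NPO(IV), with the greedy $(\ln n)$-approximation but no constant-factor approximation under standard assumptions; and the general TSP lies in NPO(V).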

4.2.3 Stability of Approximation

Definition 4.2.3.1 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ and $\bar{U} = (\Sigma_I, \Sigma_O, L, L, \mathcal{M}, cost, goal)$ be two optimization problems with $L_I \subset L$. A distance function for $\bar{U}$ according to $L_I$ is any function $h_L: L \rightarrow \mathbb{R}^{\geqslant 0}$ satisfying the properties

  • $h_L(x) = 0$ for every $x \in L_I$, and
  • $h_L$ is polynomial-time computable.

Let $h$ be a distance function for $\bar{U}$ according to $L_I$. We define, for any $r \in \mathbb{R}^+$, $$Ball_{r, h}(L_I) = \{ w \in L | h(w) \leqslant r \}.$$
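For instance, take $U$ to be the metric TSP and $\bar{U}$ the general TSP with arbitrary positive edge costs, and let $$h(x) = \max \left\lbrace 0, \max \left\lbrace \dfrac{c(\lbrace u, v \rbrace)}{c(\lbrace u, p \rbrace) + c(\lbrace p, v \rbrace)} - 1 \,\middle|\, u, v, p \text{ pairwise distinct vertices of } x \right\rbrace \right\rbrace.$$ Then $h(x) = 0$ exactly for the instances satisfying the triangle inequality, and $Ball_{r, h}(L_I)$ consists of the instances satisfying the relaxed triangle inequality $c(\lbrace u, v \rbrace) \leqslant (1 + r) \cdot (c(\lbrace u, p \rbrace) + c(\lbrace p, v \rbrace))$.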

Let $A$ be a consistent algorithm for $\bar{U}$, and let $A$ be an $\varepsilon$-approximation algorithm for $U$ for some $\varepsilon \in \mathbb{R}^{>1}$. Let $p$ be a positive real. We say that $A$ is $p$-stable according to $h$ if, for every real $0 < r \leqslant p$, there exists a $\delta_{r, \varepsilon} \in \mathbb{R}^{>1}$ such that $A$ is a $\delta_{r, \varepsilon}$-approximation algorithm for $U_r = (\Sigma_I, \Sigma_O, L, Ball_{r, h}(L_I), \mathcal{M}, cost, goal)$.

$A$ is stable according to $h$ if $A$ is $p$-stable according to $h$ for every $p \in \mathbb{R}^+$. We say that $A$ is unstable according to $h$ if $A$ is not $p$-stable according to $h$ for any $p \in \mathbb{R}^+$.
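For the TSP distance function above, $U_r$ is the TSP restricted to instances obeying the $(1 + r)$-relaxed triangle inequality, and stability asks whether an approximation algorithm for the metric TSP keeps some approximation ratio $\delta_{r, \varepsilon}$ on every such ball, where $\delta_{r, \varepsilon}$ may depend on $r$ but not on the input size.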

For every positive integer $r$ and every function $f_r: \mathbb{N} \rightarrow \mathbb{R}^{>1}$, we say that $A$ is $(r, f_r(n))$-quasistable according to $h$ if $A$ is an $f_r(n)$-approximation algorithm for $U_r = (\Sigma_I, \Sigma_O, L, Ball_{r, h}(L_I), \mathcal{M}, cost, goal)$.

Definition 4.2.3.6 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ and $\bar{U} = (\Sigma_I, \Sigma_O, L, L, \mathcal{M}, cost, goal)$ be two optimization problems with $L_I \subset L$. Let $h$ be a distance function for $\bar{U}$ according to $L_I$, and let $U_r = (\Sigma_I, \Sigma_O, L, Ball_{r, h}(L_I), \mathcal{M}, cost, goal)$ for every $r \in \mathbb{R}^+$. Let $A = \{A_{\varepsilon}\}_{\varepsilon > 0}$ be a PTAS for $U$.

If, for every $r > 0$ and every $\varepsilon > 0$, $A_\varepsilon$ is a $\delta_{r, \varepsilon}$-approximation algorithm for $U_r$, we say that the PTAS $A$ is stable according to $h$.

If $\delta_{r, \varepsilon} \leqslant f(\varepsilon) \cdot g(r)$, where

  • $f$ and $g$ are some functions from $\mathbb{R}^{\geqslant 0}$ to $\mathbb{R}^+$, and
  • $\lim_{\varepsilon \rightarrow 0} f(\varepsilon) = 0$,

then we say that the PTAS $A$ is superstable according to $h$.

4.2.4 Dual Approximation Algorithms

Definition 4.2.4.1 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ be an optimization problem. A constraint distance function for $U$ is any function $h: L_I \times \Sigma_O^* \rightarrow \mathbb{R}^{\geqslant 0}$ such that

  • $h(x, S) = 0$ for every $S \in \mathcal{M}(x)$,
  • $h(x, S) > 0$ for every $S \notin \mathcal{M}(x)$, and
  • $h$ is polynomial-time computable.

For every $\varepsilon \in \mathbb{R}^+$, and every $x \in L_I$, $\mathcal{M}_\varepsilon^h(x) = \{ S \in \Sigma_O^* | h(x, S) \leqslant \varepsilon \}$ is the $\varepsilon$-ball of $\mathcal{M}(x)$ according to $h$.
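For instance, for the knapsack problem with weight limit $b$, a natural constraint distance function is $$h(x, S) = \max \left\lbrace 0, \dfrac{\sum_{i \in S} w_i - b}{b} \right\rbrace,$$ so that $\mathcal{M}_\varepsilon^h(x)$ consists of all item sets that overflow the weight limit by at most a factor of $1 + \varepsilon$.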

Definition 4.2.4.2 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ be an optimization problem, and let $h$ be a constraint distance function for $U$. An optimization algorithm $A$ for $U$ is called an $h$-dual $\varepsilon$-approximation algorithm for $U$, if for every $x \in L_I$,

  • $A(x) \in \mathcal{M}_\varepsilon^h(x)$, and
  • $cost(A(x)) \geqslant Opt_U(x)$ if $goal = \texttt{maximum}$, and $cost(A(x)) \leqslant Opt_U(x)$ if $goal = \texttt{minimum}$.
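The classical use of this concept is due to Hochbaum and Shmoys: for bin packing, an $h$-dual approximation algorithm may fill each bin up to capacity $1 + \varepsilon$ instead of $1$, but never uses more bins than an optimal packing; combined with binary search over the target makespan, such dual algorithms yield a PTAS for makespan scheduling.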

Definition 4.2.4.3 Let $U = (\Sigma_I, \Sigma_O, L, L_I, \mathcal{M}, cost, goal)$ be an optimization problem, and let $h$ be a constraint distance function for $U$. An optimization algorithm $A$ for $U$ is called an $h$-dual polynomial-time approximation scheme ($h$-dual PTAS for $U$), if

  • for every input $(x, \varepsilon) \in L_I \times \mathbb{R}^+$, $A(x, \varepsilon) \in \mathcal{M}_\varepsilon^h(x)$,
  • $cost(A(x, \varepsilon)) \geqslant Opt_U(x)$ if $goal = \texttt{maximum}$, and $cost(A(x, \varepsilon)) \leqslant Opt_U(x)$ if $goal = \texttt{minimum}$, and
  • $Time_A(x, \varepsilon^{-1})$ is bounded by a function that is polynomial in $\vert x \vert$.

If $Time_A(x, \varepsilon^{-1})$ can be bounded by a function that is polynomial in both $\vert x \vert$ and $\varepsilon^{-1}$, then we say that $A$ is an $h$-dual fully polynomial-time approximation scheme ($h$-dual FPTAS) for $U$.
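As an illustration of Definition 4.2.4.3, here is a minimal Python sketch of a weight-scaling $h$-dual FPTAS for the 0/1 knapsack problem, with $h$ the relative-overflow distance function from the knapsack example above (the construction mirrors the profit-scaling FPTAS sketched earlier and is an illustrative sketch, not an algorithm from this section; it assumes positive profits, weights, and capacity). Rounding weights down to multiples of $K = \varepsilon b / n$ can only enlarge the set of solutions the dynamic program considers, so every optimal solution survives the rounding and the returned profit is at least $Opt_U(x)$; unrounding inflates the total weight by less than $n K = \varepsilon b$, so the constraint is violated by at most $\varepsilon$.

```python
def knapsack_dual_fptas(profits, weights, capacity, eps):
    """Weight-scaling h-dual FPTAS sketch for 0/1 knapsack, where
    h(x, S) = max(0, (weight(S) - capacity) / capacity).

    Returns an item set with profit >= Opt and total weight at most
    (1 + eps) * capacity, in time O(n^2 / eps); assumes positive
    profits, weights, and capacity.
    """
    n = len(weights)
    K = eps * capacity / n                   # scaling factor
    scaled = [int(w // K) for w in weights]  # rounded-down weights
    cap = int(capacity // K)                 # scaled capacity

    # dp[c] = (maximum profit with scaled weight exactly c, chosen items)
    dp = {0: (0, ())}
    for i in range(n):
        # iterate over a snapshot so each item is used at most once
        for c, (p, items) in list(dp.items()):
            nc = c + scaled[i]
            if nc <= cap and (nc not in dp or p + profits[i] > dp[nc][0]):
                dp[nc] = (p + profits[i], items + (i,))

    # every optimal solution fits within the scaled capacity, so the
    # best profit found here is at least Opt
    return max(dp.values(), key=lambda state: state[0])[1]

# Example with eps = 0.1: returns (1, 2), with weight 50 <= 1.1 * 50.
print(knapsack_dual_fptas([60, 100, 120], [10, 20, 30], 50, 0.1))
```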