6. Basic Lagrangian Duality and Saddle Point

Basic Lagrangian duality and saddle point optimality conditions

Primal problem $\longleftrightarrow$ Lagrangian dual problem

6.1 Lagrangian dual problem

  • Primal problem:
    $$
    \text{(P)} \quad
    \begin{array}{ll}
    \min & f(\mathbf x) \\
    \text{s.t.} & g_i(\mathbf x) = 0, \quad i = 1, \dots, m, \\
    & h_j(\mathbf x) \leq 0, \quad j = 1, \dots, p, \\
    & \mathbf x \in X,
    \end{array}
    $$
    where $X \subseteq \mathbb R^n$.
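
    For concreteness, a minimal running example (an illustration added here, not part of the original notes): take $n = 1$, $X = \mathbb R$, no equality constraints, $f(x) = x^2$, and one inequality constraint $h(x) = 1 - x \leq 0$. The feasible set is $\{x : x \geq 1\}$, so $x^* = 1$ and $f(x^*) = 1$. This example is reused below.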

6.1.1 Lagrangian dual function and dual problem

  • Lagrangian dual function:
    $$
    \theta(\lambda, \mu) = \inf \left\{ f(\mathbf x) + \sum\limits_{i=1}^m \lambda_i g_i(\mathbf x) + \sum\limits_{j=1}^p \mu_j h_j(\mathbf x) \ : \ \mathbf x \in X \right\}
    $$
    (an unconstrained problem in $\mathbf x$), with $\mu \geq 0$.

    Then, for any feasible $\mathbf x^*$ (so that $g_i(\mathbf x^*) = 0$ and $h_j(\mathbf x^*) \leq 0$) and any $\mu \geq 0$, we have
    $$
    \theta(\lambda, \mu) \leq f(\mathbf x^*) + \sum\limits_{i=1}^m \lambda_i g_i(\mathbf x^*) + \sum\limits_{j=1}^p \mu_j h_j(\mathbf x^*) = f(\mathbf x^*) + \sum\limits_{j=1}^p \mu_j h_j(\mathbf x^*) \leq f(\mathbf x^*),
    $$
    i.e., $\theta(\lambda, \mu)$ is a lower bound on $f(\mathbf x^*)$. The dual problem seeks the greatest such lower bound.
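
    Worked on the running example ($f(x) = x^2$, $h(x) = 1 - x$, $X = \mathbb R$; again an added illustration): the Lagrangian $x^2 + \mu(1 - x)$ is minimized where its derivative $2x - \mu$ vanishes, i.e., at $x = \mu/2$, so
    $$
    \theta(\mu) = \inf_{x \in \mathbb R} \left\{ x^2 + \mu(1 - x) \right\} = \mu - \frac{\mu^2}{4}.
    $$
    Every value $\theta(\mu)$ with $\mu \geq 0$ is a lower bound on $f(x^*) = 1$.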

  • Lagrangian Dual Problem:
    $$
    (\text{D}) \quad
    \begin{array}{ll}
    \max & \theta(\lambda, \mu) = \inf \left\{ f(\mathbf x) + \lambda^T g(\mathbf x) + \mu^T h(\mathbf x) \ : \ \mathbf x \in X \right\} \\
    \text{s.t.} & \lambda \in \mathbb R^m, \quad \mu \in \mathbb R_+^p.
    \end{array}
    $$
    $\theta(\lambda, \mu)$ is a concave function (an infimum of functions affine in $(\lambda, \mu)$), whether or not (P) is convex.
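
    On the running example: maximizing $\theta(\mu) = \mu - \mu^2/4$ over $\mu \geq 0$ gives $\theta'(\mu) = 1 - \mu/2 = 0$, so $\mu^* = 2$ and $\theta(\mu^*) = 1 = f(x^*)$. Note $\theta''(\mu) = -1/2 < 0$, confirming concavity.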

For convex programming, if one of (P) and (D) has an optimal solution, then (under suitable assumptions) so does the other.

For nonconvex programming, one of (P) and (D) may have an optimal solution while the other does not.

With convexity plus suitable constraint qualifications, (P) and (D) have equal optimal values.

6.1.2 Weak Duality

Optimal primal (minimization) objective value ≥ Optimal dual (maximization) objective value
$$
\min \ \{ f(\mathbf x) : \mathbf x \in S \} \geq \max \ \{ \theta(\lambda, \mu) : \lambda \in \mathbb R^m, \ \mu \geq 0 \},
$$
where $S = \{ \mathbf x \in X : g(\mathbf x) = 0, \ h(\mathbf x) \leq 0 \}$ is the feasible region of (P).
If $f(\mathbf x^*) = \theta(\lambda^*, \mu^*)$ for a feasible $\mathbf x^*$ and some $(\lambda^*, \mu^*)$ with $\mu^* \geq 0$, then $\mathbf x^*$ and $(\lambda^*, \mu^*)$ are optimal solutions of (P) and (D), respectively.

Duality Gap: $\min \ \{ f(\mathbf x) : \mathbf x \in S \} - \max \ \{ \theta(\lambda, \mu) : \lambda \in \mathbb R^m, \ \mu \geq 0 \}$
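
A small example with a strictly positive gap (an added illustration; the gap stems from the nonconvex ground set): take $X = \{0, 1\}$, $f(x) = -x$, $h(x) = x - \tfrac12$. The only feasible point is $x = 0$, so the primal optimal value is $0$, while
$$
\theta(\mu) = \min_{x \in \{0, 1\}} \left\{ -x + \mu \left( x - \tfrac12 \right) \right\} = \min \left\{ -\tfrac{\mu}{2}, \ \tfrac{\mu}{2} - 1 \right\}
$$
is maximized at $\mu = 1$ with value $-\tfrac12$. The duality gap is $0 - (-\tfrac12) = \tfrac12 > 0$.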

6.1.3 Strong Duality

Let $X$ be a convex set, $f, h$ convex functions, and $g$ affine. Then, under a suitable constraint qualification (e.g., Slater's condition),
$$
\inf \ \{ f(\mathbf x) : g(\mathbf x) = 0, \ h(\mathbf x) \leq 0, \ \mathbf x \in X \} = \sup \ \{ \theta(\lambda, \mu) : \mu \geq 0, \ \lambda \in \mathbb R^m \}.
$$
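
A quick numerical sanity check of strong duality on the running example (a sketch assuming NumPy is available; it only re-evaluates the closed form $\theta(\mu) = \mu - \mu^2/4$ derived above):

```python
import numpy as np

# Running example (added illustration): min x^2 s.t. 1 - x <= 0, X = R.
# For fixed mu >= 0 the Lagrangian x^2 + mu*(1 - x) is minimized at x = mu/2.

def theta(mu: float) -> float:
    x = mu / 2.0                      # unconstrained minimizer of the Lagrangian
    return x**2 + mu * (1.0 - x)      # equals mu - mu^2/4

mus = np.linspace(0.0, 4.0, 401)      # grid over the dual feasible set mu >= 0
dual_opt = max(theta(m) for m in mus) # maximum attained at the grid point mu = 2
primal_opt = 1.0                      # f(x*) = 1 at x* = 1

print(f"sup theta ~ {dual_opt:.4f}, primal optimum = {primal_opt}")  # ~1.0 vs 1.0: zero gap
```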

6.2 Saddle Point Optimality & KKT

$(\mathbf x^*, \lambda^*, \mu^*)$ is a saddle point of the Lagrangian function $L(\mathbf x, \lambda, \mu) = f(\mathbf x) + \lambda^T g(\mathbf x) + \mu^T h(\mathbf x)$ if $\mathbf x^* \in X$, $\mu^* \geq 0$, and
$$
L(\mathbf x^*, \lambda, \mu) \leq L(\mathbf x^*, \lambda^*, \mu^*) \leq L(\mathbf x, \lambda^*, \mu^*)
$$
for all $\mathbf x \in X$, $\lambda \in \mathbb R^m$, and $\mu \geq 0$.
Remark

  1. For fixed $\mathbf x^*$, $(\lambda^*, \mu^*)$ maximizes $L(\mathbf x^*, \lambda, \mu)$ over all $(\lambda, \mu)$ with $\mu \geq 0$.
  2. For fixed $(\lambda^*, \mu^*)$, $\mathbf x^*$ minimizes $L(\mathbf x, \lambda^*, \mu^*)$ over all $\mathbf x \in X$.
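
Checking the definition on the running example (no equality constraints, so $\lambda$ is absent; an added illustration): with $x^* = 1$, $\mu^* = 2$, and $L(x, \mu) = x^2 + \mu(1 - x)$,
$$
L(x^*, \mu) = 1 + \mu \cdot 0 = 1 = L(x^*, \mu^*) \leq (x - 1)^2 + 1 = L(x, \mu^*)
$$
for all $x \in \mathbb R$ and $\mu \geq 0$, so $(x^*, \mu^*)$ is a saddle point.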

If $(\mathbf x^*, \lambda^*, \mu^*)$ is a saddle point, then $\mathbf x^*$ is an optimal solution of (P), $(\lambda^*, \mu^*)$ is an optimal solution of (D), and $(\mathbf x^*, \lambda^*, \mu^*)$ satisfies the KKT conditions.

i.e., saddle point $(\mathbf x^*, \lambda^*, \mu^*)$ $\Rightarrow$ KKT conditions, but the converse is not true in general.

Convex Programming (CP):

KKT point of CP $\Rightarrow$ a saddle point.

If $(\mathbf x^*, \lambda^*, \mu^*)$ is a KKT point for a CP, then $\mathbf x^*$ and $(\lambda^*, \mu^*)$ are optimal solutions of (P) and (D), respectively.
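
As a final check on the running example (an added illustration), the KKT conditions hold at $(x^*, \mu^*) = (1, 2)$: stationarity, $f'(x^*) + \mu^* h'(x^*) = 2 - 2 = 0$; primal feasibility, $h(1) = 0 \leq 0$; dual feasibility, $\mu^* = 2 \geq 0$; complementary slackness, $\mu^* h(x^*) = 2 \cdot 0 = 0$. Since this problem is convex, the KKT point is a saddle point, and $x^* = 1$, $\mu^* = 2$ are optimal for (P) and (D).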