The conjugate direction methods can be viewed as intermediate between the method of steepest descent and Newton's method. They have the following properties:
- They solve quadratics of $n$ variables in $n$ steps.
- The usual implementation, the conjugate gradient algorithm, requires no Hessian matrix evaluations.
- They require no matrix inversion and no storage of an $n \times n$ matrix.
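As a preview of the first property, here is a minimal sketch (the function name, the specific $\b{Q}$ and $\b{b}$, and the Fletcher–Reeves-style update are illustrative choices, not taken from the text) of conjugate gradient minimizing the quadratic $f(\b{x}) = \frac{1}{2}\transpose{\b{x}}\b{Q}\b{x} - \transpose{\b{b}}\b{x}$, which in exact arithmetic terminates in at most $n$ steps:

```python
import numpy as np

# Illustrative sketch: conjugate gradient for f(x) = 0.5 x^T Q x - b^T x,
# with Q symmetric positive definite. No Hessian evaluations beyond
# matrix-vector products with Q, no matrix inversion, no n x n storage
# beyond Q itself.
def conjugate_gradient(Q, b, x0):
    x = x0
    g = Q @ x - b                       # gradient of the quadratic at x
    d = -g                              # first direction: steepest descent
    for _ in range(len(b)):             # at most n iterations
        alpha = (g @ g) / (d @ Q @ d)   # exact line search along d
        x = x + alpha * d
        g_new = Q @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves-style coefficient
        d = -g_new + beta * d           # next Q-conjugate direction
        g = g_new
    return x

# Made-up 2x2 example: convergence in n = 2 steps
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(Q, b, np.zeros(2))
print(np.allclose(Q @ x, b))            # True
```

Note that the only work per iteration is a handful of matrix-vector products and inner products, which is what makes the method attractive for large $n$.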
In this chapter, we discuss two conjugate direction methods: the conjugate direction algorithm and the conjugate gradient algorithm.
Let $\b{Q}$ be a real symmetric $n \times n$ matrix. The directions $\b{d}^{(0)}, \b{d}^{(1)}, \b{d}^{(2)}, \ldots, \b{d}^{(m)}$ are $Q$-conjugate if, for all $i \neq j$, we have $\transpose{\b{d}^{(i)}} \b{Q} \b{d}^{(j)} = 0$.
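A quick numerical illustration of the definition (the particular $\b{Q}$ below is made up): for a symmetric $\b{Q}$, the eigenvectors form a $Q$-conjugate set, since $\transpose{\b{d}^{(i)}} \b{Q} \b{d}^{(j)} = \lambda_j \transpose{\b{d}^{(i)}} \b{d}^{(j)} = 0$ for $i \neq j$.

```python
import numpy as np

# Check that the eigenvectors of a symmetric matrix Q are Q-conjugate,
# i.e. d_i^T Q d_j = 0 for all i != j.
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # made-up symmetric example
_, D = np.linalg.eigh(Q)              # columns of D are eigenvectors of Q
G = D.T @ Q @ D                       # "Gram matrix" in the Q inner product
off_diag = G - np.diag(np.diag(G))    # off-diagonal entries d_i^T Q d_j
print(np.allclose(off_diag, 0))       # True
```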
Let $\b{Q}$ be a symmetric positive definite $n \times n$ matrix. If the directions $\b{d}^{(0)}, \b{d}^{(1)}, \ldots, \b{d}^{(k)} \in \R^n$, $k \leq n - 1$, are nonzero and $Q$-conjugate, then they are linearly independent.
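The result can be checked numerically. The sketch below (the matrix and the Gram-Schmidt-style construction are illustrative assumptions, not the text's algorithm) builds nonzero $Q$-conjugate directions from the standard basis by projecting in the $Q$ inner product, then confirms they have full rank:

```python
import numpy as np

# Build Q-conjugate directions by a Gram-Schmidt process in the Q inner
# product, then verify they are linearly independent (full-rank matrix).
Q = np.array([[3.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 1.0]])       # made-up symmetric positive definite
d = []
for p in np.eye(3):
    v = p.copy()
    for dj in d:                      # remove Q-projection onto earlier d_j
        v -= (p @ Q @ dj) / (dj @ Q @ dj) * dj
    d.append(v)
D = np.column_stack(d)
# Q-conjugacy: off-diagonal entries of D^T Q D vanish
print(np.allclose(np.triu(D.T @ Q @ D, 1), 0))  # True
# Linear independence: the directions span R^3
print(np.linalg.matrix_rank(D))                 # 3
```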