Newton’s method, despite having a quadratic order of convergence, does not guarantee convergence to a solution for a general nonlinear objective function from an arbitrary initial point $\b{x}^{(0)} $.
In general, if the initial point is not sufficiently close to the solution, then the algorithm may not possess the descent property (i.e. $f(\b{x}^{(k+1)}) \not < f(\b{x}^{(k)})$ for some $k$).
To address this problem, and to avoid computing $\inv{ \b{F}(\b{x}^{(k)})} $ altogether, quasi-Newton methods use an approximation to $\inv{ \b{F}(\b{x}^{(k)})} $ in place of the true inverse. This approximation is updated at every stage so that it exhibits at least some properties of $\inv{ \b{F}(\b{x}^{(k)})} $.
Denote the approximation to $\inv{ \b{F}(\b{x}^{(k)})} $ by $\b{H}_k$. The iteration then takes the form
$$\b{x}^{(k+1)} = \b{x}^{(k)} - \alpha \b{H}_k \b{g}^{(k)} $$
where $\b{H}_k$ is an $n \times n $ real matrix, and $\alpha $ is a positive search parameter.
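As a concrete illustration, here is a minimal NumPy sketch of one step of this iteration; the quadratic objective, the choice $\b{H}_k = \b{I} $ (which reduces the update to plain gradient descent), and the fixed step size are illustrative assumptions, not part of the text.

```python
import numpy as np

# Illustrative quadratic objective f(x) = 1/2 x^T Q x - b^T x
# (an assumption for the demo, not from the text); its gradient is Q x - b.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(x):
    return 0.5 * x @ Q @ x - b @ x

def grad(x):
    return Q @ x - b

# One step of x_{k+1} = x_k - alpha * H_k g_k.  Taking H_k = I recovers
# plain gradient descent; any symmetric positive definite H_k is admissible.
x_k = np.array([0.0, 0.0])
H_k = np.eye(2)   # placeholder approximation to the inverse Hessian
alpha = 0.1       # fixed positive step size, chosen arbitrarily here
g_k = grad(x_k)
x_next = x_k - alpha * H_k @ g_k

print(f(x_next) < f(x_k))  # True: this small step decreases f
```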
Expanding $f $ about $\b{x}^{(k)} $ and substituting $\b{x}^{(k+1)} - \b{x}^{(k)} = -\alpha \b{H}_k \b{g}^{(k)} $ yields
$$\begin{align*} f(\b{x}^{(k+1)}) &= f(\b{x}^{(k)}) + \transpose{ \b{g}^{(k)}}(\b{x}^{(k+1)} - \b{x}^{(k)}) + o(\norm{ \b{x}^{(k+1)} - \b{x}^{(k)}}) \br &= f(\b{x}^{(k)}) - \alpha \transpose{ \b{g}^{(k)}} \b{H}_k \b{g}^{(k)} + o(\norm{ \b{H}_k \b{g}^{(k)}} \alpha) \end{align*}$$
As $\alpha $ tends to zero, the second term on the right-hand side of the above equation dominates the third. Thus, to guarantee a decrease in $f $ for sufficiently small $\alpha > 0 $, we must have
$$\transpose{ \b{g}^{(k)}} \b{H}_k \b{g}^{(k)} > 0 $$
A simple way to ensure this is to require that $\b{H}_k $ be positive definite.
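As a quick numerical sanity check of this requirement, the sketch below builds an arbitrary symmetric positive definite matrix and verifies that $\transpose{ \b{g}} \b{H} \b{g} > 0 $ for sampled nonzero vectors; the construction $\b{H} = \transpose{ \b{A}} \b{A} + \b{I} $ is one illustrative way to obtain such a matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# One illustrative way to build a symmetric positive definite matrix:
# H = A^T A + I (A^T A is positive semidefinite; adding I makes it definite).
A = rng.standard_normal((3, 3))
H = A.T @ A + np.eye(3)

# For every nonzero g, positive definiteness gives g^T H g > 0, which is
# exactly the descent condition derived above.
for _ in range(5):
    g = rng.standard_normal(3)
    assert g @ H @ g > 0
print("g^T H g > 0 held for all sampled g")
```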
In short, we have the following result:
Let $ f \in \mathcal{ C }^1 $, $\b{x}^{(k)} \in \R^n $, $\b{g}^{(k)} = \nabla f(\b{x}^{(k)}) \neq \b{0} $, and let $\b{H}_k $ be an $ n \times n $ real symmetric positive definite matrix. If we set $\b{x}^{(k+1)} = \b{x}^{(k)} - \alpha_k \b{H}_k \b{g}^{(k)}$, where $\alpha_k = \argmin{ \alpha \geq 0 } f(\b{x}^{(k)} - \alpha \b{H}_k \b{g}^{(k)})$, then $\alpha_k > 0 $ and $ f(\b{x}^{(k+1)}) < f(\b{x}^{(k)}) $.
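The following sketch instantiates this result on the same illustrative quadratic as above: it carries out the line search over $\alpha \geq 0 $ numerically (here with SciPy's bounded scalar minimizer, an implementation choice; the bracket $[0, 10]$ is arbitrary) and confirms that $\alpha_k > 0 $ and $ f(\b{x}^{(k+1)}) < f(\b{x}^{(k)}) $.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Same illustrative quadratic as above: f(x) = 1/2 x^T Q x - b^T x.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x_k = np.array([0.0, 0.0])
g_k = grad(x_k)            # nonzero at x_k, as the result requires

# Any symmetric positive definite H_k works; a diagonal one is used here.
H_k = np.diag([1.0, 0.5])
d_k = H_k @ g_k

# Line search over alpha >= 0, done numerically with a bounded scalar
# minimizer (the bracket [0, 10] is an arbitrary safe choice here).
phi = lambda alpha: f(x_k - alpha * d_k)
alpha_k = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded").x

x_next = x_k - alpha_k * d_k
print(alpha_k > 0, f(x_next) < f(x_k))  # both True, as the result asserts
```

For a quadratic objective the exact line search even has a closed form, $\alpha_k = \transpose{ \b{g}^{(k)}} \b{d}_k / \transpose{ \b{d}_k} \b{Q} \b{d}_k $ with $\b{d}_k = \b{H}_k \b{g}^{(k)} $, whose numerator is exactly the descent condition $\transpose{ \b{g}^{(k)}} \b{H}_k \b{g}^{(k)} > 0 $; this ties the positivity of $\alpha_k $ directly to the positive definiteness of $\b{H}_k $.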