In the optimization problem
$$\begin{align*} \text{minimize } &f(\b{x}) \br \text{subject to } & \b{x} \in \Omega, \end{align*}$$
the function $f:\R^n\to\R$ that we wish to minimize is real-valued and is called the objective function, or cost function.
The vector $\b{x}$ is an $n$-vector of independent variables, that is, $\b{x} = \transpose{(x_1, x_2, …, x_n)} \in \R^n$. A point $\b{x}^* \in \Omega$ at which $f$ attains its minimum over $\Omega$ is called a minimizer of $f$ over $\Omega$.
The variables $x_1, x_2, …, x_{ n } $ are often referred to as decision variables.
The set $\Omega$ is a subset of $\R^n$ called the constraint set or feasible set.
If $\Omega = \R^n$, then the problem is referred to as an unconstrained optimization problem.
Otherwise, the problem is referred to as a constrained optimization problem.
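To make the distinction concrete, here is a minimal sketch with an assumed toy objective (not from the text): $f(\b{x}) = (x_1 - 1)^2 + (x_2 + 2)^2$, minimized first over $\Omega = \R^2$ and then over the nonnegative orthant $\Omega = \set{ \b{x} : \b{x} \geq 0 }$. Because this quadratic is separable, the constrained minimizer is obtained coordinate-wise.

```python
# Toy objective, assumed for illustration: f(x) = (x1 - 1)^2 + (x2 + 2)^2.
def f(x1, x2):
    return (x1 - 1) ** 2 + (x2 + 2) ** 2

# Unconstrained problem (Omega = R^2): set the gradient to zero,
# giving the minimizer (1, -2).
x_unconstrained = (1.0, -2.0)

# Constrained problem over Omega = {x : x >= 0}: this objective is
# separable, so each coordinate is minimized independently over [0, inf),
# which clips the unconstrained minimizer to the feasible set.
x_constrained = tuple(max(xi, 0.0) for xi in x_unconstrained)

print(x_constrained)      # (1.0, 0.0)
print(f(*x_constrained))  # 4.0 — the constraint raises the optimal value
```

Note how the constraint changes both the minimizer and the minimum value: the unconstrained optimum $(1, -2)$ is infeasible, so the best feasible point $(1, 0)$ attains a strictly larger objective value.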
Let $f: \R^n \to \R$ and $\Omega \subseteq \R^n$.
A point $\b{x}^* \in \Omega$ is a local minimizer of $f $ over $\Omega $ if
$\exists \epsilon > 0 $ such that $\forall \b{x} \in \Omega \setminus \set{ \b{x}^* }, \norm{ \b{x} - \b{x}^* } < \epsilon$ implies $f(\b{x}) \geq f(\b{x}^*)$.
If, in addition,
$\norm{ \b{x} - \b{x}^* } < \epsilon$ implies $f(\b{x}) > f(\b{x}^*)$,
then the point $\b{x}^*$ is a strict local minimizer.
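The definitions above can be checked numerically on a grid of nearby points. The functions below are assumed examples, not from the text: $g(x) = x^2$ has a strict local minimizer at $0$, while $h$, which is flat on $[-1, 1]$, has only non-strict local minimizers there, since neighboring points attain the same value and the strict inequality $f(\b{x}) > f(\b{x}^*)$ fails.

```python
# Assumed examples for illustration.
def g(x):
    return x * x                          # strict local min at 0

def h(x):
    return max(abs(x) - 1.0, 0.0) ** 2    # flat (zero) on [-1, 1]

def is_local_min(f, x_star, eps=1e-3, strict=False, samples=1000):
    """Check the definition on sample points x with 0 < |x - x*| < eps."""
    for k in range(1, samples + 1):
        d = eps * k / (samples + 1)
        for x in (x_star - d, x_star + d):   # x in Omega \ {x*}
            if strict and not f(x) > f(x_star):
                return False
            if not f(x) >= f(x_star):
                return False
    return True

print(is_local_min(g, 0.0, strict=True))   # True: strict local minimizer
print(is_local_min(h, 0.0))                # True: local minimizer
print(is_local_min(h, 0.0, strict=True))   # False: neighbors tie at 0
```

This is only a finite sample of the neighborhood, not a proof, but it illustrates exactly which inequality separates the two definitions.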
A point $\b{x}^* \in \Omega $ is a global minimizer of $f $ over $\Omega $ if
$$\forall \b{x} \in \Omega \setminus \set{ \b{x}^* }, f(\b{x}) \geq f(\b{x}^*) .$$
If, in addition,
$$\forall \b{x} \in \Omega \setminus \set{ \b{x}^* }, \quad f(\b{x}) > f(\b{x}^*),$$
then $\b{x}^*$ is a strict global minimizer.
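A function can have several local minimizers of which only one is global. As an assumed one-dimensional example (not from the text), $f(x) = x^4 - 3x^2 + x$ has two local minimizers over $\Omega = [-2, 2]$, one in each basin; a coarse grid search locates the global one.

```python
# Assumed example: two local minimizers, one global.
def f(x):
    return x ** 4 - 3 * x ** 2 + x

# Grid search over the feasible set Omega = [-2, 2].
grid = [-2.0 + 4.0 * i / 10000 for i in range(10001)]
x_global = min(grid, key=f)

print(round(x_global, 2))   # near -1.3: the left basin is the deeper one
print(f(x_global) < f(1.13))  # True: beats the other (local) minimizer
```

The point near $x \approx 1.13$ is a local minimizer (it satisfies the local definition within its basin) but not a global one, since the point near $x \approx -1.3$ attains a smaller value over all of $\Omega$.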