The subroutine uses the gradient and the Hessian matrix, and it requires continuous first- and second-order derivatives of the objective function inside the feasible region. If second-order ...
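The excerpt does not name the subroutine; as a minimal sketch, the call below assumes a Newton-type routine such as NLPTR, which accepts user-written modules for the objective, gradient, and Hessian. The Rosenbrock function, starting point, and option vector are illustrative choices, not values taken from this excerpt.

   proc iml;
      /* objective: Rosenbrock function */
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         return( .5 * (y1 * y1 + y2 * y2) );
      finish F_ROSEN;

      /* analytic gradient (row vector) */
      start G_ROSEN(x);
         g = j(1, 2, 0.);
         g[1] = -200. * x[1] * (x[2] - x[1]*x[1]) - (1. - x[1]);
         g[2] =  100. * (x[2] - x[1]*x[1]);
         return( g );
      finish G_ROSEN;

      /* analytic Hessian (symmetric 2x2 matrix) */
      start H_ROSEN(x);
         h = j(2, 2, 0.);
         h[1,1] = -200. * (x[2] - 3.*x[1]*x[1]) + 1.;
         h[2,2] =  100.;
         h[1,2] = -200. * x[1];
         h[2,1] = h[1,2];
         return( h );
      finish H_ROSEN;

      x0   = {-1.2 1.};   /* starting point */
      optn = {0 2};       /* minimize; moderate printed output */
      call nlptr(rc, xres, "F_ROSEN", x0, optn) grd="G_ROSEN" hes="H_ROSEN";
      print xres;
   quit;

Supplying both the gradient and the Hessian analytically, as here, avoids the finite-difference approximations the routine would otherwise fall back on.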
These techniques work well for medium to moderately large optimization problems where the objective function and the gradient are much faster to compute than the Hessian matrix. The NLPQN subroutine ...
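Because quasi-Newton methods build up a Hessian approximation (for example, by BFGS updates) from successive gradients, a typical NLPQN call supplies only the objective and gradient modules. This is a sketch under the same assumed Rosenbrock example; the module names and option values are illustrative.

   proc iml;
      /* objective: Rosenbrock function */
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         return( .5 * (y1 * y1 + y2 * y2) );
      finish F_ROSEN;

      /* analytic gradient; the Hessian is approximated internally */
      start G_ROSEN(x);
         g = j(1, 2, 0.);
         g[1] = -200. * x[1] * (x[2] - x[1]*x[1]) - (1. - x[1]);
         g[2] =  100. * (x[2] - x[1]*x[1]);
         return( g );
      finish G_ROSEN;

      x0   = {-1.2 1.};   /* starting point */
      optn = {0 2};       /* minimize; moderate printed output */
      call nlpqn(rc, xres, "F_ROSEN", x0, optn) grd="G_ROSEN";
      print xres;
   quit;

No Hessian module is passed, which is precisely the situation the paragraph describes: each iteration costs only function and gradient evaluations.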