This paper first studies the unconstrained optimization problem. The steepest descent method, the quasi-Newton method, and the FR and PRP conjugate gradient methods, all effective for large-scale unconstrained optimization, are then reviewed, with a focus on the step-size conditions of the exact line search and the Wolfe line search. Emphasis is placed on the memory gradient method, one of the more computationally efficient algorithms for unconstrained optimization. A new step-size search rule based on the inexact Wolfe line search is proposed and used to improve the memory gradient algorithm. Finally, the improved algorithm is proved to be globally convergent under fairly weak conditions.
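To make the Wolfe line search mentioned above concrete, the following is a minimal sketch, not the paper's proposed rule: a bisection-style search for a step size satisfying the weak Wolfe conditions (the Armijo sufficient-decrease condition and the curvature condition), driven by steepest descent on a 1-D quadratic. All function names and the constants `c1`, `c2` are illustrative assumptions.

```python
def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search for a step size alpha satisfying the weak Wolfe
    conditions, with 0 < c1 < c2 < 1:
      Armijo:     f(x + a*d) <= f(x) + c1 * a * grad(x)*d
      curvature:  grad(x + a*d)*d >= c2 * grad(x)*d
    (1-D illustration; in R^n the products become inner products.)"""
    lo, hi = 0.0, float("inf")
    alpha = 1.0
    f0 = f(x)
    g0 = grad(x) * d                       # directional derivative at alpha = 0
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0:
            hi = alpha                     # Armijo fails: step too long, shrink
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) * d < c2 * g0:
            lo = alpha                     # curvature fails: step too short, grow
            alpha = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
        else:
            return alpha                   # both Wolfe conditions hold
    return alpha

# Steepest descent on f(x) = (x - 3)^2 using Wolfe step sizes
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x = 0.0
for _ in range(20):
    d = -grad(x)                           # steepest descent direction
    x += wolfe_line_search(f, grad, x, d) * d
```

The memory gradient method discussed in the paper replaces the direction `d = -grad(x)` with a combination of the current negative gradient and previous search directions; the line-search machinery above is the component the proposed step-size rule modifies.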