A new class of memory gradient methods for unconstrained optimization problems is presented. At each iteration, the method uses information from the current and previous iterates to generate a descent direction, and determines the step size by an exact line search or a Wolfe inexact line search. Global convergence and a linear convergence rate are proved under mild conditions. Numerical experiments show that the new method is efficient in practical computation.
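For reference, a memory gradient iteration is usually written in the generic form below; the particular memory parameter $\beta_k$ proposed in the paper is not specified in the abstract, so the scheme shown here, together with the standard Wolfe conditions, is only an illustrative sketch.

$$
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_k =
\begin{cases}
-g_k, & k = 0,\\[2pt]
-g_k + \beta_k d_{k-1}, & k \ge 1,
\end{cases}
\qquad g_k = \nabla f(x_k).
$$

The step size $\alpha_k$ is chosen either by the exact line search $\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k)$ or by the Wolfe conditions

$$
f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{T} d_k, \qquad
\nabla f(x_k + \alpha_k d_k)^{T} d_k \ge \sigma g_k^{T} d_k,
$$

with constants $0 < \delta < \sigma < 1$.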