This paper proposes a new supermemory gradient method for unconstrained optimization problems. The algorithm searches along a descent direction of the objective function, and each iteration makes full use of the information from the previous several iterations while avoiding the storage and computation of the Hessian matrix of the objective function; it is therefore well suited to large-scale unconstrained optimization problems. Under appropriate assumptions, the proposed algorithm is shown to be globally convergent. Numerical experiments demonstrate its effectiveness.
In this paper, we propose a new supermemory gradient method for unconstrained optimization. An attractive property of the proposed method is that the direction it generates is always a descent direction for the objective function. At each iteration, the method makes full use of the information from the previous several iterations and avoids storing and computing any matrices associated with the Hessian of the objective function. It is therefore suitable for solving large-scale unconstrained optimization problems. Under appropriate conditions, we show that the new supermemory gradient method with a nonmonotone line search is globally convergent. Numerical experiments show that the proposed algorithm is effective.
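Since the abstract does not give the paper's concrete update formulas, the following Python sketch only illustrates the general shape of such a method: a search direction built from the negative gradient plus a safeguarded combination of the previous m directions (so the direction is always descent, without any Hessian information), combined with a Grippo-Lampariello-Lucidi-type nonmonotone line search. The function name supermemory_gradient, the weight formula for beta, and all parameters (m, rho, delta, memory_M) are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def supermemory_gradient(f, grad, x0, m=3, rho=0.4, delta=1e-4,
                         memory_M=5, tol=1e-6, max_iter=1000):
    """Generic supermemory gradient sketch with a nonmonotone line search.

    NOTE: the beta safeguard below is one standard construction that
    guarantees g^T d <= -(1 - rho) * ||g||^2; the paper's own choice of
    weights and line-search rule may differ.
    """
    x = np.asarray(x0, dtype=float)
    past_dirs = []            # last m search directions d_{k-1}, ..., d_{k-m}
    past_f = [f(x)]           # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Direction: steepest descent plus a safeguarded combination of
        # the previous m directions (no Hessian storage or computation).
        d = -g
        for d_prev in past_dirs:
            beta = rho * gnorm / (len(past_dirs) * np.linalg.norm(d_prev))
            d = d + beta * d_prev
        # Nonmonotone Armijo-type (GLL) test: compare against the maximum
        # of the last memory_M function values instead of f(x_k) alone.
        f_ref = max(past_f)
        gtd = g @ d
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * gtd and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        past_dirs = ([d] + past_dirs)[:m]       # keep the m newest directions
        past_f = (past_f + [f(x)])[-memory_M:]  # keep the memory_M newest values
    return x

if __name__ == "__main__":
    # Toy example: minimize 0.5*||x||^2 + x_1, whose minimizer is (-1, 0).
    f = lambda x: 0.5 * x @ x + x[0]
    grad = lambda x: x + np.array([1.0, 0.0])
    print(supermemory_gradient(f, grad, np.array([5.0, -3.0])))
```

With the safeguard above, g^T d = -||g||^2 + sum_i beta_i g^T d_{k-i} <= -(1 - rho) * ||g||^2 < 0, which is why the direction is descent regardless of the stored history; this mirrors the descent property claimed in the abstract, though the paper's precise construction is not reproduced here.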