(Translated from the Chinese abstract.) A neural network model in the form of a differential inclusion is constructed to solve nonlinear optimization problems with equality and inequality constraints. By introducing a higher-order compensation term with a weighting matrix into the network model, the convergence speed of the neural optimization computation is improved, and the behavior of solutions converging step by step from the infeasible region into the stable region is also improved. Theoretically, the global existence and uniqueness of the network's solution are proved, as well as its boundedness and its convergence in finite time to the optimal solution set of the optimization problem; the global attractivity of the neural network is also analyzed. Three numerical examples verify the effectiveness of the proposed neural optimization method.
A recurrent neural network in the form of a differential inclusion is proposed for solving a class of nonlinear optimization problems with inequality and equality constraints. A higher-order compensation term with a weighting matrix is incorporated into the neural model; as a result, the convergence rate of the neural computation is significantly increased, and the instability of solutions moving from the infeasible region into the feasible region is remedied. It is proved that the solution of the proposed network exists globally and is unique, and moreover that it is bounded and converges in finite time to the optimal solution set of the optimization problem. The global attractivity of the neural network is also analyzed. Three numerical examples demonstrate the effectiveness and good performance of the proposed neural network.
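The paper's exact dynamics (a differential inclusion with a weighted higher-order compensation term) are not reproduced in the abstract. As a minimal illustration of the general idea, the sketch below simulates the classical projection-type neural network dx/dt = P_Omega(x - a*grad f(x)) - x with forward-Euler steps on a toy quadratic program; the problem data, step sizes, and function names here are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy QP:  min x1^2 + x2^2 - 2*x1 - 4*x2   s.t.  x >= 0
# (optimum at x* = (1, 2); these data are assumptions for illustration).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])

def grad(x):
    """Gradient of the quadratic objective f(x) = 0.5 x'Qx + c'x."""
    return Q @ x + c

def project(x):
    """Projection onto the feasible set Omega = {x : x >= 0}."""
    return np.maximum(x, 0.0)

def solve(x0, alpha=0.1, h=0.05, steps=4000):
    """Forward-Euler integration of dx/dt = P_Omega(x - alpha*grad(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (project(x - alpha * grad(x)) - x)
    return x

x_star = solve([5.0, -3.0])
print(x_star)  # trajectory settles near the optimum (1, 2)
```

Note that the infeasible start x0 = (5, -3) is driven into the feasible region by the projection term before converging, mirroring the infeasible-to-feasible behavior the abstract discusses; the paper's compensation term is said to accelerate exactly this phase.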