We propose a novel progressive framework to optimize deep neural networks. The idea is to combine the stability of linear methods with the ability of deep learning methods to learn complex and abstract internal representations. We insert a linear loss layer between the input layer and the first hidden non-linear layer of a traditional deep model. The loss objective for optimization is a weighted sum of the linear loss of the newly added layer and the non-linear loss of the final output layer. For cross-modal retrieval tasks such as text-to-image and image-to-text search, we modify the model structure of deep canonical correlation analysis (DCCA) by adding a third semantic view to regularize text and image pairs, and embed the structure into our framework. Experimental results show that the modified model outperforms comparable state-of-the-art approaches on the NUS-WIDE dataset from the National University of Singapore. To validate the generalization ability of our framework, we apply it to RankNet, a ranking model optimized by stochastic gradient descent. Our method outperforms RankNet and converges more quickly, indicating that our progressive framework can provide a better and faster solution for optimizing deep neural networks.
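The following is a minimal sketch of the progressive idea described above, written in PyTorch for a generic regression setting. The class name `ProgressiveNet`, the weighting parameter `alpha`, the layer sizes, and the use of mean-squared-error losses are illustrative assumptions rather than the paper's exact formulation; the sketch only shows how an auxiliary linear loss attached near the input can be combined with the non-linear loss of the output layer into one weighted objective.

```python
# Minimal sketch of the progressive framework (assumptions: PyTorch, MSE losses,
# regression targets). Not the paper's exact architecture.
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        # Auxiliary linear head attached directly to the input representation,
        # standing in for the "linear loss layer" inserted before the first
        # hidden non-linear layer.
        self.linear_head = nn.Linear(in_dim, out_dim)
        # Conventional deep non-linear branch ending in the usual output layer.
        self.deep = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.linear_head(x), self.deep(x)

def progressive_loss(linear_out, deep_out, target, alpha=0.5):
    # Weighted sum of the linear loss and the non-linear (output-layer) loss.
    mse = nn.functional.mse_loss
    return alpha * mse(linear_out, target) + (1.0 - alpha) * mse(deep_out, target)

# Usage: one optimization step on random data.
model = ProgressiveNet(in_dim=64, hidden_dim=128, out_dim=10)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 64), torch.randn(32, 10)
linear_out, deep_out = model(x)
loss = progressive_loss(linear_out, deep_out, y, alpha=0.5)
loss.backward()
opt.step()
```

Under this sketch, `alpha` controls how strongly the stable linear objective guides early training relative to the non-linear objective of the deep branch; the same weighted-sum pattern could be wrapped around other base models such as DCCA or RankNet.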