COMID (Composite Objective Mirror Descent) is an online algorithm that preserves the structure induced by L1 regularization, and its stochastic convergence rate can be obtained directly from its regret bound in the online setting. However, the final solution derived this way is the average of all T previous iterates and therefore has poor sparsity, whereas the individual solution is naturally sparse. Analyzing individual convergence rates has thus become increasingly important in stochastic learning. This paper studies the stochastic optimization of regularized non-smooth losses and proves individual convergence rates for COMID when the regularizer is L1 and L1 + L2, respectively. Extensive experiments on large-scale datasets demonstrate that the individual solution consistently improves sparsity while keeping almost the same accuracy; on datasets with poor sparse structure, the sparsity of the solution can even be improved more than fourfold.
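To make the sparsity claim concrete, below is a minimal sketch of one COMID update in the Euclidean special case with an L1 regularizer, where the composite step has a closed-form solution by soft-thresholding; it is the thresholding that zeroes out coordinates of the individual iterate, while averaging iterates destroys those zeros. The function name `comid_l1_step` and the parameters `eta` (step size) and `lam` (regularization weight) are illustrative, not the paper's notation; the general algorithm replaces the squared Euclidean distance with an arbitrary Bregman divergence.

```python
import numpy as np

def comid_l1_step(w, grad, eta, lam):
    """One COMID step with Euclidean Bregman divergence and L1 regularizer.

    Solves   argmin_v  <grad, v> + lam * ||v||_1 + ||v - w||^2 / (2 * eta),
    whose closed form is soft-thresholding of a plain gradient step.
    """
    z = w - eta * grad                                           # gradient step on the loss
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)   # soft-threshold: keeps exact zeros

# Illustration: the individual iterate w_T keeps exact zeros,
# while the averaged solution (1/T) * sum_t w_t generally does not.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
g = rng.normal(size=5)
print(comid_l1_step(w, g, eta=0.1, lam=1.0))
```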