We present a new manifold learning algorithm called Local Orthogonality Preserving Alignment (LOPA). Our algorithm is inspired by the Local Tangent Space Alignment (LTSA) method, which aligns multiple local neighborhoods into a global coordinate system using affine transformations. However, LTSA often fails to preserve original geometric quantities such as distances and angles. Although an iterative alignment procedure for preserving orthogonality was suggested by the authors of LTSA, neither the corresponding initialization nor experiments were given. Procrustes Subspaces Alignment (PSA) implements the orthogonality-preserving idea by estimating each rotation transformation separately with simulated annealing. However, the optimization in PSA is complicated, and multiple separately estimated local rotations may produce globally contradictory results. To address these difficulties, we first use the pseudo-inverse trick of LTSA to represent each local orthogonal transformation in terms of the unified global coordinates. Second, the orthogonality constraints are relaxed into an instance of semi-definite programming (SDP). Finally, a two-step iterative procedure is employed to further reduce the error in the orthogonality constraints. Extensive experiments show that LOPA can faithfully preserve the distances, angles, inner products, and neighborhoods of the original datasets. In comparison, the embedding performance of LOPA is better than that of PSA and comparable to that of state-of-the-art algorithms such as MVU and MVE, while the runtime of LOPA is significantly faster than that of PSA, MVU, and MVE.
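As a point of reference, the sketch below (our own illustration, not code from the paper) implements the plain LTSA alignment that the abstract builds on, together with a simple local distance-distortion score. The function names (ltsa_embed, local_distance_error), the neighborhood size, and the toy Swiss-roll data are assumptions for illustration only. Because LTSA's local maps are merely affine, the distortion it reports is typically far from zero, which is exactly the gap that LOPA's orthogonality constraints aim to close.

```python
import numpy as np

def ltsa_embed(X, k=12, d=2):
    """Minimal LTSA-style embedding (the method LOPA builds on).

    X : (N, D) data matrix; k : neighborhood size; d : target dimension.
    Aligns the local tangent-space coordinates of each k-neighborhood into a
    single global coordinate system via an eigenproblem.
    """
    N = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(dist, axis=1)[:, :k]        # k nearest neighbors (incl. self)

    B = np.zeros((N, N))                          # global alignment matrix
    for i in range(N):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)         # centered local neighborhood
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        # Gi spans the constant vector and the top-d local tangent directions
        Gi = np.hstack([np.ones((k, 1)) / np.sqrt(k), U[:, :d]])
        B[np.ix_(idx, idx)] += np.eye(k) - Gi @ Gi.T

    # global coordinates: eigenvectors for the 2nd..(d+1)-th smallest eigenvalues
    _, V = np.linalg.eigh(B)
    return V[:, 1:d + 1]

def local_distance_error(X, T, k=12):
    """Mean relative distortion of pairwise distances within k-neighborhoods,
    after fitting one global scale (the eigenvector embedding is only defined
    up to an affine map). Orthogonal local maps would drive this toward zero."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(dist, axis=1)[:, :k]
    dx_all, dt_all = [], []
    for idx in nbrs:
        dx_all.append(np.linalg.norm(X[idx][:, None] - X[idx][None, :], axis=-1).ravel())
        dt_all.append(np.linalg.norm(T[idx][:, None] - T[idx][None, :], axis=-1).ravel())
    dx, dt = np.concatenate(dx_all), np.concatenate(dt_all)
    mask = dx > 0
    scale = dx[mask] @ dt[mask] / (dt[mask] @ dt[mask])   # least-squares global scale
    return float(np.mean(np.abs(scale * dt[mask] - dx[mask]) / dx[mask]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)                # toy Swiss-roll surface
    t = 1.5 * np.pi * (1 + 2 * rng.random(400))
    h = 10 * rng.random(400)
    X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
    T = ltsa_embed(X, k=12, d=2)
    print("local distance distortion of plain LTSA:", local_distance_error(X, T, k=12))
```

In this sketch the global scale is fitted by least squares before measuring distortion, so the reported error reflects shape deformation rather than the arbitrary normalization of the eigenvector solution; an orthogonality-preserving alignment in the spirit of LOPA would replace the purely affine local maps with (approximately) rotational ones and thereby keep local distances and angles close to those of the input data.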