Improved Algorithm Based on Non-negative Low Rank and Sparse Graph for Semi-supervised Learning
doi: 10.11999/JEIT160559 cstr: 32379.14.JEIT160559
Funds: The National Natural Science Foundation of China (61473154)
Abstract: The semi-supervised learning algorithm based on a non-negative low-rank and sparse graph cannot describe the structure of the data accurately. Therefore, an improved algorithm that integrates smoothed low-rank representation with a weighted sparsity constraint is proposed. The algorithm improves the low-rank term and the sparse term of the classical algorithm respectively, so that both the global subspace structure and the locally linear structure of the data are captured accurately. When building the objective function, the log-determinant function is used in place of the nuclear norm to approximate the rank function smoothly; meanwhile, the shape-interaction information and the label information of the labeled samples are used to build a weighted sparsity-constraint regularization term. The objective function is then solved by a linearized alternating direction method with adaptive penalty, and the graph is reconstructed by an effective post-processing step. Finally, a semi-supervised classification framework based on local and global consistency completes the learning task. Experimental results on the ORL, Extended Yale B, and USPS databases show that the improved algorithm increases the accuracy of semi-supervised learning.
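The abstract's central change is replacing the nuclear norm (the sum of singular values) with the log-determinant heuristic, which penalizes large singular values only logarithmically and so estimates rank more smoothly. A minimal NumPy sketch of the two surrogates is below; this is an illustration of the general log-det heuristic (with an assumed smoothing parameter `delta`), not the paper's actual objective or solver.

```python
import numpy as np

def logdet_rank_surrogate(X, delta=1.0):
    """Smoothed rank estimate: sum_i log(1 + sigma_i / delta),
    i.e. the log-det heuristic applied to the singular values of X.
    Large singular values contribute only logarithmically, so the
    surrogate tracks rank more closely than the nuclear norm does."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.log(1.0 + sigma / delta))

def nuclear_norm(X):
    """Nuclear norm: sum of singular values (convex rank surrogate)."""
    return np.sum(np.linalg.svd(X, compute_uv=False))
```

For a matrix with singular values (10, 1, 0), the nuclear norm is 11, dominated by the single large value, while the log-det surrogate is log(11) + log(2) ≈ 3.09, much closer to the true rank of 2.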
Key words:
- Semi-supervised learning /
- Graph model /
- Low rank representation /
- Sparsity constraint
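The final step described in the abstract, classification with the local and global consistency (LGC) framework of Zhou et al., has a simple closed form: labels are propagated over the learned graph via F = (I − αS)⁻¹Y with the symmetrically normalized affinity S = D^(−1/2) W D^(−1/2). A minimal sketch of that step, assuming a precomputed affinity matrix `W` and one-hot label matrix `Y` (not the paper's full pipeline, which first learns W from the non-negative low-rank and sparse representation):

```python
import numpy as np

def lgc_classify(W, Y, alpha=0.99):
    """Local and global consistency label propagation.
    W: (n, n) symmetric non-negative affinity matrix (zero diagonal).
    Y: (n, c) one-hot labels; all-zero rows mark unlabeled samples.
    Returns the predicted class index for every sample."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt            # normalized affinity
    # Closed-form limit of the iteration F <- alpha*S*F + (1-alpha)*Y
    F = np.linalg.solve(np.eye(W.shape[0]) - alpha * S, Y)
    return F.argmax(axis=1)
```

On a toy graph with two disconnected pairs of nodes and one labeled node per pair, the unlabeled node in each pair inherits its neighbor's label.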