

A Batch Inheritance Extreme Learning Machine Algorithm Based on Regular Optimization

Bin LIU, Youheng YANG, Zhibiao ZHAO, Chao WU, Haoran LIU, Yan WEN

Bin LIU, Youheng YANG, Zhibiao ZHAO, Chao WU, Haoran LIU, Yan WEN. A Batch Inheritance Extreme Learning Machine Algorithm Based on Regular Optimization[J]. Journal of Electronics & Information Technology, 2020, 42(7): 1734-1742. doi: 10.11999/JEIT190502

doi: 10.11999/JEIT190502 cstr: 32379.14.JEIT190502
Funds: The Natural Science Foundation of Hebei Province (F2019203320, E2018203398)
Detailed information
    About the authors:

    Bin LIU: male, born in 1953, professor and Ph.D. supervisor; research interests include data mining and signal estimation and recognition algorithms

    Youheng YANG: male, born in 1996, master's student; research interests include data mining and machine learning

    Zhibiao ZHAO: male, born in 1989, Ph.D. candidate; research interest is artificial-intelligence optimization algorithms

    Chao WU: male, born in 1990, Ph.D. candidate; research interest is computer vision

    Haoran LIU: male, born in 1980, professor and Ph.D. supervisor; research interests include wireless sensor networks and signal processing

    Yan WEN: male, born in 1963, professor and Ph.D. supervisor; research interests include data mining and artificial-intelligence optimization algorithms

    Corresponding author:

    Bin LIU, liubin@ysu.edu.cn

  • CLC number: TN911.7; TP391

  • Abstract:

    As a new type of neural network, the Extreme Learning Machine (ELM) offers extremely fast training and good generalization performance. To address the high computational complexity and huge memory demand of ELM on high-dimensional data, this paper proposes a Batch inheritance Extreme Learning Machine (B-ELM) algorithm. First, the dataset is evenly divided into batches, and an autoencoder network reduces the dimensionality of each batch. Second, an inheritance factor is introduced to link adjacent batches, and a Lagrangian optimization function is constructed within a regularization framework, yielding the mathematical model of the batch ELM. Finally, experiments are conducted on the MNIST, NORB and CIFAR-10 datasets. The results show that the proposed algorithm achieves high classification accuracy while effectively reducing computational complexity and memory consumption.
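The two ingredients the abstract names can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the sigmoid activation, the toy data, and the fixed inheritance factor `alpha` are assumptions, and the simple convex-combination update between consecutive batch solutions merely stands in for the paper's Lagrangian-based inheritance formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer: sigmoid(X @ W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def regularized_elm(H, T, C):
    """Closed-form regularized output weights: beta = (H'H + I/C)^-1 H'T."""
    return np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ T)

# Toy data: two classes with one-hot targets.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]

L, C = 50, 100.0              # hidden nodes, regularization strength
W = rng.normal(size=(10, L))  # random input weights (never trained)
b = rng.normal(size=L)        # random biases

# Full-batch regularized ELM.
H = elm_hidden(X, W, b)
beta = regularized_elm(H, T, C)
acc = (np.argmax(H @ beta, axis=1) == y).mean()

# Batch sketch: solve each batch separately, then blend each new solution
# with the previous one via a fixed inheritance factor (hypothetical
# stand-in for the paper's regularized Lagrangian update).
alpha = 0.5
beta_b = np.zeros((L, 2))
for Xb, Tb in zip(np.array_split(X, 4), np.array_split(T, 4)):
    Hb = elm_hidden(Xb, W, b)
    beta_b = alpha * beta_b + (1 - alpha) * regularized_elm(Hb, Tb, C)
acc_batch = (np.argmax(H @ beta_b, axis=1) == y).mean()
```

The key ELM property on display is that the input weights `W` and biases `b` stay random and untrained; only the output weights `beta` are solved in closed form, which is what makes training fast. The batch loop never holds more than one batch's hidden matrix at a time, which is the memory saving the batch scheme targets.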

  • Figure 1  Schematic of the ELM network structure

    Figure 2  Schematic of the B-ELM training process

    Figure 3  Sample images from the datasets

    Figure 4  Effect of the number of hidden nodes L on test accuracy

    Figure 5  Effect of the regularization parameter C on test accuracy

    Figure 6  Performance comparison of algorithms on the MNIST dataset

    Figure 7  Performance comparison of algorithms on the NORB dataset

    Figure 8  Performance comparison of algorithms on the CIFAR-10 dataset

    Table 1  Performance comparison on different datasets

    Method   MNIST Acc.(%)  Time(s)    NORB Acc.(%)  Time(s)    CIFAR-10 Acc.(%)  Time(s)
    SAE      98.60          4042.36    86.28         6438.56    43.37             60514.26
    SDA      98.72          3892.26    87.62         6572.14    43.61             87289.59
    DBM      99.05          14505.14   89.65         18496.64   43.12             90123.53
    ML-ELM   98.21          51.83      88.91         78.36      45.42             74.06
    H-ELM    99.12          28.97      91.28         42.74      50.21             62.76
    B-ELM    99.43          42.67      91.90         55.96      50.38             69.06
Publication history
  • Received: 2019-07-05
  • Revised: 2019-12-12
  • Available online: 2019-12-20
  • Published: 2020-07-23
