Selective Ensemble Method of Extreme Learning Machine Based on Double-fault Measure

Pingfan XIA, Zhiwei NI, Xuhui ZHU, Liping NI

Citation: Pingfan XIA, Zhiwei NI, Xuhui ZHU, Liping NI. Selective Ensemble Method of Extreme Learning Machine Based on Double-fault Measure[J]. Journal of Electronics & Information Technology, 2020, 42(11): 2756-2764. doi: 10.11999/JEIT190617


doi: 10.11999/JEIT190617 cstr: 32379.14.JEIT190617
Funds: The National Natural Science Foundation of China (91546108, 71521001), The Anhui Provincial Natural Science Foundation (1908085QG298, 1908085MG232), The Open Research Fund Program of Key Laboratory of Process Optimization and Intelligent Decision-making, Ministry of Education, The Fundamental Research Funds for the Central Universities (JZ2019HGTA0053, JZ2019HGBZ0128)
Details
    Author biographies:

    Pingfan XIA: female, born in 1994, Ph.D. candidate; research interests include machine learning, artificial intelligence, and ensemble learning

    Zhiwei NI: male, born in 1963, professor and Ph.D. supervisor; research interests include artificial intelligence, machine learning, and cloud computing

    Xuhui ZHU: male, born in 1991, lecturer and M.S. supervisor; research interests include evolutionary computation and machine learning

    Liping NI: female, born in 1981, associate professor and M.S. supervisor; research interests include fractal data mining, artificial intelligence, and machine learning

    Corresponding author:

    Xuhui ZHU, zhuxuhui@hfut.edu.cn

  • CLC number: TP391

  • Abstract: Extreme Learning Machine (ELM) has the advantages of fast learning, easy implementation, and strong generalization ability, but the classification performance of a single ELM is unstable. Ensemble learning can effectively improve the classification performance of a single ELM, but as the data scale and the number of base ELMs grow, the computational complexity rises sharply and consumes large amounts of computing resources. To address these problems, this paper proposes a selective ensemble method of extreme learning machine based on the double-fault measure (DFSEE), analyzed in detail from both theoretical and experimental perspectives. First, multiple training subsets are drawn from the training set by repeated bootstrap sampling, and an ELM is trained independently on each subset, yielding a pool of base ELMs with high diversity. Second, the double-fault measure of each base ELM is computed, and the base ELMs are sorted in ascending order of this measure. Finally, following this order, base ELMs are added to the ensemble one by one under majority voting until the ensemble accuracy peaks, which yields the optimal sub-ensemble of base ELMs; the theoretical basis of this procedure is also analyzed. Experimental results on 10 UCI datasets show that, compared with other methods, DFSEE achieves higher ensemble accuracy with fewer base ELMs, demonstrating its effectiveness and significance.
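    The paper's full algorithm is not reproduced on this page, but the abstract's three steps (bootstrap a pool of base ELMs, rank them by double-fault measure, aggregate in order under majority voting until accuracy peaks) can be sketched as follows. This is a minimal illustration, assuming the per-classifier score is the average of its pairwise double-fault values over the pool, that a labeled validation set is available, and that class labels are nonnegative integers; `elm_pool`, `double_fault`, and `dfsee` are hypothetical names, not the authors' code.

```python
import numpy as np

def double_fault(pred_i, pred_j, y):
    # Fraction of samples misclassified by BOTH classifiers
    # (the d cell of Table 1 divided by the total a+b+c+d).
    return np.mean((pred_i != y) & (pred_j != y))

def dfsee(elm_pool, X_val, y_val):
    preds = [clf.predict(X_val) for clf in elm_pool]
    m = len(preds)
    # Score each base ELM by its average double-fault measure
    # against every other pool member (lower = errors less correlated).
    scores = [np.mean([double_fault(preds[i], preds[j], y_val)
                       for j in range(m) if j != i])
              for i in range(m)]
    order = np.argsort(scores)  # ascending order of double-fault measure
    best_acc, best_k = -1.0, 1
    for k in range(1, m + 1):
        # Plurality (majority) vote across the first k base ELMs.
        votes = np.stack([preds[i] for i in order[:k]]).astype(int)
        voted = np.apply_along_axis(lambda v: np.bincount(v).argmax(),
                                    axis=0, arr=votes)
        acc = np.mean(voted == y_val)
        if acc > best_acc:
            best_acc, best_k = acc, k
    return [elm_pool[i] for i in order[:best_k]], best_acc
```

    With a pool built by bootstrap resampling (e.g., one ELM trained per `sklearn.utils.resample` replicate of the training set), the function returns the selected sub-ensemble and its validation accuracy.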
  • Figure 1. Trends of classification accuracy for ensembles ordered by pairwise diversity measures under different base-ELM ensemble sizes

    Table 1. Joint distribution of two classifiers

                                  ${f_i}({x_k}) = {y_k}$    ${f_i}({x_k}) \ne {y_k}$
    ${f_j}({x_k}) = {y_k}$        $a$                       $b$
    ${f_j}({x_k}) \ne {y_k}$      $c$                       $d$
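    Table 1's counts give the usual pairwise double-fault measure from the diversity literature (consistent with the table's notation; the paper's own derivation is not reproduced on this page): the proportion of samples that both classifiers misclassify,

    $\mathrm{DF}_{i,j} = \dfrac{d}{a + b + c + d}$

    A low value means the two classifiers rarely fail together, which is exactly the property the ascending DFSEE ordering favors.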

    Table 2. UCI datasets

    Dataset      Instances   Attributes   Classes
    Heart        270         13           2
    Cleveland    303         13           5
    Bupa         345         6            2
    Wholesale    440         7            2
    Diabetes     768         8            2
    German       1000        20           2
    QSAR         1055        41           2
    CMC          1473        9            3
    Spambase     4601        57           2
    Wineq-w      4898        11           7

    Table 3. Ensemble classification accuracy (%) with different numbers of base ELMs (100, 200, 300)

                 |            100             |            200             |            300
    Dataset      | DFSEE  Max    Mean   Min   | DFSEE  Max    Mean   Min   | DFSEE  Max    Mean   Min
    Heart        | 80.48  75.00  63.60  51.14 | 82.48  76.33  63.63  49.24 | 82.24  76.57  63.68  48.57
    Cleveland    | 57.01  55.87  48.66  38.86 | 57.61  56.62  48.68  38.11 | 57.91  56.82  48.68  37.71
    Bupa         | 75.37  70.67  60.21  48.18 | 76.95  71.51  60.15  47.09 | 77.40  72.18  60.23  46.49
    Wholesale    | 94.04  89.56  82.78  74.19 | 94.78  90.26  82.78  73.48 | 95.15  90.59  82.73  73.04
    Diabetes     | 71.88  70.62  61.83  52.64 | 73.10  71.49  61.73  51.03 | 73.77  71.81  61.74  50.65
    German       | 77.40  75.17  69.61  63.63 | 78.08  76.12  69.63  62.83 | 78.58  76.40  69.64  62.62
    QSAR         | 86.26  82.67  74.45  65.63 | 87.78  83.63  74.52  64.92 | 88.28  83.89  74.49  63.98
    CMC          | 62.99  60.45  54.21  46.57 | 63.41  61.03  54.25  45.92 | 63.88  61.33  54.23  45.46
    Spambase     | 80.78  77.57  70.13  63.32 | 81.55  78.17  70.12  62.79 | 81.70  78.42  70.13  62.53
    Wineq-w      | 51.38  50.80  46.97  44.52 | 51.73  51.03  46.94  44.21 | 51.90  51.20  46.94  44.07

    Table 4. Comparison of DFSEE and Bagging classification accuracy (%) with different numbers of base ELMs (100, 200, 300; n: number of base ELMs selected)

                 |         100           |         200           |         300
    Dataset      | Bagging  DFSEE   n    | Bagging  DFSEE   n    | Bagging  DFSEE   n
    Heart        | 72.10    80.48   13   | 71.67    82.48   11   | 71.71    82.24   11
    Cleveland    | 49.25    57.01   4    | 49.25    57.61   6    | 49.25    57.91   6
    Bupa         | 65.61    75.37   12   | 64.14    76.95   12   | 64.98    77.40   15
    Wholesale    | 86.44    94.04   8    | 86.00    94.78   10   | 86.11    95.15   11
    Diabetes     | 63.79    71.88   7    | 63.51    73.10   7    | 63.63    73.77   8
    German       | 74.13    77.40   14   | 74.40    78.08   9    | 74.38    78.58   9
    QSAR         | 80.22    86.26   9    | 80.41    87.78   8    | 80.47    88.28   9
    CMC          | 58.22    62.99   9    | 58.44    63.41   12   | 58.45    63.88   13
    Spambase     | 73.34    80.78   12   | 73.37    81.55   13   | 73.46    81.70   11
    Wineq-w      | 46.58    51.38   8    | 46.57    51.73   11   | 46.56    51.90   14

    Table 5. Comparison with other methods in ensemble accuracy (%) and ensemble size (200 base ELMs; n: number of base ELMs selected)

    Dataset      | DFSEE   n   | AGOB    n   | POBE    n    | MOAG    n   | EP-FP   n   | SCG-P   n
    Heart        | 82.48   11  | 74.14   49  | 77.52   96   | 74.86   43  | 74.38   95  | 75.24   38
    Cleveland    | 57.61   6   | 54.43   22  | 51.09   132  | 50.85   25  | 49.25   95  | 56.25   1
    Bupa         | 76.95   12  | 69.93   37  | 72.95   99   | 69.47   59  | 65.89   66  | 76.89   48
    Wholesale    | 94.78   10  | 89.67   36  | 92.74   99   | 88.59   27  | 86.11   96  | 87.85   9
    Diabetes     | 73.10   7   | 66.03   26  | 68.99   102  | 66.27   52  | 63.73   89  | 65.30   58
    German       | 78.08   9   | 75.15   36  | 76.60   96   | 75.30   38  | 74.47   86  | 75.18   54
    QSAR         | 87.78   8   | 83.48   24  | 84.37   100  | 83.94   32  | 80.43   88  | 82.02   37
    CMC          | 63.41   12  | 59.63   47  | 60.92   103  | 59.67   51  | 58.46   97  | 59.51   67
    Spambase     | 81.55   13  | 76.18   32  | 79.12   97   | 76.47   67  | 76.64   93  | 76.66   58
    Wineq-w      | 51.73   11  | 50.37   23  | 49.48   93   | 48.10   34  | 48.60   96  | 50.98   46

    Table 6. Comparison with other methods in running time (s)

    Dataset      DFSEE   AGOB    POBE    MOAG    EP-FP    SCG-P
    Heart        0.80    10.96   0.79    0.87    18.24    0.86
    Cleveland    0.77    22.36   0.73    1.10    2.77     1.10
    Bupa         0.86    13.97   0.85    0.95    41.26    0.95
    Wholesale    1.16    17.26   1.15    1.27    21.97    1.26
    Diabetes     1.29    17.95   1.28    1.40    30.40    1.39
    German       1.79    12.58   1.78    1.86    11.58    1.86
    QSAR         2.29    14.01   2.29    2.37    23.55    2.37
    CMC          2.25    24.44   2.21    2.62    30.85    2.61
    Spambase     8.54    43.86   8.52    8.80    110.46   8.78
    Wineq-w      7.71    79.18   7.58    8.85    48.56    8.82
Publication history
  • Received: 2019-08-12
  • Revised: 2020-06-21
  • Available online: 2020-07-17
  • Published: 2020-11-16
