
Online Visual Tracking via Adaptive Deep Sparse Neural Network

HOU Zhiqiang, WANG Xin, YU Wangsheng, DAI Bo, JIN Zefenfen

Citation: HOU Zhiqiang, WANG Xin, YU Wangsheng, DAI Bo, JIN Zefenfen. Online Visual Tracking via Adaptive Deep Sparse Neural Network[J]. Journal of Electronics & Information Technology, 2017, 39(5): 1079-1087. doi: 10.11999/JEIT160762

doi: 10.11999/JEIT160762 cstr: 32379.14.JEIT160762

Funds: The National Natural Science Foundation of China (61473309); The Natural Science Basic Research Plan in Shaanxi Province (2015JM6269, 2016JM6050)

  • Abstract: In visual tracking, efficient and robust feature representation is the key to overcoming tracking drift in complex environments. To address the costly pre-training of deep networks and the drift-prone behavior of single-network trackers, this paper proposes an online tracking algorithm based on an adaptive deep sparse network within the particle filter framework. Using the ReLU activation function, the algorithm constructs a deep sparse network structure that adapts its selectivity to different target types, and a robust tracking network is obtained through online training with only a limited number of labeled samples. Experimental results show that, compared with current mainstream tracking algorithms, the proposed algorithm achieves the best average success rate and precision, improving on the deep-learning-based DLT algorithm by 20.64% and 17.72%, respectively. Under complex conditions such as illumination changes and similar backgrounds, the algorithm exhibits good robustness and effectively mitigates tracking drift.
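The pipeline the abstract describes — a ReLU network that produces sparse feature codes and scores target candidates inside a particle filter loop — can be sketched as follows. This is a minimal illustration only: the layer sizes, the Gaussian particle-sampling scheme, and all function names (`SparseTrackerNet`, `track_frame`, `extract_patch`) are assumptions for the sketch, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU outputs exact zeros for negative inputs, so hidden
    # activations are naturally sparse (the property exploited here).
    return np.maximum(0.0, x)

class SparseTrackerNet:
    """Illustrative two-layer ReLU network scoring target candidates."""

    def __init__(self, in_dim=1024, hidden=256):
        # He-style initialization, appropriate for ReLU layers.
        self.W1 = rng.normal(0.0, np.sqrt(2.0 / in_dim), (hidden, in_dim))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, np.sqrt(2.0 / hidden), hidden)
        self.b2 = 0.0

    def score(self, patch):
        # patch: flattened candidate image region, shape (in_dim,)
        h = relu(self.W1 @ patch + self.b1)          # sparse hidden code
        return 1.0 / (1.0 + np.exp(-(self.w2 @ h + self.b2)))  # confidence in (0, 1)

def track_frame(net, prev_state, extract_patch, n_particles=100, sigma=4.0):
    # One particle-filter step: sample candidate states around the previous
    # state, score each candidate patch with the network, keep the best.
    particles = prev_state + rng.normal(0.0, sigma, (n_particles, prev_state.size))
    scores = np.array([net.score(extract_patch(p)) for p in particles])
    return particles[np.argmax(scores)]
```

In an online setting, the selected candidate would then serve as a new labeled sample to fine-tune the network, which is how the paper's tracker adapts with limited supervision.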
  • SMEULDERS A W M, CHU D M, CUCCHIARA R, et al. Visual tracking: An experimental survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1442-1468. doi: 10.1109/TPAMI.2013.230.
    WANG Naiyan, SHI Jianping, YEUNG Dityan, et al. Understanding and diagnosing visual tracking systems[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3101-3109. doi: 10.1109/ICCV.2015.355.
    ROSS D A, LIM J, LIN R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1-3): 125-141. doi: 10.1007/s11263-007-0075-7.
    BABENKO B, YANG M, and BELONGIE S. Robust object tracking with online multiple instance learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8): 1619-1632. doi: 10.1109/TPAMI.2010.226.
    KALAL Z, MIKOLAJCZYK K, and MATAS J. Tracking-learning-detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1409-1422. doi: 10.1109/TPAMI.2011.239.
    ZHANG Kaihua, ZHANG Lei, and YANG Minghsuan. Real-time compressive tracking[C]. European Conference on Computer Vision, Florence, Italy, 2012: 864-877.
    WU Yi, LIM Jongwoo, and YANG Minghsuan. Online object tracking: A benchmark[C]. IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 2411-2418. doi: 10.1109/CVPR.2013.312.
    MA Chao, HUANG Jiabin, YANG Xiaokang, et al. Hierarchical convolutional features for visual tracking[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3074-3082.
    NAJAFABADI M M, VILLANUSTRE F, KHOSHGOFTAAR T M, et al. Deep learning applications and challenges in big data analytics[J]. Journal of Big Data, 2015, 2(1): 1-21. doi: 10.1186/s40537-014-0007-7.
    LI Huanyu, BI Duyan, YANG Yuan, et al. Research on visual tracking algorithm based on deep feature expression and learning[J]. Journal of Electronics & Information Technology, 2015, 37(9): 2033-2039. doi: 10.11999/JEIT150031.
    JIN J, DUNDAR A, BATES J, et al. Tracking with deep neural networks[C]. Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 2013: 213-217.
    WANG Naiyan and YEUNG Dityan. Learning a deep compact image representation for visual tracking[C]. Advances in Neural Information Processing Systems, South Lake Tahoe, Nevada, USA, 2013: 809-817.
    RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252. doi: 10.1007/s11263-015-0816-y.
    HOU Zhiqiang, DAI Bo, HU Dan, et al. Robust visual tracking via perceptive deep neural network[J]. Journal of Electronics & Information Technology, 2016, 38(7): 1616-1623. doi: 10.11999/JEIT151449.
    GLOROT X and BENGIO Y. Understanding the difficulty of training deep feedforward neural networks[C]. International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 2010: 249-256.
    GLOROT X, BORDES A, and BENGIO Y. Deep sparse rectifier neural networks[C]. International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 2011: 315-323.
    TOTH L. Phone recognition with deep sparse rectifier neural networks[C]. IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 2013: 6985-6989. doi: 10.1109/ICASSP.2013.6639016.
    VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11(6): 3371-3408.
    HE K, ZHANG X, REN S, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1026-1034.
    WU Y W, ZHAO H H, and ZHANG L Q. Image denoising with rectified linear units[J]. Lecture Notes in Computer Science, 2014, 8836: 142-149. doi: 10.1007/978-3-319-12643-2_18.
    LI X, HU W, SHEN C, et al. A survey of appearance models in visual object tracking[J]. ACM Transactions on Intelligent Systems and Technology, 2013, 4(4): 1-48. doi: 10.1145/2508037.2508039.
    WANG F S. Particle filters for visual tracking[C]. International Conference on Advanced Research on Computer Science and Information Engineering, Zhengzhou, China, 2011: 107-112.
    GRABNER H, GRABNER M, and BISCHOF H. Real-time tracking via on-line boosting[C]. British Machine Vision Conference, Edinburgh, Scotland, 2006: 47-56.
    ADAM A, RIVLIN E, and SHIMSHONI I. Robust fragments-based tracking using the integral histogram[C]. IEEE Conference on Computer Vision and Pattern Recognition, New York, USA, 2006: 798-805.
Metrics
  • Article views:  1464
  • Full-text HTML views:  145
  • PDF downloads:  299
  • Citations: 0
Publication History
  • Received:  2016-07-20
  • Revised:  2016-12-16
  • Published:  2017-05-19
