
Moving Hand Segmentation Based on Multi-cues

RUAN Xiaogang, LIN Jia, YU Naigong, ZHU Xiaoqing, OUATTARA Sie

RUAN Xiaogang, LIN Jia, YU Naigong, ZHU Xiaoqing, OUATTARA Sie. Moving Hand Segmentation Based on Multi-cues[J]. Journal of Electronics & Information Technology, 2017, 39(5): 1088-1095. doi: 10.11999/JEIT160730


doi: 10.11999/JEIT160730 cstr: 32379.14.JEIT160730


Funds: 

The National Natural Science Foundation of China (61375086); the Key Project of the Science and Technology Plan of the Beijing Municipal Commission of Education (KZ201610005010)

  • Abstract: To segment a moving hand without relying on unreasonable assumptions, and to resolve hand-face occlusion, this paper proposes a segmentation method based on skin-color, grayscale, depth, and motion cues. First, the variance of grayscale and depth optical flow is used to adaptively extract a Motion Region of Interest (MRoI) that localizes the moving body parts. Then, corner points satisfying skin-color and adaptive motion constraints are detected within the MRoI and taken as skin seed points. Next, the seed points are grown into a candidate hand region according to skin-color, depth, and motion criteria. Finally, the moving hand region is segmented from the candidate region by means of edge depth gradients, skeleton extraction, and optimal path search. Experimental results show that, under varied conditions and especially under hand-face occlusion, the method segments the moving hand region effectively and accurately.
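The region-growing step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a precomputed binary skin mask and a depth map, and collapses the paper's skin-color/depth/motion criteria into a single hypothetical rule that a neighboring pixel joins the region if it is skin-colored and its depth differs from the current pixel's by at most `depth_tol`.

```python
import numpy as np
from collections import deque

def grow_hand_region(depth, skin_mask, seeds, depth_tol=30):
    """Grow a candidate hand region from skin seed points via BFS.

    depth     : 2D integer depth map (e.g. millimeters)
    skin_mask : 2D boolean mask of skin-colored pixels
    seeds     : list of (row, col) skin seed points
    depth_tol : max depth difference (hypothetical stand-in for the
                paper's combined skin/depth/motion growth criteria)
    """
    h, w = depth.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for y, x in seeds:
        region[y, x] = True
    while queue:
        y, x = queue.popleft()
        # 4-connected neighborhood
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if skin_mask[ny, nx] and \
                   abs(int(depth[ny, nx]) - int(depth[y, x])) <= depth_tol:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region
```

Because growth only crosses small depth steps, a skin-colored face lying on a different depth plane than the hand is not absorbed into the candidate region, which is the intuition behind using depth as a growth cue.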
Publication History
  • Received: 2016-07-08
  • Revised: 2017-01-03
  • Published: 2017-05-19
