Skeleton-based Action Recognition with Selective Multi-scale Graph Convolutional Network
doi: 10.11999/JEIT240702 cstr: 32379.14.JEIT240702
1. School of Mechanical Engineering, Jiangnan University, Wuxi 214122, China
2. Jiangsu Key Laboratory of Advanced Food Manufacturing Equipment and Technology, Jiangnan University, Wuxi 214122, China
Abstract: To address the problems that current skeleton-based action recognition methods neglect the multi-scale dependencies among skeleton joints and fail to make reasonable use of convolution kernels for temporal modeling, this paper proposes an action recognition model based on a Selective Multi-Scale Graph Convolutional Network (SMS-GCN). First, the construction principle of the human skeleton graph and the structure of the channel-wise topology refinement graph convolutional network are introduced. Second, pairwise-joint and multi-joint adjacency matrices are constructed to generate multi-scale channel-wise topology refinement adjacency matrices, which are combined with graph convolution to form a Multi-Scale Graph Convolution (MS-GC) module that models the multi-scale dependencies among skeleton joints. Then, building on multi-scale temporal convolution and the selective large-kernel network, a Selective Multi-Scale Temporal Convolution (SMS-TC) module is proposed to fully extract useful temporal contextual features; the MS-GC and SMS-TC modules are combined into the SMS-GCN model, which is trained with multi-stream data inputs. Finally, extensive experiments on the NTU-RGB+D and NTU-RGB+D 120 datasets show that the model captures more joint features, learns useful temporal information, and achieves excellent accuracy and generalization ability.
Keywords:
- Skeleton-based action recognition
- Graph Convolutional Network (GCN)
- Multi-scale channel-wise topology refinement adjacency matrix
- Selective multi-scale temporal convolution
- Selective multi-scale graph convolutional network
Abstract:
Objective: Human action recognition plays a key role in computer vision and has gained significant attention due to its broad range of applications. Skeleton data, derived from human action samples, is particularly robust to variations in camera viewpoint, illumination, and background occlusion, offering advantages over depth image and video data. Recent advances in skeleton-based action recognition using Graph Convolutional Networks (GCNs) have demonstrated effective extraction of the topological relationships within skeleton data. However, limitations remain in current GCN-based approaches: (1) many methods focus on the discriminative dependencies between pairs of joints and fail to capture the multi-scale discriminative dependencies across the entire skeleton; (2) some temporal modeling methods use dilated convolutions for simple feature fusion but do not employ convolution kernels in a manner suited to effective temporal modeling. To address these challenges, a selective multi-scale GCN is proposed for action recognition, designed to capture more joint features and learn valuable temporal information.
Methods: The proposed model consists of two key modules: a multi-scale graph convolution module and a selective multi-scale temporal convolution module. The multi-scale graph convolution module serves as the primary spatial modeling component. It generates a multi-scale, channel-wise topology refinement adjacency matrix to enhance the model's ability to learn multi-scale discriminative dependencies of skeleton joints, thereby capturing more joint features. Specifically, the pairwise joint adjacency matrix captures the interactive relationships between joint pairs, enabling the extraction of local motion details, while the multi-joint adjacency matrix emphasizes overall changes in action features, improving the model's spatial representation of the skeleton data. The selective multi-scale temporal convolution module is designed to capture valuable temporal contextual information and comprises three stages: feature extraction, temporal selection, and feature fusion. In the feature extraction stage, convolution and max-pooling operations are applied to obtain temporal features at different scales. The temporal selection stage then uses global max and average pooling to select salient features while preserving key details, generating temporal selection masks without directly fusing temporal features across scales and thus reducing redundancy. In the feature fusion stage, the output temporal feature is obtained by weighted fusion of the temporal features and the selection masks. Finally, by combining the two modules, the proposed model extracts multi-stream data from skeleton data and generates multiple prediction scores, which are fused by weighted summation to produce the final prediction.
Results and Discussions: Extensive experiments are conducted on two large-scale datasets, NTU-RGB+D and NTU-RGB+D 120, demonstrating the effectiveness and strong generalization performance of the proposed model. When the convolution kernel size in the multi-scale graph convolution module is set to 3, the model performs best, capturing more representative joint features (Table 1). The results (Table 4) show that the temporal selection stage is critical within the selective multi-scale temporal convolution module, significantly enhancing the model's ability to extract temporal contextual information. Ablation studies (Table 5) confirm the effectiveness of each component and highlight their contributions to recognition performance. The comparisons (Tables 6 and 7) demonstrate that the proposed model outperforms state-of-the-art methods, achieving superior recognition accuracy and strong generalization capability.
Conclusions: This study presents a selective multi-scale GCN model for skeleton-based action recognition. The multi-scale graph convolution module effectively captures the multi-scale discriminative dependencies of skeleton joints, enabling the model to extract more joint features. By selecting appropriate temporal convolution kernels, the selective multi-scale temporal convolution module extracts and fuses temporal contextual information, emphasizing useful temporal features. Experimental results on the NTU-RGB+D and NTU-RGB+D 120 datasets demonstrate that the proposed model achieves excellent accuracy and robust generalization performance.
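To make the spatial modeling described above concrete, the following is a minimal PyTorch sketch of a multi-scale, channel-wise topology-refined graph convolution. The abstract does not give implementation details, so the layer names, the pairwise embedding used to refine the topology per channel, and the use of k-hop powers of the skeleton adjacency as the multi-joint relation are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleCTRGC(nn.Module):
    """Sketch of a multi-scale channel-wise topology-refined graph convolution.

    A is the (V, V) normalized skeleton adjacency. The pairwise relation uses
    the 1-hop adjacency; the multi-joint relation is approximated here by its
    higher-hop powers, which is an assumption, not the paper's exact design.
    """
    def __init__(self, in_channels, out_channels, A, num_scales=3, rel_channels=8):
        super().__init__()
        hops = [torch.matrix_power(A, s + 1) for s in range(num_scales)]
        self.register_buffer('A_multi', torch.stack(hops))        # (S, V, V)
        self.num_scales = num_scales
        self.theta = nn.Conv2d(in_channels, rel_channels, 1)      # joint embedding (query-like)
        self.phi = nn.Conv2d(in_channels, rel_channels, 1)        # joint embedding (key-like)
        self.refine = nn.Conv2d(rel_channels, out_channels, 1)    # lift relation to output channels
        self.value = nn.Conv2d(in_channels, out_channels, 1)
        self.alpha = nn.Parameter(torch.zeros(1))                  # strength of the refinement

    def forward(self, x):
        # x: (N, C, T, V) skeleton features
        q = self.theta(x).mean(dim=2)                              # (N, R, V), pooled over time
        k = self.phi(x).mean(dim=2)                                # (N, R, V)
        rel = torch.tanh(q.unsqueeze(-1) - k.unsqueeze(-2))        # (N, R, V, V) pairwise joint differences
        rel = self.refine(rel)                                     # (N, C_out, V, V) channel-wise topology
        v = self.value(x)                                          # (N, C_out, T, V)
        out = 0
        for s in range(self.num_scales):
            A_s = self.A_multi[s] + self.alpha * rel               # refined (s+1)-hop topology
            out = out + torch.einsum('nctv,ncvw->nctw', v, A_s)
        return out
```

The refinement term follows the general channel-wise topology refinement idea of adding a learned, feature-dependent offset to a shared skeleton topology; the per-scale loop simply repeats that refinement over progressively wider joint neighborhoods.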
Table 1  Accuracy comparison of models with different convolution kernel sizes (%)
Model           Top-1   Top-5
SMS-GCN (k=3)   95.09   99.41
SMS-GCN (k=5)   95.00   99.43
SMS-GCN (k=7)   94.99   99.40
SMS-GCN (k=9)   94.93   99.36
Table 2  Accuracy comparison of different convolution kernel sizes
Model  (k1, k2)  (d1, d2)  Top-1 (%)  Params (M)  Time (s)
1      (1, 3)    (1, 1)    94.58      1.77        277
2      (1, 5)    (1, 1)    94.79      1.80        275
3      (1, 7)    (1, 1)    94.92      1.83        277
4      (1, 9)    (1, 1)    94.78      1.87        282
5      (1, 11)   (1, 1)    95.04      1.90        276
6      (3, 5)    (1, 1)    94.92      1.83        287
7      (3, 7)    (1, 1)    94.74      1.87        281
8      (3, 9)    (1, 1)    94.91      1.90        271
9      (3, 11)   (1, 1)    94.87      1.93        277
10     (5, 7)    (1, 1)    94.69      1.90        278
11     (5, 9)    (1, 1)    94.82      1.93        277
12     (5, 11)   (1, 1)    94.89      1.97        277
13     (7, 9)    (1, 1)    94.84      1.97        270
14     (7, 11)   (1, 1)    94.83      2.00        285
15     (9, 11)   (1, 1)    95.03      2.03        272
Table 3  Accuracy comparison of different convolution kernel sizes and dilation rates
Model  (k1, k2)  (d1, d2)  Top-1 (%)  Params (M)  Time (s)
1      (1, 1)    (1, 2)    92.95      1.74        747
2      (3, 3)    (1, 2)    94.69      1.80        886
3      (5, 5)    (1, 2)    94.89      1.87        865
4      (7, 7)    (1, 2)    94.66      1.93        854
5      (9, 9)    (1, 2)    95.09      2.00        735
6      (11, 11)  (1, 2)    94.97      2.06        767
7      (9, 9)    (1, 3)    95.07      2.00        761
8      (9, 9)    (1, 4)    95.09      2.00        1000
9      (9, 9)    (2, 3)    94.98      2.00        1466
10     (9, 9)    (2, 4)    94.96      2.00        1490
11     (9, 9)    (3, 4)    94.85      2.00        1479
Table 4  Accuracy comparison of models with different structures
Model                  Params (M)  Top-1 (%)
SMS-GCN                2.00        95.09
SMS-GCN (w/o SMS-TC)   3.76        94.46
SMS-GCN (w/o S)        1.96        94.97
SMS-GCN (w/o GMP)      2.00        94.90
SMS-GCN (w/o GAP)      2.00        94.93
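Tables 2-4 together suggest what the selective multi-scale temporal convolution roughly looks like: parallel temporal branches with kernel sizes (k1, k2) and dilations (d1, d2) plus a max-pooling branch, followed by a selection step built from max and average pooling (the GMP/GAP terms ablated in Table 4). Below is a minimal sketch of that idea; the defaults follow one of the best settings in Table 3, (k1, k2) = (9, 9) and (d1, d2) = (1, 2), and the selection head is modeled on an LSKNet-style selection adapted to the temporal axis, so its exact form is an assumption rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class SelectiveMultiScaleTemporalConv(nn.Module):
    """Sketch of a selective multi-scale temporal convolution (SMS-TC) block.

    The branch layout and the selection head (GMP + GAP -> per-branch masks)
    are reconstructed from the abstract and Table 4, and are assumptions.
    """
    def __init__(self, channels, kernel_size=9, dilations=(1, 2)):
        super().__init__()
        self.branches = nn.ModuleList()
        for d in dilations:                                        # dilated temporal convolutions
            pad = (kernel_size - 1) * d // 2
            self.branches.append(nn.Sequential(
                nn.Conv2d(channels, channels, (kernel_size, 1),
                          padding=(pad, 0), dilation=(d, 1)),
                nn.BatchNorm2d(channels),
            ))
        self.branches.append(nn.Sequential(                        # coarse max-pooling branch
            nn.MaxPool2d((3, 1), stride=1, padding=(1, 0)),
            nn.BatchNorm2d(channels),
        ))
        # selection head: 2 pooled descriptors -> one temporal mask per branch
        self.select = nn.Conv2d(2, len(self.branches), (7, 1), padding=(3, 0))

    def forward(self, x):
        # x: (N, C, T, V) skeleton features
        feats = [b(x) for b in self.branches]                      # S tensors of shape (N, C, T, V)
        u = torch.cat(feats, dim=1)                                # (N, S*C, T, V)
        gap = u.mean(dim=1, keepdim=True)                          # average-pooled descriptor
        gmp = u.amax(dim=1, keepdim=True)                          # max-pooled descriptor
        masks = torch.sigmoid(self.select(torch.cat([gap, gmp], dim=1)))   # (N, S, T, V)
        # weighted fusion: each branch is gated by its selection mask, then summed
        return sum(m.unsqueeze(1) * f for m, f in zip(masks.unbind(dim=1), feats))
```

In this reading, the "w/o S", "w/o GMP", and "w/o GAP" rows of Table 4 would correspond to removing the selection head, the max-pooled descriptor, and the average-pooled descriptor of this sketch, respectively.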
Table 5  Accuracy comparison of models with different modules added (%)
Model             Joint stream  Bone stream  Two-stream
CTR-GCN           94.74         94.70        96.07
CTR-GCN + MS-GC   94.91         94.90        96.30
CTR-GCN + SMS-TC  94.86         94.90        96.29
SMS-GCN           95.09         94.96        96.52
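The two-stream column in Table 5 comes from late fusion of separately trained streams: each stream produces class scores, and the scores are combined by weighted summation. A minimal sketch of that fusion is shown below; the stream names and the example weights are placeholders, since the paper's exact weights are not given here.

```python
import torch

def fuse_streams(stream_scores, weights=None):
    """Weighted summation of per-stream prediction scores (late fusion).

    stream_scores: dict mapping a stream name (e.g. 'joint', 'bone') to an
    (N, num_classes) score tensor from a separately trained model. Equal
    weights are used by default; the actual per-stream weights are assumed.
    """
    if weights is None:
        weights = {name: 1.0 for name in stream_scores}
    fused = sum(weights[name] * s for name, s in stream_scores.items())
    return fused, fused.argmax(dim=1)      # fused scores and predicted class per sample

# Hypothetical usage with the two streams reported in Table 5:
# fused, pred = fuse_streams({'joint': joint_scores, 'bone': bone_scores},
#                            weights={'joint': 0.6, 'bone': 0.4})
```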