README.md: 24 additions, 9 deletions
@@ -64,15 +64,21 @@ The 1.x branch works with **PyTorch 1.6+**.

 - **Modular design**: We decompose a video understanding framework into different components. One can easily construct a customized video understanding framework by combining different modules.

-- **Support four major video understanding tasks**: MMAction2 implements various algorithms for multiple video understanding tasks, including action recognition, action localization, spatio-temporal action detection, and skeleton-based action detection. We support **27** different algorithms and **20** different datasets for the four major tasks.
+- **Support four major video understanding tasks**: MMAction2 implements various algorithms for multiple video understanding tasks, including action recognition, action localization, spatio-temporal action detection, and skeleton-based action detection.

 - **Well tested and documented**: We provide detailed documentation and API reference, as well as unit tests.

 ## What's New

-- (2022-10-11) We support **Video Swin Transformer** on Kinetics400 and additionally train a Swin-L model on Kinetics700 to extract video features for downstream tasks.
+**Release**: v1.0.0rc2 with the following new features:

-**Release**: v1.0.0rc1 was released in 14/10/2022. Please refer to [changelog.md](docs/en/notes/changelog.md) for details and release history.
+- We support **OmniSource** training on ImageNet and Kinetics datasets.
+- We support exporting spatio-temporal detection models to ONNX.
+- We support **STGCN++** on NTU-RGB+D.
+- We support **MViT V2** on Kinetics400 and Something-Something V2.
+- We refine our skeleton-based pipelines and support the joint training of multi-stream skeleton information, including **joint, bone, joint-motion, and bone-motion**.
+- We support **VideoMAE** on Kinetics400.
+- We support **C2D** on Kinetics400, achieving 73.57% Top-1 accuracy (higher than the 71.8% reported in the [paper](https://arxiv.org/abs/1711.07971)).

 ## Installation

@@ -88,17 +94,18 @@ Please refer to [install.md](https://mmaction2.readthedocs.io/en/1.x/get_started
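
The feature list in the hunk above centres on MMAction2's modular, config-driven design. As a rough illustration only (it is not part of this diff), the sketch below shows the usual way a recognizer is assembled from a config file and run on a video via the high-level Python API; the config, checkpoint, and video paths are placeholders, not files guaranteed to exist under those names.

```python
# Minimal sketch of MMAction2's config-driven usage (paths below are placeholders).
from mmaction.apis import inference_recognizer, init_recognizer

config_file = 'configs/recognition/tsn/tsn_r50_kinetics400.py'  # placeholder config path
checkpoint_file = 'checkpoints/tsn_r50_kinetics400.pth'         # placeholder checkpoint path
video_file = 'demo/demo.mp4'                                    # any short video clip

# The config composes the interchangeable modules (backbone, head, data pipeline, ...)
# that the "modular design" bullet refers to; swapping a module means editing the config.
model = init_recognizer(config_file, checkpoint_file, device='cpu')  # build model and load weights
result = inference_recognizer(model, video_file)                     # run the recognition pipeline
print(result)  # a data sample holding the predicted action scores
```

The other task families in the feature list are driven by their own configs in the same way.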