Use with: mocap_importer
Wraps the code repositories of various motion-capture papers and research projects to provide a unified CLI, simplifying their installation and execution.
Only tested on Linux. Not stable yet.
Sincere thanks to the GVHMR / WiLoR / wilor-mini developers, and to everyone who helps each other.
> [!CAUTION]
> TODO in v0.3:
> - support Dyn-HaMR, gvhmr-realtime
> - PR udocker for RESTful over Socket/TCP in the podman standard
> - support MCP / fastAPI
> - WiLoR continuous prediction
> - PR: make cockpit compatible with webtui + chawan (PR for websocket support) in the TUI
> - cockpit progress addon (app) with progress, task-spooler, ...
> - progress via D-Bus: org.kde.JobTracker (https://invent.kde.org/frameworks/kjobwidgets/-/merge_requests/6)
> - new (one-shot) task, pause task if pausable, stop task, task priority (CPU/GPU hang, network traffic speed limit)
| Feature | Status |
|---|---|
| 🖥 Models | GVHMR, WiLoR |
| 🐧 Linux | 🚧 Implementing |
| 🪟 Windows | ❓ Needs testing |
| 🍎 macOS | ❓ |
| 📔 Jupyter Notebook | ❓ |
| 🤖 MCP | 🚧 |
| 🚀 Mirror acceleration for mainland China | ✅ |
Legend: 🕺 body, 👋 hand, 👤 face, 文 text-to-motion
| model | paper | commit | issue | comment |
|---|---|---|---|---|
| ✅🕺 GVHMR | 2024 SIGGRAPH Asia | | | VRAM > 3 GB |
| 🚧🕺 TRAM | 2024 | | | suited for fast motion, but VRAM > 6 GB |
| 🕒🕺 CoMotion | 2025 | | | by Apple |
| 🕒🕺 SAT-HMR | 2025 | | | |
| ✅👋 WiLoR | 2024 | | | uses wilor-mini; fast, VRAM > 2.5 GB |
| 🚧👋 Dyn-HaMR | 2025 CVPR Highlight | | | |
| 🕒👋 HaWoR | 2025 CVPR Highlight | | | for AR/VR headsets |
| 🕒👋 Hamba | 2025 NeurIPS | | | |
| 🕒👋 OmniHands | 2024 | | | |
| 🕒👋 HaMeR | 2023 | | | |
| 🕒👋 HOISDF | 2024 | | | better under occlusion |
| 🕒👤 SPIGA | 2022 | | | |
| 🕒👤 mediapipe | | | | realtime |
| 🕒文🎵 MotionAnything | 2025 | | | waiting for code release |
| 🕒文 momask-codes | 2024 | | | |
- hand: no continuous tracking across video frames (no YOLO-based tracking; ready for photos but not video)
- 🕺👋👤-文🎵 Genmo (Nvidia Lab)
- 🕺👋👤Look Ma, no markers: holistic performance capture without the hassle
- 👤D-ViT
| Part | Solution | Output |
|---|---|---|
| 👤 face | 🍎 iFacialMocap (iPhone X + PC (Win/Mac)), 🤖 MeowFace (free, Android, works with the iFacialMocap PC client) | Shape key |
| 👤 face | 🍎+💻 Unreal Live Link | Bone |
| 👋🕺 hand/body | VR headset or VR trackers | |
- 🍎 iPhone ≥ X (12/13 are best) gives better face-mocap results with UE Live Link, though you can also use Android 🤖 for Live Link.
The scripts will smartly skip or update pixi, uv, mocap-wrapper, 7z, aria2c, ffmpeg, and git if they are already installed or on the system `$PATH`.
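For reference, a minimal sketch of such a check (a hypothetical helper, not the actual installer code), using Python's `shutil.which` to test `$PATH`:

```python
# Hypothetical sketch: report which tools still need installing (not the real installer code)
import shutil

TOOLS = ["pixi", "uv", "7z", "aria2c", "ffmpeg", "git"]

def missing_tools(tools: list[str] = TOOLS) -> list[str]:
    """Return the tools that are not found on $PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    print(missing_tools())  # e.g. ['aria2c'] -> only these would be installed
```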
```bash
# sudo -i; bash <(curl -sSL https://gitee.com/SuperManito/LinuxMirrors/raw/main/ChangeMirrors.sh) # one-click Linux mirror setup (optional)
curl https://raw.githubusercontent.com/AClon314/mocap-wrapper/refs/heads/master/src/mocap_wrapper/install/pixi.py | python -- -y
mocap --install -b gvhmr,wilor
```

The Python scripts are equivalent to the following:
```bash
#!/bin/bash
set -euo pipefail
# 1. pixi.py
curl -fsSL https://pixi.sh/install.sh | sh
pixi global install uv
uv python install
~/.pixi/bin/uv pip install git+https://github.com/AClon314/mocap-wrapper
# 2. mocap --install -b ''
sudo apt install p7zip-full aria2 ffmpeg git # or: pixi global install 7z aria2 ffmpeg git
git clone https://github.com/zju3dv/GVHMR
aria2c hmr4d.ckpt # download the pre-trained checkpoint
# 3. mocap-wrapper lives in the uv env; gvhmr/wilor... each live in a separate pixi env
. ~/.venv/bin/activate
mocap -i input.mp4 -b gvhmr
cd $SEARCH_DIR/GVHMR
pixi run run/gvhmr.py
```

```mermaid
%%{init:{'flowchart':{'padding':0, 'htmlLabels':false}, 'htmlLabels':false, 'theme':'base', 'themeVariables':{'primaryColor':'#fff','clusterBkg':'#fff','edgeLabelBackground':'#fff','lineColor':'#888','primaryTextColor':'#000','primaryBorderColor':'#000','secondaryTextColor':'#000', 'clusterBorder':'#888','tertiaryTextColor':'#000'} }}%%
graph TD
pkgs["7z,aria2c,ffmpeg, podman/udocker"]
ai["gvhmr,wilor..."]
pixi --global install--> uv
uv --~/.venv--> mocap
mocap --global install--> pkgs
pkgs -.container.-> ai
```

See `mocap -h` for more options.
```bash
mocap -i input.mp4
mocap -i input.mp4 -b gvhmr,wilor -o outdir
```

A useful data-visualization tool for expanding `.pt`/`.npy`/`.npz` files.
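If you only need a quick look at the arrays without a dedicated tool, a minimal numpy sketch works (the file name `output.npz` is an assumption, not a fixed output path):

```python
# Hypothetical quick inspection of a .npz result file
import numpy as np

data = np.load("output.npz", allow_pickle=True)  # file name is an assumption
for key in data.files:
    arr = data[key]
    print(key, arr.shape, arr.dtype)
```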
You should read the following if you want to modify the code.
```bash
LOG=debug mocap -I
# docker build -t mocap -f docker/Dockerfile .
podman build -t mocap -f docker/Dockerfile . --security-opt label=disable
# run the github action locally
act -j test -v --action-offline-mode --bind --reuse --env LOG=D # --rm=false
```

key: `Armature mapping from;Algorithm run;who;begin;prop[0];prop[1]...`
example:
- `smplx;gvhmr;person0;0;body_pose = array([...], dtype=...)`
- `smplx;wilor;person1;9;hand_pose = ...`
- `smplx;wilor;person1;1;bbox = ...`
P.S.: the Blender addon uses this armature mapping when importing. Property glossary:
- pose: thetas, θ
- betas: shape, β
- expression: psi, ψ
- trans: translation
- global_orient: global rotation
- bbox: yolo_bbox_xyXY
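For illustration, a hypothetical parser for one such key (a sketch, not part of the addon; the key string comes from the examples above):

```python
# Hypothetical parser for the `;`-separated data keys described above
def parse_key(key: str) -> dict:
    # maxsplit=4 keeps any extra `;`-joined prop segments (prop[1]...) together
    mapping, run_by, who, begin, prop = key.split(";", maxsplit=4)
    return {
        "mapping": mapping,   # armature mapping, e.g. smplx
        "run_by": run_by,     # algorithm that produced it, e.g. gvhmr
        "who": who,           # person id, e.g. person0
        "begin": int(begin),  # start frame
        "prop": prop,         # property name, e.g. body_pose
    }

print(parse_key("smplx;gvhmr;person0;0;body_pose"))
```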
By using this repository, you must also comply with the terms of these external licenses:
| Repo | License |
|---|---|
| GVHMR | Copyright 2022-2023 3D Vision Group at the State Key Lab of CAD&CG, Zhejiang University. All Rights Reserved. |
| WiLoR | see the upstream repository |
| mocap-wrapper | AGPL v3 |
- 2025.12.08:
  The month without updates was spent exploring cross-platform options for the CLI/TUI/webUI. What I worked out:
  - CLI: Python argparse + argcomplete; client-side arguments are converted into pydantic HTTP requests
  - fastAPI for server-side task scheduling / network requests
  - podman/udocker called via CLI, or JSON-RPC communication
  - TUI: the chawan web browser (written in Nim; importing the websocket C library worked, but I couldn't get the Nim language-server type hints to work, so I gave up)
  - webUI: webTUI CSS theme; cockpit webUI, which ships with simple built-in authentication; a cockpit task-queue plugin.
    All in all, this is my final vision for the project, and it already involves five large pieces of work. My solo development time is limited, so I decided to get the project running first and iterate on these important-but-not-urgent requirements later.


