We evaluate the accuracy of pose estimation models developed with HyperPose (mainly on the MSCOCO2017 dataset). The development environment is Ubuntu 16.04, with 4 V100-DGXs and 24 Intel Xeon CPUs. Training takes 1~2 weeks on 1 V100-DGX per model. (If you want to train from scratch, loading the pretrained backbone weights is recommended.)
**`docs/markdown/install/prediction.md`**
# C++ Prediction Library Installation
Note that the C++ prediction library requires NVIDIA GPU acceleration.

Though it is built to be platform-independent, the C++ library is mostly tested on Linux platforms, so we recommend building it on Linux.
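Before you start, it may help to confirm that your GPU, driver, and the NVIDIA container runtime are working. This is a generic sanity check rather than part of the official HyperPose instructions, and the `nvidia/cuda` image tag is only illustrative:

```bash
# Check that the driver sees your GPU and report the driver/CUDA versions.
nvidia-smi

# Check that Docker can reach the GPU (requires nvidia-container-toolkit);
# pick a CUDA image tag that matches your installed driver.
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```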
## Docker Environment Installation (RECOMMENDED)
To ease installation, you can use the HyperPose library in our Docker image, where the environment is pre-installed.
The official image is on [DockerHub](https://hub.docker.com/r/tensorlayer/hyperpose).

```bash
docker pull tensorlayer/hyperpose
# Dive into the image (connects the local camera and an imshow window).
xhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 --entrypoint /bin/bash tensorlayer/hyperpose
# For users who cannot access a camera or an X11 server, you may also use:
# docker run --rm --gpus all -it --entrypoint /bin/bash tensorlayer/hyperpose
```
Note that the entry point is the [`hyperpose-cli`](https://hyperpose.readthedocs.io) executable. To build the image yourself:

```bash
# Enter the repository folder.
USER_DEF_NAME=my_hyperpose
docker build -t ${USER_DEF_NAME} .
docker run --rm --gpus all ${USER_DEF_NAME}
```
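If you want the container to read your own videos or write results back to the host, a bind mount is a common pattern. This is an illustrative sketch: the `/hyperpose/data` container path is an assumption, not a documented mount point of the image:

```bash
# Mount a host directory into the container (the container-side path below is hypothetical).
docker run --rm --gpus all -v $(pwd)/data:/hyperpose/data ${USER_DEF_NAME}
```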
## Build From Source
> **About TensorRT installation**
>
> - For Linux users, we highly recommend installing it in a system-wide setting. You can install TensorRT7 via the [Debian distributions](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or the [NVIDIA network repo](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install) (the CUDA and cuDNN dependencies will be installed automatically); see the sketch after this note.
> - Each TensorRT version requires specific CUDA and cuDNN versions. For the exact CUDA and cuDNN requirements of TensorRT7, please refer to [the support matrix](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
> - Also, for Ubuntu 18.04 users, this [third-party blog post](https://ddkang.github.io/2020/01/02/installing-tensorrt.html) may help you.
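Once the NVIDIA repository is configured as described in the linked install guide, the package installation itself is short. A minimal sketch for Debian/Ubuntu, assuming the network repo from the guide above is already set up:

```bash
# Refresh the package index and install the TensorRT meta-package;
# matching CUDA/cuDNN packages are pulled in as dependencies.
sudo apt-get update
sudo apt-get install -y tensorrt

# Verify which TensorRT packages were installed.
dpkg -l | grep TensorRT
```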
**`docs/markdown/quick_start/prediction.md`**
```bash
sh scripts/download-test-data.sh

# cd into the git repo, then download the pre-trained models you want.
sh scripts/download-openpose-thin-model.sh # ~20 MB
sh scripts/download-tinyvgg-model.sh # ~30 MB (UFF model)
sh scripts/download-openpose-res50-model.sh # ~45 MB
sh scripts/download-openpose-coco-model.sh # ~200 MB
sh scripts/download-openpose-mobile-model.sh
sh scripts/download-tinyvgg-v2-model.sh
sh scripts/download-openpifpaf-model.sh # ~98 MB (OpenPifPaf)
sh scripts/download-ppn-res50-model.sh # ~50 MB (PoseProposal)
```
> You can also download them manually to `${HyperPose}/data/models/` via [this link](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing) **if the network is not working**.
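After downloading, you can sanity-check that the model files actually landed in the expected directory (the exact file names depend on which scripts you ran, so any listing is only indicative):

```bash
# List the downloaded models from the repository root.
ls data/models/
```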
Note that the entry point of our official Docker image is also `hyperpose-cli`.
### Convert models into TensorRT Engine Protobuf format
You may find that it takes one or two minutes before the real prediction starts. This is because TensorRT profiles the model to build an optimized runtime engine.
To avoid paying this conversion time on every run, you can pre-compile the model in advance.
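The flags below are assumptions for illustration, not confirmed options of `hyperpose-cli`; consult `./hyperpose-cli --help` for the real interface. A hypothetical warm-up run might look like:

```bash
# Inspect the supported options first.
./hyperpose-cli --help

# Hypothetical: the --model flag and the model file name are assumptions.
# Running prediction once triggers TensorRT profiling, after which the
# optimized engine can be reused by later runs.
./hyperpose-cli --model=data/models/TinyVGG-V1.uff --source=camera
```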
The output video will be in the build folder.

```bash
./hyperpose-cli --source=camera
# Note that camera mode is not compatible with the Stream API. If you want
# real-time inference on your camera, use the Operator API, which is designed for it.
```