Commit 67bfd49

Docker enhancement and update doc (#359)

* feat: enable pifpaf inference support
* refact: rm comments
* feat: openpifpaf decoder finalized
* fix: ci
* fix: license badge
* fix: gcc7 compilation
* fix: conflicts w/ master branch ++
* feat: pifpaf with larger resolution
* feat: update model installation
* refact: update C++ doc
* refact: update dockerfile and doc

Parent: 50e9761

16 files changed (+102, -37 lines)

.gitignore

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+pre_installed_models
 __pycache__
 .DS_Store
 .idea

Dockerfile

Lines changed: 16 additions & 5 deletions

@@ -5,8 +5,11 @@
 # Based on CUDA11.0 & CuDNN8
 FROM nvidia/cuda:10.2-devel-ubuntu18.04
 
+# Test connection
+RUN apt update --allow-unauthenticated && apt install -y wget && wget www.google.com
+
 # Install Non-GPU Dependencies.
-RUN apt update --allow-unauthenticated && version="7.0.0-1+cuda10.2" ; \
+RUN version="7.0.0-1+cuda10.2" ; \
 	apt install -y \
 	libnvinfer7=${version} libnvonnxparsers7=${version} libnvparsers7=${version} \
 	libnvinfer-plugin7=${version} libnvinfer-dev=${version} libnvonnxparsers-dev=${version} \
@@ -15,6 +18,9 @@ RUN apt update --allow-unauthenticated && version="7.0.0-1+cuda10.2" ; \
 	apt-mark hold \
 	libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python-libnvinfer python3-libnvinfer
 
+# Set apt-get to automatically retry if a package download fails
+RUN echo 'Acquire::Retries "5";' > /etc/apt/apt.conf.d/99AcquireRetries
+
 # Install OpenCV Dependencies
 RUN apt install -y software-properties-common || apt install -y software-properties-common && \
 	add-apt-repository "deb http://security.ubuntu.com/ubuntu xenial-security main" && \
@@ -23,8 +29,8 @@ RUN apt install -y software-properties-common || apt install -y software-propert
 	python3 -m pip install numpy
 
 # Compile OpenCV
-RUN git clone --branch 4.4.0 https://github.com/opencv/opencv.git && \
-	cd opencv && mkdir build && cd build && \
+RUN apt install -y zip && wget https://github.com/opencv/opencv/archive/refs/tags/4.4.0.zip && unzip 4.4.0.zip && \
+	cd opencv-4.4.0 && mkdir build && cd build && \
 	cmake .. -DCMAKE_BUILD_TYPE=Release \
 	-DCMAKE_INSTALL_PREFIX=/usr/local \
 	-DWITH_TBB=ON \
@@ -38,7 +44,12 @@ RUN apt install -y python3-dev python3-pip subversion libgflags-dev
 
 COPY . /hyperpose
 
-# Download related data
+# Get models
+# NOTE: if you cannot install the models due to network issues:
+# 1 Manually install ONNX and UFF models through: https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR
+# 2 Put all models into `${GIT_DIR}/pre_installed_models`
+# 3.1 `RUN /hyperpose/scripts/download-test-data.sh`
+# 3.2 `RUN mv /hyperpose/pre_installed_models /hyperpose/data/models`
 RUN for file in $(find /hyperpose/scripts -type f -iname 'download*.sh'); do sh $file; done
 
 # Build Repo
@@ -47,4 +58,4 @@ RUN cd hyperpose && mkdir build && cd build && \
 
 WORKDIR /hyperpose/build
 
-ENTRYPOINT ["./hyperpose-cli"]
+ENTRYPOINT ["./hyperpose-cli"]
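The `RUN for file in $(find …)` line above executes every download script baked into the image. A minimal sketch of the same idiom, exercised against a throwaway directory instead of `/hyperpose/scripts` (the stand-in scripts and paths here are illustrative, not part of HyperPose):

```shell
#!/bin/sh
set -e
demo=$(mktemp -d)
mkdir -p "$demo/scripts"
# Stand-in download scripts; the real ones fetch models via downloader.py.
echo 'echo "fetched model A"' > "$demo/scripts/download-a-model.sh"
echo 'echo "fetched model B"' > "$demo/scripts/download-b-model.sh"
# Same pattern as the Dockerfile: run every download*.sh found under scripts/.
# Note: the unquoted $(find ...) relies on paths containing no whitespace,
# which holds inside the image but is worth knowing when reusing the idiom.
for file in $(find "$demo/scripts" -type f -iname 'download*.sh'); do sh "$file"; done
rm -rf "$demo"
```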

README.md

Lines changed: 2 additions & 2 deletions

@@ -50,7 +50,7 @@ Make sure you have [docker](https://docs.docker.com/get-docker/) with [nvidia-do
 ```bash
 # [Example 1]: Doing inference on given video, copy the output.avi to the local path.
-docker run --name quick-start --gpus all tensorlayer/hyperpose --runtime=stream
+docker run --rm --name quick-start --gpus all tensorlayer/hyperpose --runtime=stream
 docker cp quick-start:/hyperpose/build/output.avi .
 docker rm quick-start
@@ -84,7 +84,7 @@ We compare the prediction performance of HyperPose with [OpenPose 1.6](https://g
 | OpenPose (TinyVGG) | 34.7 MB | 384 x 256 | **124.925 FPS** | N/A |
 | OpenPose (MobileNet) | 17.9 MB | 432 x 368 | **84.32 FPS** | 8.5 FPS (TF-Pose) |
 | OpenPose (ResNet18) | 45.0 MB | 432 x 368 | **62.52 FPS** | N/A |
-| OpenPifPaf (ResNet50) | 97.6 MB | 97 x 129 | **178.6 FPS** | 35.3 |
+| OpenPifPaf (ResNet50) | 97.6 MB | 432 x 368 | **44.16 FPS** | 14.5 FPS (OpenPifPaf) |
 
 ## Accuracy
 We evaluate accuracy of pose estimation models developed by hyperpose (mainly over Mscoco2017 dataset). the development environment is Ubuntu16.04, with 4 V100-DGXs and 24 Intel Xeon CPU. The training procedure takes 1~2 weeks using 1 V100-DGX for each model. (If you want to train from strach, loading the pretrained backbone weight is recommended.)

docs/markdown/install/prediction.md

Lines changed: 9 additions & 4 deletions

@@ -1,6 +1,11 @@
 # C++ Prediction Library Installation
 
-## Docker Environment Installation
+Note that C++ prediction library requires NVidia GPU acceleration.
+
+Thought it is built to be platform-independent, the C++ library is mostly tested on Linux Platforms.
+So we recommend you to build it on Linux platforms.
+
+## Docker Environment Installation (RECOMMENDED)
 
 To ease the installation, you can use HyperPose library in our docker image where the environment is pre-installed.
 
@@ -15,7 +20,7 @@ The official image is on [DockerHub](https://hub.docker.com/r/tensorlayer/hyperp
 docker pull tensorlayer/hyperpose
 
 # Dive into the image. (Connect local camera and imshow window)
-xhost +; docker run --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 --entrypoint /bin/bash tensorlayer/hyperpose
+xhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 --entrypoint /bin/bash tensorlayer/hyperpose
 # For users that cannot access a camera or X11 server. You may also use:
 # docker run --rm --gpus all -it --entrypoint /bin/bash tensorlayer/hyperpose
 ```
@@ -28,7 +33,7 @@ Note that the entry point is the [`hyperpose-cli`](https://hyperpose.readthedocs
 # Enter the repository folder.
 USER_DEF_NAME=my_hyperpose
 docker build -t $(USER_DEF_NAME) .
-docker run --gpus all $(USER_DEF_NAME)
+docker run --rm --gpus all $(USER_DEF_NAME)
 ```
 
 ## Build From Source
@@ -46,7 +51,7 @@ docker run --gpus all $(USER_DEF_NAME)
 
 > **About TensorRT installation**
 >
-> - For Linux users, you highly recommended to install it in a system-wide setting. You can install TensorRT7 via the [debian distributions](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or [NVIDIA network repo](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install)(CUDA and CuDNN dependency will be automatically installed).
+> - For Linux users, you are highly recommended to install it in a system-wide setting. You can install TensorRT7 via the [debian distributions](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or [NVIDIA network repo](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install)(CUDA and CuDNN dependency will be automatically installed).
 > - Different TensorRT version requires specific CUDA and CuDNN version. For specific CUDA and CuDNN requirements of TensorRT7, please refer to [this](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
 > - Also, for Ubuntu 18.04 users, this [3rd party blog](https://ddkang.github.io/2020/01/02/installing-tensorrt.html) may help you.
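One detail worth flagging in the build-from-source snippet of this file: in POSIX shell, `$(USER_DEF_NAME)` is command substitution (it tries to run a command of that name), while `${USER_DEF_NAME}` is the variable expansion the example presumably intends. A minimal sketch of the difference:

```shell
#!/bin/sh
USER_DEF_NAME=my_hyperpose
echo "${USER_DEF_NAME}"   # variable expansion: prints my_hyperpose
echo "$(echo probe)"      # command substitution: runs a command, prints probe
# "$(USER_DEF_NAME)" would attempt to execute a command named USER_DEF_NAME
# and fail with "command not found" unless such a command actually exists.
```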

docs/markdown/quick_start/prediction.md

Lines changed: 10 additions & 7 deletions

@@ -27,10 +27,14 @@ sh scripts/download-test-data.sh
 # cd to the git repo. And download pre-trained models you want.
 
 sh scripts/download-openpose-thin-model.sh # ~20 MB
-sh scripts/download-tinyvgg-model.sh # ~30 MB
+sh scripts/download-tinyvgg-model.sh # ~30 MB (UFF model)
 sh scripts/download-openpose-res50-model.sh # ~45 MB
 sh scripts/download-openpose-coco-model.sh # ~200 MB
-sh scripts/download-ppn-res50-model.sh # ~50 MB (PoseProposal Algorithm)
+sh scripts/download-openpose-mobile-model.sh
+sh scripts/download-tinyvgg-v2-model.sh
+sh scripts/download-openpose-mobile-model.sh
+sh scripts/download-openpifpaf-model.sh # ~98 MB (OpenPifPaf)
+sh scripts/download-ppn-res50-model.sh # ~50 MB (PoseProposal)
 ```
 
 > You can download them manually to `${HyperPose}/data/models/` via [LINK](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing) **if the network is not working**.
@@ -77,20 +81,20 @@ Note that the entry point of our official docker image is also `hyperpose-cli` i
 ```bash
 ./hyperpose-cli --model ../data/models/openpose-thin-V2-HW=368x432.onnx --w 432 --h 368
 
-./hyperpose-cli --model ../data/models/openpose-coco-V2-HW=368x656.onnx --w 656 --h 368
+./hyperpose-cli --model ../data/models/openpose-coco-V2-HW=368x656.onnx --w 656 --h 368
 ```
 
-### Use PoseProposal model
+### Use PifPaf model
 
 ```bash
-./hyperpose-cli --model ../data/models/ppn-resnet50-V2-HW=384x384.onnx --w 384 --h 384 --post=ppn
+./hyperpose-cli --model ../data/models/openpifpaf-resnet50-HW=368x432.onnx --w 368 --h 432 --post pifpaf
 ```
 
 ### Convert models into TensorRT Engine Protobuf format
 
 You may find that it takes one or two minutes before the real prediction starts. This is because TensorRT will try to profile the model to get a optimized runtime model.
 
-To save the model conversion time, you can convert it in advance.
+To save the model conversion time, you can pre-compile it in advance.
 
 ```bash
 ./example.gen_serialized_engine --model_file ../data/models/openpose-coco-V2-HW=368x656.onnx --input_width 656 --input_height 368 --max_batch_size 20
@@ -124,4 +128,3 @@ The output video will be in the building folder.
 ./hyperpose-cli --source=camera
 # Note that camera mode is not compatible with Stream API. If you want to do inference on your camera in real time, the Operator API is designed for it.
 ```
-
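The released model filenames in this file encode the network input size as `HW=<height>x<width>`, and the `--h`/`--w` flags passed to `hyperpose-cli` mirror it (e.g. `HW=368x432` pairs with `--h 368 --w 432`). A small sketch of deriving the flags from a filename with POSIX parameter expansion; this helper is our own illustration, not part of HyperPose:

```shell
#!/bin/sh
# Hypothetical helper: pull --h/--w out of the HW=<height>x<width> token
# that the released HyperPose model filenames carry.
model="openpifpaf-resnet50-HW=368x432.onnx"
hw=${model##*HW=}   # strip everything through "HW="  -> "368x432.onnx"
hw=${hw%%.*}        # drop the extension              -> "368x432"
h=${hw%x*}          # height is the part before "x"   -> "368"
w=${hw#*x}          # width is the part after "x"     -> "432"
echo "--h $h --w $w"
```

Using the same expansion on `openpose-coco-V2-HW=368x656.onnx` yields `--h 368 --w 656`, matching the flags shown in the snippet above.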
docs/markdown/tutorial/prediction.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -57,7 +57,7 @@ using namespace hyperpose;
5757

5858
// To use a Uff model, users needs to specify the input/output nodes.
5959
// Here, `image` is the input node name, and `outputs/conf` and `outputs/paf` are the output feature maps. (related to the PAF algorithm)
60-
const dnn::uff uff_model{ "../data/models/hao28-600000-256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
60+
const dnn::uff uff_model{ "../data/models/TinyVGG-V1-HW=256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
6161
```
6262
6363
### Create Input / Output Stream
@@ -208,4 +208,4 @@ for (size_t i = 0; i < batch.size(); ++i) {
208208

209209
### Full example
210210

211-
Full examples are available [here](../design/design.md).
211+
Full examples are available [here](../design/design.md).
scripts/download-openpifpaf-model.sh (new file)

Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
+#!/bin/sh
+
+set -e
+
+[ "$(command -v gdown)" ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
+
+model_name="openpifpaf-resnet50-HW=368x432.onnx"
+
+BASEDIR=$(realpath "$(dirname "$0")")
+cd "$BASEDIR"
+mkdir -p ../data/models
+cd ../data/models
+
+python3 "$BASEDIR/downloader.py" --model $model_name
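The guard on the scripts' first active line was also tightened across the repo: `command -v` replaces `which` (it is POSIX-specified, a shell builtin, and has a reliable exit status), and the command substitution is quoted so the `[ … ]` test always receives exactly one argument. A sketch of the pattern, with the real `pip install` swapped for a harmless echo so it runs anywhere:

```shell
#!/bin/sh
# The check-or-install guard from the download scripts, with the pip install
# replaced by an echo so this sketch has no side effects.
ensure_cmd() {
    # Quoted "$(command -v ...)" expands to a single empty argument when the
    # command is missing, so the test is well-formed either way; an unquoted
    # `[ $(which ...) ]` can misbehave when the output is empty.
    [ "$(command -v "$1")" ] || echo "would install $1 here"
}
ensure_cmd sh                  # present: guard passes silently
ensure_cmd surely-missing-xyz  # absent: prints the fallback message
```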

scripts/download-openpose-coco-model.sh

Lines changed: 3 additions & 3 deletions

@@ -2,12 +2,12 @@
 
 set -e
 
-[ $(which gdown) ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
+[ "$(command -v gdown)" ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
 
 model_name="openpose-coco-V2-HW=368x656.onnx"
 
-BASEDIR=$(realpath "$(dirname $0)")
-cd $BASEDIR
+BASEDIR=$(realpath "$(dirname "$0")")
+cd "$BASEDIR"
 mkdir -p ../data/models
 cd ../data/models
 

scripts/download-openpose-mobile-model.sh (new file)

Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
+#!/bin/sh
+
+set -e
+
+[ "$(command -v gdown)" ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
+
+model_name="openpose-mobile-HW=342x368.onnx"
+
+BASEDIR=$(realpath "$(dirname "$0")")
+cd "$BASEDIR"
+mkdir -p ../data/models
+cd ../data/models
+
+python3 "$BASEDIR/downloader.py" --model $model_name

scripts/download-openpose-res50-model.sh

Lines changed: 3 additions & 3 deletions

@@ -2,12 +2,12 @@
 
 set -e
 
-[ $(which gdown) ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
+[ "$(command -v gdown)" ] || (echo "Downloading gdown via PIP" && python3 -m pip install gdown -U)
 
 model_name="lopps-resnet50-V2-HW=368x432.onnx"
 
-BASEDIR=$(realpath "$(dirname $0)")
-cd $BASEDIR
+BASEDIR=$(realpath "$(dirname "$0")")
+cd "$BASEDIR"
 mkdir -p ../data/models
 cd ../data/models
 
