Commit 7129943

[WIP] Update README (#363)
* update README * udpate
1 parent 5941eba commit 7129943

File tree

1 file changed: +37 −16 lines changed
README.md

Lines changed: 37 additions & 16 deletions
@@ -24,33 +24,46 @@
   <a href="#License">License</a>
 </p>
 
-HyperPose is a library for building human pose estimation systems that can efficiently operate in the wild.
+# HyperPose
+
+HyperPose is a library for building high-performance custom pose estimation systems.
 
 ## Features
 
-HyperPose has two key features, which are not available in existing libraries:
+HyperPose has two key features:
 
-- **Flexible training platform**: HyperPose provides flexible Python APIs to provide a customise pipeline for developing various pose estimation models. HyperPose users can:
-  * make use of uniform pipelines for train,evaluation,visualization,pre-processing and post-processing across various models (e.g., OpenPose,Pifpaf,PoseProposal Network)
-  * customise model and dataset for their own use(e.g. user-defined model,user-defined dataset,mitiple dataset combination)
-  * parallel training using multiple GPUs(using *Kungfu* adaptive distribute training library)
+- **High-performance pose estimation with CPUs/GPUs**: HyperPose achieves real-time pose estimation through a high-performance pose estimation engine. This engine implements numerous system optimizations: pipeline parallelism, model inference with TensorRT, CPU/GPU hybrid scheduling, and many others. These allow HyperPose to run 4x FASTER than OpenPose and 10x FASTER than TF-Pose.
+- **Flexibility for developing custom pose estimation models**: HyperPose provides flexible Python APIs for building customised pipelines for various pose estimation models. HyperPose users can:
+  * Make use of uniform pipelines for training, evaluation, visualization, pre-processing, and post-processing across various models (e.g., OpenPose, PifPaf, PoseProposal Network)
+  * Customise models and datasets for their own use (e.g., user-defined models, user-defined datasets, multiple-dataset combinations)
+  * Train in parallel on multiple GPUs (using the *KungFu* adaptive distributed training library)
 thus building models specific to their real-world scenarios.
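The "uniform pipeline" idea above can be sketched in plain Python. This is an illustrative toy only: the `Pipeline` class and its method names are hypothetical and are not HyperPose's actual training API; it merely shows how shared pre- and post-processing can wrap any swappable model.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of a uniform pipeline (NOT the real HyperPose API):
# pre-processing and post-processing are shared; only the model is swapped.

@dataclass
class Pipeline:
    model: Callable[[List[float]], List[float]]  # swappable model stand-in

    def preprocess(self, frame: List[float]) -> List[float]:
        # Shared pre-processing, e.g. normalize values to [0, 1]
        peak = max(frame) or 1.0
        return [v / peak for v in frame]

    def postprocess(self, heatmap: List[float]) -> int:
        # Shared post-processing, e.g. pick the peak "keypoint" index
        return heatmap.index(max(heatmap))

    def predict(self, frame: List[float]) -> int:
        return self.postprocess(self.model(self.preprocess(frame)))

# Two different "models" reuse the same pipeline unchanged:
double = Pipeline(model=lambda x: [2 * v for v in x])
identity = Pipeline(model=lambda x: x)
print(double.predict([1.0, 4.0, 2.0]))    # → 1 (peak at index 1)
print(identity.predict([1.0, 4.0, 2.0]))  # → 1
```

Swapping `model` while keeping `preprocess`/`postprocess` fixed is the design choice the bullet list describes: one pipeline, many models.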
-- **High-performance pose estimation**: HyperPose achieves real-time pose estimation though a high-performance pose estimation engine. This engine implements numerous system optimizations: pipeline parallelism, model inference with TensorRT, CPU/GPU hybrid scheduling, and many others. This allows HyperPose to **run 4x FASTER than OpenPose and 10x FASTER than TF-Pose**.
 
-## Documentation
+## Quick Start
+
+The HyperPose library contains two parts:
+* A C++ library for high-performance pose estimation model inference.
+* A Python library for developing custom pose estimation models (e.g., OpenPose, PifPaf, PoseProposal).
 
-You can install HyperPose(Python Training Library, C++ inference Library) and learn its APIs through [HyperPose Documentation](https://hyperpose.readthedocs.io/en/latest/).
+### C++ inference library
 
-## Quick-Start with Docker
+The best way to try the inference library is using a [Docker image](https://hub.docker.com/r/tensorlayer/hyperpose). The pre-requisites for running this image are:
 
-The official docker image is on [DockerHub](https://hub.docker.com/r/tensorlayer/hyperpose).
+* [CUDA Driver](https://www.tensorflow.org/install/gpu) (> 10.2)
+* [NVIDIA docker](https://github.com/NVIDIA/nvidia-docker) (> TODO)
+* TODO: anything else?
 
-Make sure you have [docker](https://docs.docker.com/get-docker/) with [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) functionality installed.
+Once the pre-requisites are ready, you can pull the HyperPose docker image as follows:
+
+```bash
+docker pull tensorlayer/hyperpose
+```
 
-> Also note that your nvidia driver should be [compatible](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-title) with CUDA10.2.
+We provide 4 examples to run with this image (the following commands have been tested on Ubuntu 18.04):
 
 ```bash
-# [Example 1]: Doing inference on given video, copy the output.avi to the local path.
+# [Example 1]: Do inference on the given video and copy the output.avi to the local path.
 docker run --name quick-start --gpus all tensorlayer/hyperpose --runtime=stream
 docker cp quick-start:/hyperpose/build/output.avi .
 docker rm quick-start
@@ -73,11 +86,19 @@ xhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/t
 # docker run --rm --gpus all -it --entrypoint /bin/bash tensorlayer/hyperpose
 ```
 
-> For more details, please check [here](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/prediction.html#table-of-flags-for-hyperpose-cli).
+More information about the HyperPose Docker image can be found [here](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/prediction.html#table-of-flags-for-hyperpose-cli).
+
+### Python training library
+
+To install the Python training library, you can follow the steps [here](https://hyperpose.readthedocs.io/en/latest/markdown/install/training.html).
+
+## Documentation
+
+The APIs of the HyperPose training library and the inference library are both described in the [Documentation](https://hyperpose.readthedocs.io/en/latest/).
 
 ## Performance
 
-We compare the prediction performance of HyperPose with [OpenPose 1.6](https://github.com/CMU-Perceptual-Computing-Lab/openpose) and [TF-Pose](https://github.com/ildoonet/tf-pose-estimation). We implement the OpenPose algorithms with different configurations in HyperPose. The test-bed has Ubuntu18.04, 1070Ti GPU, Intel i7 CPU (12 logic cores).
+We compare the prediction performance of HyperPose with [OpenPose 1.6](https://github.com/CMU-Perceptual-Computing-Lab/openpose) and [TF-Pose](https://github.com/ildoonet/tf-pose-estimation). We implement the OpenPose algorithms with different configurations in HyperPose. The test-bed runs Ubuntu 18.04 with a 1070Ti GPU and an Intel i7 CPU (12 logical cores).
 
 | HyperPose Configuration | DNN Size | Input Size | HyperPose | Baseline |
 | --------------- | ------------- | ------------------ | ------------------ | --------------------- |

0 commit comments