2 changes: 1 addition & 1 deletion README.md
@@ -166,7 +166,7 @@ We provide a sample training script with cli located at [train.py](https://githu

Evaluating a model using hyperpose is similar to the training procedure; we also provide a sample evaluation script with a CLI, located at [eval.py](https://github.com/tensorlayer/hyperpose/blob/master/eval.py), as an example and a template for modification.

More information about the Hyperpose training library usage can be found [here](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/training.html).


## Documentation
50 changes: 30 additions & 20 deletions docs/markdown/install/training.md
@@ -28,15 +28,19 @@
```bash
conda install cudnn=7.6.0
```

After configuring and activating the conda environment, we can begin to install hyperpose.<br>

(I) The first method is to put the hyperpose python module in the working directory (recommended).<br>
After git-cloning the source [repository](https://github.com/tensorlayer/hyperpose.git), you can directly import the hyperpose python library under the root directory of the cloned repository.<br>

To make the import work, you should first install the prerequisite dependencies.<br>
You can either install them according to the requirements.txt in the [repository](https://github.com/tensorlayer/hyperpose.git):
```bash
# install according to the requirements.txt
pip install -r requirements.txt
```

or install the libraries one by one:

```bash
# >>> install tensorflow of version 2.3.1
pip install tensorflow-gpu==2.3.1
# ...
pip install pycocotools
pip install matplotlib
```

This method of installation uses the latest source code and thus is less likely to run into compatibility problems.<br><br>
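Concretely, method (I) works because the `hyperpose/` package folder sits at the repository root: run your scripts from that directory, or add it to `sys.path` as in the sketch below (the path is a placeholder for wherever you cloned the repository).

```python
# Sketch: make the cloned repository importable from any working directory.
# "/path/to/hyperpose" is a placeholder for the root of the cloned repo
# (the directory that contains the hyperpose/ package folder).
import sys

sys.path.insert(0, "/path/to/hyperpose")

import hyperpose
print(hyperpose.__file__)  # should point inside the cloned repository
```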

(II) The second method is to install hyperpose from the pypi repository.<br>
We have already uploaded the hyperpose python library to pypi, so you can install it using pip, which gives you the latest stable version.

```bash
pip install hyperpose
```

This will download and install all dependencies automatically.

Now that the dependent libraries and hyperpose itself are installed, let's check whether the installation succeeded.
Run the following command in bash to start a python interpreter:

@@ -69,30 +77,32 @@
```bash
python
```
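Inside the interpreter, a minimal sanity check might look like the following sketch; it only assumes the packages installed above import cleanly.

```python
# Sketch of an installation check: if these imports succeed without errors,
# the environment set up above is usable.
import tensorflow as tf
import hyperpose

print(tf.__version__)      # expected: 2.3.1, as installed above
print(hyperpose.__file__)  # shows which hyperpose package was picked up
```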

## Extra configuration for exporting model
The hyperpose python training library handles the whole pipeline for developing a pose estimation system, including training, evaluating and testing. Its goal is to produce a **.npz** file that contains the well-trained model weights.

For the training platform, the environment configuration above is enough. However, most inference engines, such as [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html), only accept models in .pb or .onnx format.

> **Contributor:** Would mentioning our inference lib make more sense (instead of TensorRT)?


Thus, one needs to convert the trained model, loaded with the **.npz** weight file, into **.pb** or **.onnx** format for further deployment, which requires the extra configuration below:<br>

> **(I)Convert to .pb format:**<br>
To convert the model into .pb format, we use *@tf.function* to decorate the *infer* function of each model class, so we can use the *get_concrete_function* function from tensorflow to construct the frozen model computation graph and then save it in .pb format.

We already provide a script with a CLI to facilitate the conversion, located at [export_pb.py](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py). The only library we need here is *tensorflow*, which we already installed.

> **Contributor:** In a future version, I think we'd better put everything related to the python training lib into the '/hyperpose' folder, and utility scripts into '/script', because putting all those .py files at the top level is very confusing.
>
> For example, I made some useful scripts in the '/script' folder. Also, you may want to verify them if you make any changes.
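As a sketch of the *@tf.function* / *get_concrete_function* route described in (I), the snippet below freezes a tiny stand-in network instead of a real hyperpose model; shapes, file names and the network itself are illustrative only, and [export_pb.py](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py) remains the supported way to do this.

```python
# Sketch: freeze a tf.function-decorated forward pass and save it as a .pb file.
# The Sequential net stands in for a hyperpose model whose weights were loaded
# from the trained .npz file; input shape and output paths are placeholders.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(8, 3, padding="same", input_shape=(368, 432, 3))]
)

@tf.function
def infer(x):
    # In hyperpose, the model class's infer method is the decorated function.
    return model(x)

concrete = infer.get_concrete_function(tf.TensorSpec([1, 368, 432, 3], tf.float32))
frozen = convert_variables_to_constants_v2(concrete)

tf.io.write_graph(frozen.graph.as_graph_def(), "exported", "frozen_model.pb", as_text=False)
print("inputs :", [t.name for t in frozen.inputs])
print("outputs:", [t.name for t in frozen.outputs])
```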


> **(II)Convert to .onnx format:**<br>
To convert the model into .onnx format, we need to first convert the model into .pb format and then convert it from .pb format into .onnx format. Two extra libraries are needed:
> **tf2onnx**:<br>
*tf2onnx* is used to convert a .pb format model into a .onnx format model and is necessary here. For detailed information, see the [reference](https://github.com/onnx/tensorflow-onnx).
Install tf2onnx by running:
```bash
pip install -U tf2onnx
```
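For illustration, here is a conversion sketch assuming tf2onnx's Python API (`tf2onnx.convert.from_graph_def`, available in recent tf2onnx releases); the `python -m tf2onnx.convert` command line does the same job. Paths, tensor names and the opset below are placeholders.

```python
# Sketch: convert a frozen .pb graph to ONNX. "frozen_model.pb", the tensor
# names and the opset are placeholders; substitute the real input/output names
# found with graph_transforms (or with the node-listing sketch further below).
import tensorflow as tf
import tf2onnx

graph_def = tf.compat.v1.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["x:0"],          # placeholder input tensor name
    output_names=["Identity:0"],  # placeholder output tensor name
    opset=11,
    output_path="model.onnx",
)
```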

> **graph_transforms**:<br>
*graph_transforms* is used to check the input and output nodes of the .pb file if one does not know them. When converting a .pb file into a .onnx file using tf2onnx, one is required to provide the input and output node names of the computation graph stored in the .pb file, so one may need to use *graph_transforms* to inspect the .pb file and get the node names.<br>
Build graph_transforms according to the [tensorflow tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool) instructions.
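If you only need the node names and would rather not build graph_transforms, a quick sketch that parses the frozen graph directly is shown below; the file name is a placeholder and the input/output heuristics are only a convenience, not part of hyperpose.

```python
# Sketch: list candidate input/output node names of a frozen .pb graph.
# Placeholder ops are the usual graph inputs; nodes that no other node
# consumes are usually the outputs. "frozen_model.pb" is a placeholder path.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

consumed = {
    inp.lstrip("^").split(":")[0]
    for node in graph_def.node
    for inp in node.input
}
for node in graph_def.node:
    if node.op == "Placeholder":
        print("input :", node.name)
    if node.name not in consumed:
        print("output:", node.name)
```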

## Extra configuration for parallel training

The hyperpose python training library uses the high-performance distributed machine learning framework **KungFu** for parallel training.<br>
Thus, to use the parallel training functionality of hyperpose, please install [KungFu](https://github.com/lsds/KungFu) according to the official instructions it provides.
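As a rough sketch of how KungFu usually plugs into a TensorFlow training loop (the pattern below follows KungFu's own documentation, not hyperpose internals; the class path and launch command should be double-checked against the KungFu version you install):

```python
# Rough sketch of the standard KungFu + TensorFlow pattern (see the KungFu
# README): wrap a normal optimizer so gradients are synchronized across workers.
import tensorflow as tf
from kungfu.tensorflow.optimizers import SynchronousSGDOptimizer

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
opt = SynchronousSGDOptimizer(opt)  # gradients are averaged across workers

# ...build the model and dataset, then train as usual with `opt`...

# Launch, e.g., 4 workers on one machine (run in a shell, not in Python):
#   kungfu-run -np 4 python train.py
```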

