
Commit 10a8668

:paper: updated readme for tensorflow serving example
Signed-off-by: Harshad Reddy Nalla <hnalla@redhat.com>
1 parent d634b70 commit 10a8668

File tree

1 file changed (+35 -1 lines changed)


README.md

Lines changed: 35 additions & 1 deletion
@@ -1,2 +1,36 @@
# Tensorflow-model-server-RPM

A sample example of TensorFlow model serving via a CentOS 6 based TensorFlow Model Server RPM, interacting with it through the TensorFlow Serving API.

## Description

TensorFlow and TensorFlow Serving API wheels can be found at the [AICoE Index](https://tensorflow.pypi.thoth-station.ninja).

The TensorFlow Model Server RPM is generated from the tensorflow_model_server binary [[Ex tf-r.1.14](https://github.com/AICoE/tensorflow-wheels/releases/tag/tensorflow_serving_api-r1.14-cpu-2019-08-08_154435)] released by [Red Hat AICoE](https://github.com/AICoE/tensorflow-wheels), along with the tensorflow_serving_api [[Ex tf-r.1.14](https://github.com/AICoE/tensorflow-wheels/releases/tag/tensorflow_serving_api-r1.14-cpu-2019-08-08_154435)].

This is a sample project that showcases how to use the tensorflow_model_server RPM for simple TensorFlow model serving.
## How to use

It is a two-step process:

- First, the serving side: the model is developed, trained, and tested, and then finally served via the TensorFlow Model Server.
- Second, the client side: the served model is queried via the TensorFlow Serving API.

A sample model is provided in the repo for serving:

- tf_model.py: the model to be served (a sketch follows this list)
- client.py: requests the served model
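
For orientation, here is a minimal, hypothetical sketch of what a servable model like tf_model.py might look like in TensorFlow 1.14, exporting a SavedModel in the layout tensorflow_model_server expects. The graph, tensor names, and export path are illustrative assumptions, not the repo's actual code.

```python
import tensorflow as tf

# Hypothetical minimal model: y = 2 * x (the repo's tf_model.py may differ).
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
y = tf.multiply(x, 2.0, name="y")

with tf.Session() as sess:
    # tensorflow_model_server expects a numeric version subdirectory,
    # e.g. models/sample/1 (path assumed for illustration).
    tf.saved_model.simple_save(
        sess,
        export_dir="models/sample/1",
        inputs={"x": x},
        outputs={"y": y},
    )
```

The model server is then pointed at the parent directory (e.g. via `--model_base_path`) and picks up the highest-numbered version automatically.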
### Serving side

- Use the example Fedora 28 based Dockerfile to create a container image, which installs the tensorflow_model_server RPM and serves the provided model. `podman build --build-arg WHL=tensorflow-1.14.1-cp36-cp36m-linux_x86_64.whl --build-arg RPM=tensorflow-model-serving-1.14-1.0-1.x86_64.rpm -t tensorflow-serving -f Dockerfile .`
- Run the container to serve the model via the tensorflow_model_server RPM. `podman run -p 9000:80 -it localhost/tensorflow-serving`

### Client side

- The example client uses the tensorflow_serving_api to request the served model.
- Example client side [code](client.py) and the [tensorflow_serving_api wheel](tensorflow_serving_api-1.14.0-py2.py3-none-any.whl) for installation are provided in the repo.
- The following command can be used to request the model (a sketch of such a client follows):<br>
`pipenv run python3 client.py`
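
For reference, a minimal sketch of such a gRPC client with tensorflow_serving_api 1.14, assuming the port 9000 published by the podman run command above. The model name, signature name, and input tensor name are illustrative assumptions that must match the served model; the repo's client.py may differ.

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the model server published on port 9000 above.
channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build a Predict request; model name, signature, and input tensor name
# are assumed for illustration and must match the served model.
request = predict_pb2.PredictRequest()
request.model_spec.name = "sample"
request.model_spec.signature_name = "serving_default"
request.inputs["x"].CopyFrom(tf.make_tensor_proto([[1.0]], dtype=tf.float32))

response = stub.Predict(request, 10.0)  # 10-second timeout
print(response)
```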
