
Commit c42144d

zhijxu-MS authored and prasanthpul committed
add tutorial to show how to use tf2onnx to convert tensorflow model to onnx (onnx#131)
* add tutorial to show how to use tf2onnx to convert tensorflow model to onnx
* update link in README
1 parent 851d1eb commit c42144d

File tree

4 files changed: +595, -1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ These images are available for convenience to get started with ONNX and tutorial
 | [PyTorch](http://pytorch.org/) | [part of pytorch package](http://pytorch.org/docs/master/onnx.html) | [Example](tutorials/PytorchOnnxExport.ipynb), [exporting different ONNX opsets](https://github.com/onnx/tutorials/blob/master/tutorials/ExportModelFromPyTorchToDifferentONNXOpsetVersions.md), [Extending support](tutorials/PytorchAddExportSupport.md) |
 | [SciKit-Learn](http://scikit-learn.org/) | [onnx/sklearn-onnx](https://github.com/onnx/sklearn-onnx) | [Example](http://onnx.ai/sklearn-onnx/index.html) | n/a |
 | [SINGA (Apache)](http://singa.apache.org/) - [Github](https://github.com/apache/incubator-singa/blob/master/python/singa/sonnx.py) (experimental) | [built-in](https://github.com/apache/incubator-singa/blob/master/doc/en/docs/installation.md) | [Example](https://github.com/apache/incubator-singa/tree/master/examples/onnx) |
-| [TensorFlow](https://www.tensorflow.org/) | [onnx/tensorflow-onnx](https://github.com/onnx/tensorflow-onnx) | [Examples](https://github.com/onnx/tensorflow-onnx/tree/master/examples) |
+| [TensorFlow](https://www.tensorflow.org/) | [onnx/tensorflow-onnx](https://github.com/onnx/tensorflow-onnx) | [Examples](https://github.com/onnx/tutorials/blob/master/tutorials/TensorflowToOnnx-1.ipynb) |
 
 
 ## Scoring ONNX Models

tutorials/TensorflowToOnnx-1.ipynb

Lines changed: 184 additions & 0 deletions
@@ -0,0 +1,184 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Convert a TensorFlow model to ONNX\n",
    "TensorFlow and ONNX each define their own graph format to represent a model. You can use [tensorflow-onnx](https://github.com/onnx/tensorflow-onnx \"Title\") to export a TensorFlow model to ONNX.\n",
    "\n",
    "The guide has two parts: part 1 covers the basic conversion and part 2 covers advanced topics. The following content is covered in order:\n",
    "1. Procedure to convert a TensorFlow model\n",
    "    - get the TensorFlow model\n",
    "    - convert it to ONNX\n",
    "    - validate the result\n",
    "2. Key concepts\n",
    "    - opset\n",
    "    - data format"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1 - Get the TensorFlow model\n",
    "TensorFlow uses several file formats to represent a model, such as checkpoint files, a graph with weights (called a `frozen graph` below) and `saved_model`, and it has APIs to generate these files; you can find the code snippets in the script [tensorflow_to_onnx_example.py](./assets/tensorflow_to_onnx_example.py).\n",
    "\n",
    "`tensorflow-onnx` accepts all three formats to represent a TensorFlow model, but **\"saved_model\" should be the preferred format**, since it doesn't require the user to specify the input and output names of the graph.\n",
    "We cover \"saved_model\" in this section and the other two in the last section (a brief sketch of the other two formats follows the training cell below). You can find more detail in `tensorflow-onnx`'s [README](https://github.com/onnx/tensorflow-onnx/blob/master/README.md \"Title\")."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "please wait for a while, because the script will train MNIST from scratch\n",
      "Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz\n",
      "step 0, training accuracy 0.18\n",
      "step 1000, training accuracy 0.98\n",
      "step 2000, training accuracy 0.94\n",
      "step 3000, training accuracy 1\n",
      "step 4000, training accuracy 1\n",
      "test accuracy 0.976\n",
      "save tensorflow in format \"saved_model\"\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import shutil\n",
    "import tensorflow as tf\n",
    "from assets.tensorflow_to_onnx_example import create_and_train_mnist\n",
    "def save_model_to_saved_model(sess, input_tensor, output_tensor):\n",
    "    from tensorflow.saved_model import simple_save\n",
    "    save_path = r\"./output/saved_model\"\n",
    "    if os.path.exists(save_path):\n",
    "        shutil.rmtree(save_path)\n",
    "    simple_save(sess, save_path, {input_tensor.name: input_tensor}, {output_tensor.name: output_tensor})\n",
    "\n",
    "print(\"please wait for a while, because the script will train MNIST from scratch\")\n",
    "tf.reset_default_graph()\n",
    "sess_tf, saver, input_tensor, output_tensor = create_and_train_mnist()\n",
    "print(\"save tensorflow in format \\\"saved_model\\\"\")\n",
    "save_model_to_saved_model(sess_tf, input_tensor, output_tensor)"
   ]
  },
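  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a brief sketch of the other two formats mentioned above (covered in detail in [part 2](./TensorflowToOnnx-2.ipynb)), the same trained session could also be saved as checkpoint files and as a frozen graph. The cell below is illustrative only and is not executed in this notebook; it assumes the TensorFlow 1.x APIs `tf.train.Saver` and `tf.graph_util.convert_variables_to_constants`, and the hypothetical output paths `./output/ckpt` and `./output/frozen_graph.pb`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch only, not executed: save the same session in the other two formats\n",
    "ckpt_dir = \"./output/ckpt\"\n",
    "if not os.path.exists(ckpt_dir):\n",
    "    os.makedirs(ckpt_dir)\n",
    "\n",
    "# 1) checkpoint files: variable values only, the graph is rebuilt separately\n",
    "ckpt_saver = tf.train.Saver()\n",
    "ckpt_saver.save(sess_tf, os.path.join(ckpt_dir, \"model.ckpt\"))\n",
    "\n",
    "# 2) frozen graph: the graph definition with variables converted to constants\n",
    "output_node = output_tensor.name.split(\":\")[0]\n",
    "frozen_graph_def = tf.graph_util.convert_variables_to_constants(\n",
    "    sess_tf, sess_tf.graph_def, [output_node])\n",
    "with tf.gfile.GFile(\"./output/frozen_graph.pb\", \"wb\") as f:\n",
    "    f.write(frozen_graph_def.SerializeToString())"
   ]
  },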
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2 - Convert to ONNX\n",
    "`tensorflow-onnx` has several entry points to convert the different TensorFlow formats; this section covers \"saved_model\" only, while \"frozen graph\" and \"checkpoint\" are covered in [part 2](./TensorflowToOnnx-2.ipynb).\n",
    "\n",
    "`tensorflow-onnx` also exposes Python APIs, so users can call the converter directly from their own scripts instead of the command line (a minimal sketch follows the command-line cell below); the details are covered in [part 2](./TensorflowToOnnx-2.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2019-06-17 07:22:03,871 - INFO - Using tensorflow=1.12.0, onnx=1.5.0, tf2onnx=1.5.1/0c735a\n",
      "2019-06-17 07:22:03,871 - INFO - Using opset <onnx, 7>\n",
      "2019-06-17 07:22:03,989 - INFO - \n",
      "2019-06-17 07:22:04,012 - INFO - Optimizing ONNX model\n",
      "2019-06-17 07:22:04,029 - INFO - After optimization: Add -2 (4->2), Identity -3 (3->0), Transpose -8 (9->1)\n",
      "2019-06-17 07:22:04,031 - INFO - \n",
      "2019-06-17 07:22:04,032 - INFO - Successfully converted TensorFlow model ./output/saved_model to ONNX\n",
      "2019-06-17 07:22:04,044 - INFO - ONNX model is saved at ./output/mnist1.onnx\n"
     ]
    }
   ],
   "source": [
    "# generate mnist1.onnx from the saved_model\n",
    "!python -m tf2onnx.convert \\\n",
    "    --saved-model ./output/saved_model \\\n",
    "    --output ./output/mnist1.onnx \\\n",
    "    --opset 7"
   ]
  },
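  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same conversion can also be driven from Python instead of the command line. The cell below is a minimal, unexecuted sketch based on the tf2onnx 1.5.x README: it freezes the trained graph, then calls `tf2onnx.tfonnx.process_tf_graph`. The exact function names, signatures and the output path `./output/mnist1_api.onnx` are assumptions and may differ between tf2onnx releases, so check the README of the version you install."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch only, not executed: convert via tf2onnx's Python API\n",
    "from tf2onnx import tfonnx\n",
    "\n",
    "# freeze variables into constants so the graph is self-contained\n",
    "frozen_def = tf.graph_util.convert_variables_to_constants(\n",
    "    sess_tf, sess_tf.graph_def, [output_tensor.name.split(\":\")[0]])\n",
    "\n",
    "with tf.Graph().as_default() as tf_graph:\n",
    "    tf.import_graph_def(frozen_def, name=\"\")\n",
    "    onnx_graph = tfonnx.process_tf_graph(\n",
    "        tf_graph,\n",
    "        opset=7,\n",
    "        input_names=[input_tensor.name],\n",
    "        output_names=[output_tensor.name])\n",
    "    model_proto = onnx_graph.make_model(\"mnist converted from saved_model\")\n",
    "    with open(\"./output/mnist1_api.onnx\", \"wb\") as f:\n",
    "        f.write(model_proto.SerializeToString())"
   ]
  },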
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3 - Validate\n",
    "Several frameworks can run models in ONNX format; here [ONNXRuntime](https://github.com/microsoft/onnxruntime \"Title\"), open-sourced by Microsoft, is used to make sure the generated ONNX graph behaves well.\n",
    "The input \"image.npz\" is an image of a handwritten \"7\", so the expected classification result of the model is \"7\". A numerical cross-check against the original TensorFlow session follows the classification cell below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "the expected result is \"7\"\n",
      "the digit is classified as \"7\" in ONNXRuntime\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import onnxruntime as ort\n",
    "\n",
    "img = np.load(\"./assets/image.npz\").reshape([1, 784])\n",
    "sess_ort = ort.InferenceSession(\"./output/mnist1.onnx\")\n",
    "res = sess_ort.run(output_names=[output_tensor.name], input_feed={input_tensor.name: img})\n",
    "print(\"the expected result is \\\"7\\\"\")\n",
    "print(\"the digit is classified as \\\"%s\\\" in ONNXRuntime\" % np.argmax(res))"
   ]
  },
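  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an additional sanity check, the ONNX Runtime output can be compared numerically against the output of the original TensorFlow session; the two should agree within a small tolerance. The cell below is a sketch and is not executed in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch only, not executed: cross-check ONNX Runtime against the TensorFlow session\n",
    "res_tf = sess_tf.run(output_tensor, feed_dict={input_tensor: img})\n",
    "np.testing.assert_allclose(res_tf, res[0], rtol=1e-5, atol=1e-5)\n",
    "print(\"TensorFlow and ONNX Runtime outputs match within tolerance\")"
   ]
  },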
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Key concepts\n",
    "The command line above should work for most TensorFlow models, provided they are available as a saved_model. In some cases you might encounter issues that require extra options.\n",
    "\n",
    "The most important concept is the \"**opset** version\": ONNX is an evolving standard; for example, it keeps adding new operations and enhancing existing ones, so different opset versions contain different operations, and an operation may behave differently across versions. The default opset version used by \"tensorflow-onnx\" is 7, while ONNX currently supports up to version 10, so if the conversion fails you can try a different version via the command-line option \"--opset\". The sketch below shows how to inspect the opset actually declared by a generated model.\n",
    "\n",
    "Continue with [part 2](./TensorflowToOnnx-2.ipynb), which explains advanced topics."
   ]
  },
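  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see which opset a generated model actually declares, the model can be loaded with the `onnx` package and its `opset_import` field inspected. The cell below is a small sketch (not executed here) using the `mnist1.onnx` file produced above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch only, not executed: inspect the opset declared by the generated model\n",
    "import onnx\n",
    "\n",
    "model = onnx.load(\"./output/mnist1.onnx\")\n",
    "for opset in model.opset_import:\n",
    "    print(\"domain: %r, version: %d\" % (opset.domain, opset.version))"
   ]
  }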
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
