
Commit b9a8bda

Update README, new EfficientNet B3/Lite0 weights, few more docstrings
1 parent 948a91b commit b9a8bda

7 files changed: +54 -19 lines changed


LICENSE

Lines changed: 1 addition & 1 deletion
@@ -186,7 +186,7 @@
 same "printed page" as the copyright notice for easier
 identification within third-party archives.

-Copyright 2019 Ross Wightman
+Copyright 2020 Ross Wightman

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

README.md

Lines changed: 34 additions & 12 deletions
@@ -6,6 +6,14 @@ All models are implemented by GenEfficientNet or MobileNetV3 classes, with strin

 ## What's New

+### Aug 19, 2020
+* Add updated PyTorch trained EfficientNet-B3 weights trained by myself with `timm` (82.1 top-1)
+* Add PyTorch trained EfficientNet-Lite0 contributed by [@hal-314](https://github.com/hal-314) (75.5 top-1)
+* Update ONNX and Caffe2 export / utility scripts to work with latest PyTorch / ONNX
+* ONNX runtime based validation script added
+* Activations (mostly) brought in sync with `timm` equivalents
+
+
 ### April 5, 2020
 * Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
 * 3.5M param MobileNet-V2 100 @ 73%
@@ -77,8 +85,8 @@ I've managed to train several of the models to accuracies close to or above the

 |Model | Prec@1 (Err) | Prec@5 (Err) | Param#(M) | MAdds(M) | Image Scaling | Resolution | Crop |
 |---|---|---|---|---|---|---|---|
-| efficientnet_b3 | 81.866 (18.134) | 95.836 (4.164) | 12.23 | TBD | bicubic | 320 | 1.0 |
-| efficientnet_b3 | 81.508 (18.492) | 95.672 (4.328) | 12.23 | TBD | bicubic | 300 | 0.904 |
+| efficientnet_b3 | 82.240 (17.760) | 96.116 (3.884) | 12.23 | TBD | bicubic | 320 | 1.0 |
+| efficientnet_b3 | 82.076 (17.924) | 96.020 (3.980) | 12.23 | TBD | bicubic | 300 | 0.904 |
 | mixnet_xl | 81.074 (18.926) | 95.282 (4.718) | 11.90 | TBD | bicubic | 256 | 1.0 |
 | efficientnet_b2 | 80.612 (19.388) | 95.318 (4.682) | 9.1 | TBD | bicubic | 288 | 1.0 |
 | mixnet_xl | 80.476 (19.524) | 94.936 (5.064) | 11.90 | TBD | bicubic | 224 | 0.875 |
@@ -93,6 +101,7 @@ I've managed to train several of the models to accuracies close to or above the
 | mixnet_s | 75.988 (24.012) | 92.794 (7.206) | 4.13 | TBD | bicubic | 224 | 0.875 |
 | mobilenetv3_large_100 | 75.766 (24.234) | 92.542 (7.458) | 5.5 | TBD | bicubic | 224 | 0.875 |
 | mobilenetv3_rw | 75.634 (24.366) | 92.708 (7.292) | 5.5 | 219 | bicubic | 224 | 0.875 |
+| efficientnet_lite0 | 75.472 (24.528) | 92.520 (7.480) | 4.65 | TBD | bicubic | 224 | 0.875 |
 | mnasnet_a1 | 75.448 (24.552) | 92.604 (7.396) | 3.9 | 312 | bicubic | 224 | 0.875 |
 | fbnetc_100 | 75.124 (24.876) | 92.386 (7.614) | 5.6 | 385 | bilinear | 224 | 0.875 |
 | mobilenetv2_110d | 75.052 (24.948) | 92.180 (7.820) | 4.5 | TBD | bicubic | 224 | 0.875 |
@@ -232,17 +241,17 @@ Google tf and tflite weights ported from official Tensorflow repositories

 ### Environment

-All development and testing has been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.6.x and 3.7.x.
+All development and testing has been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.6.x, 3.7.x, 3.8.x.

 Users have reported that a Python 3 Anaconda install in Windows works. I have not verified this myself.

-PyTorch versions 1.2, 1.3.1, 1.4 have been tested with this code.
+PyTorch versions 1.4, 1.5, 1.6 have been tested with this code.

 I've tried to keep the dependencies minimal, the setup is as per the PyTorch default install instructions for Conda:
 ```
 conda create -n torch-env
 conda activate torch-env
-conda install -c pytorch pytorch torchvision cudatoolkit=10
+conda install -c pytorch pytorch torchvision cudatoolkit=10.2
 ```

 ### PyTorch Hub
@@ -268,7 +277,7 @@ pip install geffnet
 Eval use:
 ```
 >>> import geffnet
->>> m = geffnet.create_model('mobilenetv3_rw', pretrained=True)
+>>> m = geffnet.create_model('mobilenetv3_large_100', pretrained=True)
 >>> m.eval()
 ```

@@ -288,14 +297,27 @@ Create in a nn.Sequential container, for fast.ai, etc:

 ### Exporting

-Scripts to export models to ONNX and then to Caffe2 are included, along with a Caffe2 script to verify.
+Scripts are included to
+* export models to ONNX (`onnx_export.py`)
+* optimize the ONNX graph (`onnx_optimize.py` or `onnx_validate.py` w/ `--onnx-output-opt` arg)
+* validate with ONNX runtime (`onnx_validate.py`)
+* convert ONNX model to Caffe2 (`onnx_to_caffe.py`)
+* validate in Caffe2 (`caffe2_validate.py`)
+* benchmark in Caffe2 w/ FLOPs, parameters output (`caffe2_benchmark.py`)

 As an example, to export the MobileNet-V3 pretrained model and then run an Imagenet validation:
 ```
-python onnx_export.py --model tf_mobilenetv3_large_100 ./mobilenetv3_100.onnx
-python onnx_optimize.py ./mobilenetv3_100.onnx --output ./mobilenetv3_100-opt.onnx
-python onnx_to_caffe.py ./mobilenetv3_100-opt.onnx --c2-prefix mobilenetv3
-python caffe2_validate.py /imagenet/validation/ --c2-init ./mobilenetv3.init.pb --c2-predict ./mobilenetv3.predict.pb --interpolation bicubic
+python onnx_export.py --model mobilenetv3_large_100 ./mobilenetv3_100.onnx
+python onnx_validate.py /imagenet/validation/ --onnx-input ./mobilenetv3_100.onnx
 ```
-**NOTE** the TF ported weights with the 'SAME' conv padding activated cannot be exported to ONNX unless `_EXPORTABLE` flag in `config.py` is set to True. Use `config.set_exportable(True)` as in the updated `onnx_export.py` example script.
+
+These scripts were tested to be working as of PyTorch 1.6 and ONNX 1.7 w/ ONNX runtime 1.4. Caffe2 compatible
+export now requires additional args mentioned in the export script (not needed in earlier versions).
+
+#### Export Notes
+1. The TF ported weights with the 'SAME' conv padding activated cannot be exported to ONNX unless the `_EXPORTABLE` flag in `config.py` is set to True. Use `config.set_exportable(True)` as in the `onnx_export.py` script.
+2. TF ported models with 'SAME' padding will have the padding fixed at export time to the resolution used for export. Even though dynamic padding is supported in opset >= 11, I can't get it working.
+3. The ONNX optimize facility doesn't work reliably in PyTorch 1.6 / ONNX 1.7. Fortunately, the onnxruntime based inference is working very well now and includes on the fly optimization.
+4. ONNX / Caffe2 export/import frequently breaks with different PyTorch and ONNX version releases. Please check their respective issue trackers before filing issues here.
+
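
The README hunk above points to the new `onnx_validate.py` runtime validation flow. As a rough companion illustration (not code from the repo), a minimal onnxruntime inference sketch for an exported model might look like the following; the file names and the ImageNet preprocessing constants are assumptions for the example.

```
# Minimal sketch: run an exported model with onnxruntime.
# Assumes ./mobilenetv3_100.onnx was produced by onnx_export.py as in the README;
# the image path and preprocessing constants are illustrative only.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession('./mobilenetv3_100.onnx')
input_name = sess.get_inputs()[0].name

# Basic ImageNet-style preprocessing: 224x224 RGB, normalized, NCHW with batch dim.
img = Image.open('example.jpg').convert('RGB').resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
x = x.transpose(2, 0, 1)[None].astype(np.float32)

logits = sess.run(None, {input_name: x})[0]
print('top-1 class index:', int(logits[0].argmax()))
```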

data/dataset.py

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
+""" Quick n simple image folder dataset
 
+Copyright 2020 Ross Wightman
+"""
 import torch.utils.data as data
 
 import os

data/loader.py

Lines changed: 7 additions & 0 deletions
@@ -1,3 +1,10 @@
+""" Fast Collate, CUDA Prefetcher
+
+Prefetcher and Fast Collate inspired by NVIDIA APEX example at
+https://github.com/NVIDIA/apex/commit/d5e2bb4bdeedd27b1dfaf5bb2b24d6c000dee9be#diff-cf86c282ff7fba81fad27a559379d5bf
+
+Hacked together by / Copyright 2020 Ross Wightman
+"""
 import torch
 import torch.utils.data
 from .transforms import *
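
The new docstring names the fast collate / CUDA prefetcher pattern from the APEX example. As a rough illustration of that idea only (not the repo's actual `data/loader.py` code), a fast collate typically stacks uint8 images into a preallocated NCHW tensor and defers float conversion and normalization to a later (GPU-side) step.

```
# Illustrative sketch of the fast-collate pattern referenced above; an assumption,
# not the implementation in data/loader.py.
import numpy as np
import torch

def fast_collate(batch):
    """Stack (HWC uint8 numpy image, label) samples into a uint8 NCHW batch tensor.

    Float conversion / normalization is deferred (e.g. to a CUDA prefetcher),
    which keeps DataLoader workers cheap.
    """
    targets = torch.tensor([target for _, target in batch], dtype=torch.int64)
    h, w = batch[0][0].shape[:2]
    tensor = torch.zeros((len(batch), 3, h, w), dtype=torch.uint8)
    for i, (img, _) in enumerate(batch):
        # HWC uint8 -> CHW uint8, copied into the preallocated batch tensor
        tensor[i] += torch.from_numpy(np.rollaxis(img, 2))
    return tensor, targets
```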

geffnet/efficientnet_builder.py

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+""" EfficientNet / MobileNetV3 Blocks and Builder
+
+Copyright 2020 Ross Wightman
+"""
 import re
 from copy import deepcopy
 
geffnet/gen_efficientnet.py

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@
 
 * And likely more...
 
-Hacked together by Ross Wightman
+Hacked together by / Copyright 2020 Ross Wightman
 """
 import torch.nn as nn
 import torch.nn.functional as F
@@ -89,7 +89,7 @@
     'efficientnet_b2':
         'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b2_ra-bcdf34b7.pth',
     'efficientnet_b3':
-        'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra-a5e2fbc7.pth',
+        'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra2-cf984f9c.pth',
     'efficientnet_b4': None,
     'efficientnet_b5': None,
     'efficientnet_b6': None,
@@ -106,7 +106,7 @@
     'efficientnet_cc_b0_8e': None,
     'efficientnet_cc_b1_8e': None,
 
-    'efficientnet_lite0': None,
+    'efficientnet_lite0': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_lite0_ra-37913777.pth',
     'efficientnet_lite1': None,
    'efficientnet_lite2': None,
    'efficientnet_lite3': None,
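
With the new URL entries above, the updated weights load through the same pretrained mechanism shown in the README. A small usage sketch (the dummy input below is only for illustration; the 224 resolution for lite0 matches the results table):

```
>>> import torch
>>> import geffnet
>>> m = geffnet.create_model('efficientnet_lite0', pretrained=True)
>>> m.eval()
>>> with torch.no_grad():
...     out = m(torch.randn(1, 3, 224, 224))  # ImageNet-1k logits
>>> out.shape
torch.Size([1, 1000])
```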

hubconf.py

Lines changed: 2 additions & 0 deletions
@@ -7,6 +7,8 @@
 
 from geffnet import efficientnet_es
 
+from geffnet import efficientnet_lite0
+
 from geffnet import mixnet_s
 from geffnet import mixnet_m
 from geffnet import mixnet_l
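
Since `efficientnet_lite0` is now re-exported in `hubconf.py`, it should also be reachable through `torch.hub`; a sketch, assuming this project's GitHub path is `rwightman/gen-efficientnet-pytorch`:

```
>>> import torch
>>> # repo string assumed; pretrained kwarg is forwarded to the hub entrypoint
>>> m = torch.hub.load('rwightman/gen-efficientnet-pytorch', 'efficientnet_lite0', pretrained=True)
>>> m.eval()
```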
