Merged
Changes from all commits
Commits
50 commits
78fa077
Leverage python hierarchical logger
Jun 9, 2020
d23a269
Working on feature extraction, interfaces refined, a number of models…
rwightman Jun 30, 2020
d0113f9
Fix a few issues that came up in tests
rwightman Jun 30, 2020
7729f40
Fix another bug, update all gluon resnet models to use new creation m…
rwightman Jun 30, 2020
a66df5f
More model feature extraction support, start to deprecate senet.py, d…
rwightman Jul 3, 2020
f122f02
Significant ResNet refactor:
rwightman Jul 5, 2020
3aebc2f
Switch DPN to use BnAct layer, train a new DPN 68b model with RA to 7…
rwightman Jul 12, 2020
3b6cce4
Add initial impl of CrossStagePartial networks, yet to be trained, no…
rwightman Jul 13, 2020
e2cc481
Update CSP ResNets for cross expansion without activation. Fix VovNet…
rwightman Jul 13, 2020
3b9004b
Lots of changes to model creation helpers, close to finalizing featur…
rwightman Jul 18, 2020
298fba0
Back out some activation hacks trialing upcoming pytorch changes
rwightman Jul 18, 2020
9eba134
More models supporting feature extraction, xception, gluon_xception, …
rwightman Jul 19, 2020
6eec3fb
Move FeatureHooks into features.py, switch EfficientNet, MobileNetV3 …
rwightman Jul 19, 2020
4e61c6a
Cleanup, refactoring of Feature extraction code, add tests, fix tests…
rwightman Jul 20, 2020
68fd8a2
Merge branch 'master' into features
rwightman Jul 20, 2020
c146b54
Cleanup EfficientNet/MobileNetV3 feature extraction a bit, only two t…
rwightman Jul 21, 2020
52b6e72
Try defining num threads for github workflow tests explicitly (2)
rwightman Jul 21, 2020
648ba41
Try again with the workflow threads
rwightman Jul 21, 2020
701dba3
Again
rwightman Jul 21, 2020
2ac663f
Add feature support to legacy senets, add 32x32 resnext models to exc…
rwightman Jul 21, 2020
c9d54bc
Add HRNet feature extraction, fix senet type, lower feature testing r…
rwightman Jul 22, 2020
7ba5a38
Add ReXNet w/ remapped weights, feature support
rwightman Jul 23, 2020
08016e8
Cleanup FeatureInfo getters, add TF models sourced Xception41/65/71 w…
rwightman Jul 25, 2020
ec37008
Add pretrained weight links to CSPNet for cspdarknet53, cspresnext50
rwightman Jul 27, 2020
14ef7a0
Rename csp.py -> cspnet.py
rwightman Jul 27, 2020
a69c0e0
Fix pool size in cspnet
rwightman Jul 27, 2020
6c17d57
Fix some attributions, add copyrights to some file docstrings
rwightman Jul 27, 2020
6be878c
Update results README to include robustness and real labels tests. In…
rwightman Jul 27, 2020
1998bd3
Merge branch 'feature/AB/logger' of https://github.com/antoinebrl/pyt…
rwightman Jul 27, 2020
7995295
Merge branch 'logger' into features. Change 'logger' to '_logger'.
rwightman Jul 28, 2020
ea58e0b
Disable big models for MacOS test since they are starting to fail fre…
rwightman Jul 28, 2020
4805dd1
Fix incorrect dataset name, add new datasets to results delta script
rwightman Jul 28, 2020
9ecd16b
Add new seresnet50 (non-legacy) model weights, 80.274 top-1
rwightman Jul 29, 2020
ec4976f
Add EfficientNet-Lite0 weights trained with this code by @hal-314, 75…
rwightman Jul 29, 2020
c53ec33
Add synset/label indices for results generation. Add 'valid labels' t…
rwightman Jul 29, 2020
ac18adb
Remove debug print from RexNet
rwightman Jul 29, 2020
ecdeb47
Cleanup real labels in validate.py, no need to track original, will d…
rwightman Jul 29, 2020
d72ddaf
Fix some checkpoint / model str regressions
rwightman Jul 30, 2020
ad150e7
Update results csv file rank/diff script and small validate script tw…
rwightman Aug 1, 2020
b496b7b
Re-ran batch validation on all models across all sets
rwightman Aug 1, 2020
9806f3e
autosquash github workflow didn't work out, removing
rwightman Aug 3, 2020
b1f1a54
More uniform treatment of classifiers across all models, reduce code …
rwightman Aug 4, 2020
e3c11a3
Start updating README and docs
rwightman Aug 4, 2020
e3f58fc
Continuing README and documentation updates
rwightman Aug 5, 2020
fa28067
Add more augmentation arguments, including a no_aug disable flag. Fix…
rwightman Aug 5, 2020
dfe8041
Add bool arg helper
rwightman Aug 5, 2020
5e333b8
README / doc tweaks
rwightman Aug 5, 2020
e62758c
More documentation updates, fix a typo
rwightman Aug 5, 2020
1696499
Bump version to 0.2.0, ready to roll (I think)
rwightman Aug 5, 2020
1a15502
mkdocs sortable tables.js
rwightman Aug 5, 2020
39 changes: 0 additions & 39 deletions .github/workflows/autosquash.yml

This file was deleted.

4 changes: 4 additions & 0 deletions .github/workflows/tests.yml
@@ -6,6 +6,10 @@ on:
pull_request:
branches: [ master ]

env:
OMP_NUM_THREADS: 2
MKL_NUM_THREADS: 2

jobs:
test:
name: Run tests on ${{ matrix.os }} with Python ${{ matrix.python }}
476 changes: 92 additions & 384 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion avg_checkpoints.py
@@ -9,7 +9,7 @@
EMA (exponential moving average) of the model weights or performing SWA (stochastic
weight averaging), but post-training.

Hacked together by Ross Wightman (https://github.com/rwightman)
Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman)
"""
import torch
import argparse
2 changes: 1 addition & 1 deletion clean_checkpoint.py
@@ -5,7 +5,7 @@
and outputs a CPU tensor checkpoint with only the `state_dict` along with SHA256
calculation for model zoo compatibility.

Hacked together by Ross Wightman (https://github.com/rwightman)
Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman)
"""
import torch
import argparse
83 changes: 83 additions & 0 deletions docs/archived_changes.md
@@ -0,0 +1,83 @@
# Archived Changes

### Feb 29, 2020
* New MobileNet-V3 Large weights trained from scratch with this code to 75.77% top-1
* IMPORTANT CHANGE - default weight init changed for all MobilenetV3 / EfficientNet / related models
* overall results similar to a bit better training from scratch on a few smaller models tried
* performance early in training seems consistently improved but less difference by end
* set `fix_group_fanout=False` in `_init_weight_goog` fn if you need to reproduce past behaviour
* Experimental LR noise feature added; applies a random perturbation to the LR each epoch within a specified range of the training run

### Feb 18, 2020
* Big refactor of model layers and addition of several attention mechanisms. Several additions motivated by 'Compounding the Performance Improvements...' (https://arxiv.org/abs/2001.06268):
* Move layer/module impl into `layers` subfolder/module of `models` and organize in a more granular fashion
* ResNet downsample paths now properly support dilation (output stride != 32) for avg_pool ('D' variant) and 3x3 (SENets) networks
* Add Selective Kernel Nets on top of ResNet base, pretrained weights
* skresnet18 - 73% top-1
* skresnet34 - 76.9% top-1
* skresnext50_32x4d (equiv to SKNet50) - 80.2% top-1
* ECA and CECA (circular padding) attention layer contributed by [Chris Ha](https://github.com/VRandme)
* CBAM attention experiment (not the best results so far, may remove)
* Attention factory to allow dynamically selecting one of SE, ECA, CBAM in the `.se` position for all ResNets
* Add DropBlock and DropPath (formerly DropConnect for EfficientNet/MobileNetv3) support to all ResNet variants
* Full dataset results updated that incl NoisyStudent weights and 2 of the 3 SK weights

### Feb 12, 2020
* Add EfficientNet-L2 and B0-B7 NoisyStudent weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)

### Feb 6, 2020
* Add RandAugment trained EfficientNet-ES (EdgeTPU-Small) weights with 78.1 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)

### Feb 1/2, 2020
* Port new EfficientNet-B8 (RandAugment) weights; these differ from the B8 AdvProp weights and use different input normalization.
* Update results csv files on all models for ImageNet validation and three other test sets
* Push PyPi package update

### Jan 31, 2020
* Update ResNet50 weights with a new 79.038 result from further JSD / AugMix experiments. Full command line for reproduction in training section below.

### Jan 11/12, 2020
* Master may be a bit unstable wrt training; these changes have been tested but not all combos
* Implementation of AugMix added alongside existing RA and AA, including numerous supporting pieces like JSD loss (Jensen-Shannon divergence + CE) and AugMixDataset
* SplitBatchNorm adaptation layer added for implementing Auxiliary BN as per AdvProp paper
* ResNet-50 AugMix trained model w/ 79% top-1 added
* `seresnext26tn_32x4d` - 77.99 top-1, 93.75 top-5 added to tiered experiment, higher img/s than 't' and 'd'

### Jan 3, 2020
* Add RandAugment trained EfficientNet-B0 weight with 77.7 top-1. Trained by [Michael Klachko](https://github.com/michaelklachko) with this code and recent hparams (see Training section)
* Add `avg_checkpoints.py` script for post training weight averaging and update all scripts with header docstrings and shebangs.

### Dec 30, 2019
* Merge [Dushyant Mehta's](https://github.com/mehtadushy) PR for SelecSLS (Selective Short and Long Range Skip Connections) networks. Good GPU memory consumption and throughput. Original: https://github.com/mehtadushy/SelecSLS-Pytorch

### Dec 28, 2019
* Add new model weights and training hparams (see Training Hparams section)
* `efficientnet_b3` - 81.5 top-1, 95.7 top-5 at default res/crop, 81.9, 95.8 at 320x320 1.0 crop-pct
* trained with RandAugment, ended up with an interesting but less than perfect result (see training section)
* `seresnext26d_32x4d`- 77.6 top-1, 93.6 top-5
* deep stem (32, 32, 64), avgpool downsample
* stem/downsample from bag-of-tricks paper
* `seresnext26t_32x4d`- 78.0 top-1, 93.7 top-5
* deep tiered stem (24, 48, 64), avgpool downsample (a modified 'D' variant)
* stem sizing mods from Jeremy Howard and fastai devs discussing ResNet architecture experiments

### Dec 23, 2019
* Add RandAugment trained MixNet-XL weights with 80.48 top-1.
* `--dist-bn` argument added to train.py, will distribute BN stats between nodes after each train epoch, before eval

### Dec 4, 2019
* Added weights from the first training from scratch of an EfficientNet (B2) with my new RandAugment implementation. Much better than my previous B2 and very close to the official AdvProp ones (80.4 top-1, 95.08 top-5).

### Nov 29, 2019
* Brought EfficientNet and MobileNetV3 up to date with my https://github.com/rwightman/gen-efficientnet-pytorch code. Torchscript and ONNX export compat excluded.
* AdvProp weights added
* Official TF MobileNetv3 weights added
* EfficientNet and MobileNetV3 hook based 'feature extraction' classes added. Will serve as basis for using models as backbones in obj detection/segmentation tasks. Lots more to be done here...
* HRNet classification models and weights added from https://github.com/HRNet/HRNet-Image-Classification
* Consistency in global pooling, `reset_classifier`, and `forward_features` across models
* `forward_features` always returns unpooled feature maps now
* Reasonable chance I broke something... let me know

### Nov 22, 2019
* Add ImageNet training RandAugment implementation alongside AutoAugment. PyTorch Transform compatible format, using PIL. Currently training two EfficientNet models from scratch with promising results... will update.
* `drop-connect` cmd line arg finally added to `train.py`, no need to hack model fns. Works for efficientnet/mobilenetv3 based models, ignored otherwise.
42 changes: 17 additions & 25 deletions docs/changes.md
@@ -1,3 +1,20 @@
# Recent Changes

### Aug 1, 2020
Universal feature extraction, new models, new weights, new test sets.
* All models support the `features_only=True` argument for `create_model` call to return a network that extracts features from the deepest layer at each stride.
* New models
* CSPResNet, CSPResNeXt, CSPDarkNet, DarkNet
* ReXNet
* (Aligned) Xception41/65/71 (a proper port of TF models)
* New trained weights
* SEResNet50 - 80.3 top-1
* CSPDarkNet53 - 80.1 top-1
* CSPResNeXt50 - 80.0 top-1
* DPN68b - 79.2 top-1
* EfficientNet-Lite0 (non-TF ver) - 75.5 top-1 (submitted by @hal-314)
* Add 'real' labels for ImageNet and ImageNet-Renditions test set, see [`results/README.md`](results/README.md)

### June 11, 2020
Bunch of changes:

@@ -35,28 +52,3 @@ Bunch of changes:
### March 18, 2020
* Add EfficientNet-Lite models w/ weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite)
* Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)

### Feb 29, 2020
* New MobileNet-V3 Large weights trained from scratch with this code to 75.77% top-1
* IMPORTANT CHANGE - default weight init changed for all MobilenetV3 / EfficientNet / related models
* overall results similar to a bit better training from scratch on a few smaller models tried
* performance early in training seems consistently improved but less difference by end
* set `fix_group_fanout=False` in `_init_weight_goog` fn if you need to reproduce past behaviour
* Experimental LR noise feature added; applies a random perturbation to the LR each epoch within a specified range of the training run

### Feb 18, 2020
* Big refactor of model layers and addition of several attention mechanisms. Several additions motivated by 'Compounding the Performance Improvements...' (https://arxiv.org/abs/2001.06268):
* Move layer/module impl into `layers` subfolder/module of `models` and organize in a more granular fashion
* ResNet downsample paths now properly support dilation (output stride != 32) for avg_pool ('D' variant) and 3x3 (SENets) networks
* Add Selective Kernel Nets on top of ResNet base, pretrained weights
* skresnet18 - 73% top-1
* skresnet34 - 76.9% top-1
* skresnext50_32x4d (equiv to SKNet50) - 80.2% top-1
* ECA and CECA (circular padding) attention layer contributed by [Chris Ha](https://github.com/VRandme)
* CBAM attention experiment (not the best results so far, may remove)
* Attention factory to allow dynamically selecting one of SE, ECA, CBAM in the `.se` position for all ResNets
* Add DropBlock and DropPath (formerly DropConnect for EfficientNet/MobileNetv3) support to all ResNet variants
* Full dataset results updated that incl NoisyStudent weights and 2 of the 3 SK weights

### Feb 12, 2020
* Add EfficientNet-L2 and B0-B7 NoisyStudent weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)
173 changes: 173 additions & 0 deletions docs/feature_extraction.md
@@ -0,0 +1,173 @@
# Feature Extraction

All of the models in `timm` have consistent mechanisms for obtaining various types of features from the model for tasks besides classification.

## Penultimate Layer Features (Pre-Classifier Features)

The features from the penultimate model layer can be obtained in several ways without requiring model surgery (although feel free to do surgery). One must first decide whether pooled or un-pooled features are needed.

### Unpooled

There are three ways to obtain unpooled features.

Without modifying the network, one can call `model.forward_features(input)` on any model instead of the usual `model(input)`. This bypasses the network's global pooling and classifier head.

To explicitly modify the network to return unpooled features, one can either create the model without a classifier and pooling, or remove them later. Both paths remove the parameters associated with the classifier from the network.

#### forward_features()
```python hl_lines="3 6"
import torch
import timm
m = timm.create_model('xception41', pretrained=True)
o = m(torch.randn(2, 3, 299, 299))
print(f'Original shape: {o.shape}')
o = m.forward_features(torch.randn(2, 3, 299, 299))
print(f'Unpooled shape: {o.shape}')
```
Output:
```text
Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 2048, 10, 10])
```

#### Create with no classifier and pooling
```python hl_lines="3"
import torch
import timm
m = timm.create_model('resnet50', pretrained=True, num_classes=0, global_pool='')
o = m(torch.randn(2, 3, 224, 224))
print(f'Unpooled shape: {o.shape}')
```
Output:
```text
Unpooled shape: torch.Size([2, 2048, 7, 7])
```

#### Remove it later
```python hl_lines="3 6"
import torch
import timm
m = timm.create_model('densenet121', pretrained=True)
o = m(torch.randn(2, 3, 224, 224))
print(f'Original shape: {o.shape}')
m.reset_classifier(0, '')
o = m(torch.randn(2, 3, 224, 224))
print(f'Unpooled shape: {o.shape}')
```
Output:
```text
Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 1024, 7, 7])
```

### Pooled

To modify the network to return pooled features, one can use `forward_features()` and pool/flatten the result (a sketch of this follows), or modify the network as above while keeping pooling intact.
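
#### Pool the forward_features() output yourself
A minimal sketch of the first option, assuming standard NCHW feature maps: apply global average pooling to the unpooled `forward_features()` output and flatten it (any pooling of your choosing works here).
```python
import torch
import torch.nn.functional as F
import timm

m = timm.create_model('resnet50', pretrained=True)
# unpooled features: (N, C, H, W)
o = m.forward_features(torch.randn(2, 3, 224, 224))
# global average pool over the spatial dims, then flatten to (N, C)
o = F.adaptive_avg_pool2d(o, 1).flatten(1)
print(f'Pooled shape: {o.shape}')
```
Output:
```text
Pooled shape: torch.Size([2, 2048])
```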

#### Create with no classifier
```python hl_lines="3"
import torch
import timm
m = timm.create_model('resnet50', pretrained=True, num_classes=0)
o = m(torch.randn(2, 3, 224, 224))
print(f'Pooled shape: {o.shape}')
```
Output:
```text
Pooled shape: torch.Size([2, 2048])
```

#### Remove it later
```python hl_lines="3 6"
import torch
import timm
m = timm.create_model('ese_vovnet19b_dw', pretrained=True)
o = m(torch.randn(2, 3, 224, 224))
print(f'Original shape: {o.shape}')
m.reset_classifier(0)
o = m(torch.randn(2, 3, 224, 224))
print(f'Pooled shape: {o.shape}')
```
Output:
```text
Pooled shape: torch.Size([2, 1024])
```


## Multi-scale Feature Maps (Feature Pyramid)

Object detection, segmentation, keypoint, and a variety of dense pixel tasks require access to feature maps from the backbone network at multiple scales. This is often done by modifying the original classification network. Since each network varies quite a bit in structure, it's not uncommon to see only a few backbones supported in any given obj detection or segmentation library.

`timm` allows a consistent interface for creating any of the included models as feature backbones that output feature maps for selected levels.

A feature backbone can be created by adding the argument `features_only=True` to any `create_model` call. By default most models will output feature maps for 5 strides (not all models have that many), with the first feature map starting at a stride of 2 (some start at 1 or 4).

### Create a feature map extraction model
```python hl_lines="3"
import torch
import timm
m = timm.create_model('resnest26d', features_only=True, pretrained=True)
o = m(torch.randn(2, 3, 224, 224))
for x in o:
print(x.shape)
```
Output:
```text
torch.Size([2, 64, 112, 112])
torch.Size([2, 256, 56, 56])
torch.Size([2, 512, 28, 28])
torch.Size([2, 1024, 14, 14])
torch.Size([2, 2048, 7, 7])
```

### Query the feature information

After a feature backbone has been created, it can be queried to provide channel or resolution reduction information to the downstream heads without requiring static config or hardcoded constants. The `.feature_info` attribute is a class encapsulating the information about the feature extraction points.

```python hl_lines="3 4"
import torch
import timm
m = timm.create_model('regnety_032', features_only=True, pretrained=True)
print(f'Feature channels: {m.feature_info.channels()}')
o = m(torch.randn(2, 3, 224, 224))
for x in o:
print(x.shape)
```
Output:
```text
Feature channels: [32, 72, 216, 576, 1512]
torch.Size([2, 32, 112, 112])
torch.Size([2, 72, 56, 56])
torch.Size([2, 216, 28, 28])
torch.Size([2, 576, 14, 14])
torch.Size([2, 1512, 7, 7])
```
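
Besides `channels()`, `feature_info` can report the accumulated stride at each extraction point via `reduction()` (used again below), and which module each feature map is taken from. A small sketch; `module_name()` is an assumed accessor, check your `timm` version:
```python
import timm

m = timm.create_model('regnety_032', features_only=True, pretrained=True)
# accumulated stride (input resolution / feature resolution) at each point
print(f'Feature reduction: {m.feature_info.reduction()}')
# module each feature map is extracted from (assumed accessor)
print(f'Feature modules: {m.feature_info.module_name()}')
```
For this model, `reduction()` returns `[2, 4, 8, 16, 32]`, matching the feature map shapes above.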

### Select specific feature levels or limit the stride

There are two additional creation arguments impacting the output features.

* `out_indices` selects which indices to output
* `output_stride` limits the feature output stride of the network (also works in classification mode BTW)

`out_indices` is supported by all models, but not all models have the same index to feature stride mapping. Look at the code or check `feature_info` to compare. The out indices generally correspond to the `C(i+1)th` feature level (a `2^(i+1)` reduction). For most models, index 0 is the stride 2 features, and index 4 is stride 32.

`output_stride` is achieved by converting layers to use dilated convolutions. Doing so is not always straightforward; some networks only support `output_stride=32`.

```python hl_lines="3 4 5"
import torch
import timm
m = timm.create_model('ecaresnet101d', features_only=True, output_stride=8, out_indices=(2, 4), pretrained=True)
print(f'Feature channels: {m.feature_info.channels()}')
print(f'Feature reduction: {m.feature_info.reduction()}')
o = m(torch.randn(2, 3, 320, 320))
for x in o:
print(x.shape)
```
Output:
```text
Feature channels: [512, 2048]
Feature reduction: [8, 8]
torch.Size([2, 512, 40, 40])
torch.Size([2, 2048, 40, 40])
```
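
`output_stride` also works in classification mode, as noted above; a minimal sketch verifying the effect through `forward_features()`:
```python
import torch
import timm

# a regular classification model, but with the backbone dilated to stride 8
m = timm.create_model('resnet50', pretrained=True, output_stride=8)
o = m.forward_features(torch.randn(2, 3, 224, 224))
print(f'Unpooled shape at output_stride=8: {o.shape}')
```
Output:
```text
Unpooled shape at output_stride=8: torch.Size([2, 2048, 28, 28])
```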