model_constructor

Constructor to create PyTorch models.

Install

pip install model-constructor

Or install from the repo:

pip install git+https://github.com/ayasyrev/model_constructor.git

How to use

First, import the constructor class, then create a model constructor object.

Now you can change every part of the model.

from model_constructor import ModelConstructor
mc = ModelConstructor()
mc
MC constructor
  in_chans: 3, num_classes: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  sa: False, se: False
  stem sizes: [3, 32, 32, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]

Now we have a model constructor with default settings equivalent to xresnet18. We get the model by calling the constructor.

model = mc()
model
Output details ...
Sequential( MC (stem): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (stem_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) ) (body): Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) ) (head): Sequential( (pool): AdaptiveAvgPool2d(output_size=1) (flat): Flatten(start_dim=1, end_dim=-1) (fc): Linear(in_features=512, out_features=1000, bias=True) ) ) 
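
Since the result is a plain torch nn.Sequential, a quick sanity check (not part of the original docs) is to run a dummy batch through it; this sketch assumes torch is installed and a standard 224x224 input size:

import torch

model = mc()
x = torch.randn(2, 3, 224, 224)  # dummy batch: two 3-channel images, matching in_chans=3
out = model(x)
print(out.shape)  # expected: torch.Size([2, 1000]), matching num_classes=1000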

If you want to change the model, just change the constructor parameters.
Let's create xresnet50.

mc.expansion = 4
mc.layers = [3,4,6,3]

Now we can look at the model body, and if we call the constructor we get a PyTorch model.

mc.body
Output details ...
Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (id_conv): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): 
Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_3): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 
256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_3): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_4): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_5): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), 
bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) ) 
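
As a rough check (an illustrative sketch, not from the original docs) that the new settings give a ResNet-50-sized network, you can count the parameters; the total should be in the same ballpark as a standard ResNet-50, around 25M:

model = mc()  # expansion=4, layers=[3, 4, 6, 3] -> ResNet-50-sized model
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,}")  # roughly 25M parameters, comparable to a standard ResNet-50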

More modifications

The main purpose of this module is fast and easy model modification. Here is a link to more modifications that beat the Imagenette leaderboard by adding MaxBlurPool and changing the ResBlock: https://github.com/ayasyrev/imagenette_experiments/blob/master/ResnetTrick_create_model_fit.ipynb

But for now, let's create a model like mxresnet50 from the fastai forums thread: https://forums.fast.ai/t/how-we-beat-the-5-epoch-imagewoof-leaderboard-score-some-new-techniques-to-consider

Let's create an mxresnet constructor.

mc = ModelConstructor(name='MxResNet')

Then let's modify the stem.

mc.stem_sizes = [3,32,64,64]

Now let's change the activation function to Mish. Here is a link to the forum discussion: https://forums.fast.ai/t/meet-mish-new-activation-function-possible-successor-to-relu
Mish is available in model_constructor.activations, but since PyTorch 1.9 you can take it from torch:

# from model_constructor.activations import Mish
from torch.nn import Mish
mc.act_fn = Mish()
mc
MxResNet constructor
  in_chans: 3, num_classes: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  sa: False, se: False
  stem sizes: [3, 32, 64, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]
mc()
Output details ...
Sequential( MxResNet (stem): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_2): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (stem_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) ) (body): Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): 
Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) ) ) (head): Sequential( (pool): AdaptiveAvgPool2d(output_size=1) (flat): Flatten(start_dim=1, end_dim=-1) (fc): Linear(in_features=512, out_features=1000, bias=True) ) ) 
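
Other activations should work the same way; a minimal sketch (illustrative only, with a hypothetical constructor name), assuming any nn.Module activation instance can be assigned to act_fn:

from torch import nn
from model_constructor import ModelConstructor

mc_gelu = ModelConstructor(name='GeluResNet')  # hypothetical name, for illustration
mc_gelu.act_fn = nn.GELU()  # assumption: any activation module instance works here, as with Mish()
model = mc_gelu()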

MxResNet50

Now let's make MxResNet50.

mc.expansion = 4
mc.layers = [3,4,6,3]
mc.name = 'mxresnet50'

Now we have the mxresnet50 constructor.
We can inspect every part of it.
And after calling it, we get the model.

mc
mxresnet50 constructor
  in_chans: 3, num_classes: 1000
  expansion: 4, groups: 1, dw: False, div_groups: None
  sa: False, se: False
  stem sizes: [3, 32, 64, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [3, 4, 6, 3]
mc.stem.conv_1
Output details ...
ConvBnAct( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) 
mc.body.l_0.bl_0
Output details ...
ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (id_conv): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish() ) 

We can also create the model in a more direct way:

mc = ModelConstructor(name="MxResNet", act_fn=Mish(), layers=[3,4,6,3], expansion=4, stem_sizes=[32,64,64])
model = mc()
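
The result is an ordinary PyTorch model, so the usual workflow applies; a minimal sketch (not from the original docs) for moving it to a device and saving its weights, with an example file name:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()
torch.save(model.state_dict(), "mxresnet50.pth")  # example file name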

YaResNet

Now let's change the ResBlock to YaResBlock (Yet another ResNet, formerly NewResBlock), available in the library since version 0.1.0.

from model_constructor.yaresnet import YaResBlock
mc.block = YaResBlock

That's all. Now we have a YaResNet constructor.

mc.name = 'YaResNet'
mc
Output details ...
YaResNet constructor
  in_chans: 3, num_classes: 1000
  expansion: 4, groups: 1, dw: False, div_groups: None
  sa: False, se: False
  stem sizes: [3, 32, 64, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [3, 4, 6, 3]

Let's see what we have.

mc.body.l_1.bl_0
Output details ...
YaResBlock( (reduce): AvgPool2d(kernel_size=2, stride=2, padding=0) (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish() ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (merge): Mish() ) 
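
The block swap does not change how the model is used; a minimal sketch (not from the original docs) checking that the YaResBlock-based constructor still builds a working model:

import torch

print(mc.block)  # the constructor now builds every body block from YaResBlock
model = mc()
x = torch.randn(1, 3, 224, 224)
print(model(x).shape)  # expected: torch.Size([1, 1000])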

First version

The first version is deprecated, but it is still here for compatibility.

from model_constructor.net import Net
mc = Net()
mc
Net constructor
  c_in: 3, c_out: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  sa: False, se: False
  stem sizes: [3, 32, 32, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]
