Constructor to create PyTorch models.
```bash
pip install model-constructor
```
Or install from the repo:
```bash
pip install git+https://github.com/ayasyrev/model_constructor.git
```
First import the constructor class, then create a model constructor object.
After that you can change any part of the model.
```python
from model_constructor import ModelConstructor

mc = ModelConstructor()
```

Check the base parameters:
```python
mc
```

output:
```
ModelConstructor
  in_chans: 3, num_classes: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  act_fn: ReLU, sa: False, se: False
  stem sizes: [32, 32, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]
```
Check all the parameters with the print_cfg method:
```python
mc.print_cfg()
```

output:
```
ModelConstructor(
  in_chans=3
  num_classes=1000
  block='ResBlock'
  conv_layer='ConvBnAct'
  block_sizes=[64, 128, 256, 512]
  layers=[2, 2, 2, 2]
  norm='BatchNorm2d'
  act_fn='ReLU'
  pool="AvgPool2d {'kernel_size': 2, 'ceil_mode': True}"
  expansion=1
  groups=1
  bn_1st=True
  zero_bn=True
  stem_sizes=[32, 32, 64]
  stem_pool="MaxPool2d {'kernel_size': 3, 'stride': 2, 'padding': 1}"
  init_cnn='init_cnn'
  make_stem='make_stem'
  make_layer='make_layer'
  make_body='make_body'
  make_head='make_head')
```

Now we have a model constructor with the default settings of xresnet18, and we can get the model by calling it.
```python
model = mc()
model
```

output:
ModelConstructor( (stem): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (stem_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) ) (body): Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) ) (head): Sequential( (pool): AdaptiveAvgPool2d(output_size=1) (flat): Flatten(start_dim=1, end_dim=-1) (fc): Linear(in_features=512, out_features=1000, bias=True) ) )
If you want to change the model, just change the constructor parameters.
Let's create xresnet50.
```python
mc.expansion = 4
mc.layers = [3, 4, 6, 3]
```

Now we can look at the model parts: stem, body, and head.
```python
mc.body
```

output:
Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (id_conv): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): 
Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_3): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 
256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_3): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_4): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_5): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), 
bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) (bl_2): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): ReLU(inplace=True) ) ) )
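
As a quick sanity check of the new configuration, we can build the model and count its trainable parameters (a minimal sketch; the exact number is not asserted here):

```python
# Build the xresnet50-like model from the modified constructor and count parameters.
model = mc()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params:,} trainable parameters")
```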
Alternatively, we can create a config first and then create a constructor from it.
```python
from model_constructor import ModelCfg

cfg = ModelCfg()
print(cfg)
```

output:
```
in_chans=3 num_classes=1000 block='ResBlock' conv_layer='ConvBnAct' block_sizes=[64, 128, 256, 512] layers=[2, 2, 2, 2] norm='BatchNorm2d' act_fn='ReLU' pool="AvgPool2d {'kernel_size': 2, 'ceil_mode': True}" expansion=1 groups=1 bn_1st=True zero_bn=True stem_sizes=[32, 32, 64] stem_pool="MaxPool2d {'kernel_size': 3, 'stride': 2, 'padding': 1}" init_cnn='init_cnn' make_stem='make_stem' make_layer='make_layer' make_body='make_body' make_head='make_head'
```

Now we can create a constructor from the config:
```python
mc = ModelConstructor.from_cfg(cfg)
mc
```

output:
```
ModelConstructor
  in_chans: 3, num_classes: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  act_fn: ReLU, sa: False, se: False
  stem sizes: [32, 32, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]
```
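The config fields can also be tweaked before the constructor is built. A minimal sketch (num_classes=10 is just an illustrative value):

```python
from model_constructor import ModelCfg, ModelConstructor

cfg = ModelCfg()
cfg.num_classes = 10  # hypothetical change: a 10-class head instead of 1000
mc = ModelConstructor.from_cfg(cfg)
model = mc()
```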
The main purpose of this module is fast and easy model modification. For more modifications used to beat the Imagenette leaderboard, such as adding MaxBlurPool and changing the ResBlock, see the notebook.
But for now, let's create a model like mxresnet50 from the fastai forums thread.
Let's create the mxresnet constructor.
```python
mc = ModelConstructor(name='MxResNet')
```

Then let's modify the stem.
```python
mc.stem_sizes = [3, 32, 64, 64]
```

Now let's change the activation function to Mish (see the link to the forum discussion).
Mish is available in model_constructor.activations, but since PyTorch 1.9 you can take it from torch:
```python
# from model_constructor.activations import Mish
from torch.nn import Mish

mc.act_fn = Mish
mc
```

output:
```
MxResNet
  in_chans: 3, num_classes: 1000
  expansion: 1, groups: 1, dw: False, div_groups: None
  act_fn: Mish, sa: False, se: False
  stem sizes: [3, 32, 64, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [2, 2, 2, 2]
```
Here is the model:
```python
mc()
```

output:
MxResNet( stem_sizes: [3, 32, 64, 64], act_fn: Mish (stem): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(3, 3, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_3): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (stem_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) ) (body): Sequential( (l_0): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) ) (l_1): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) ) (l_2): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) ) (l_3): Sequential( (bl_0): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) (id_conv): ConvBnAct( (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) (bl_1): ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) ) ) ) (head): Sequential( (pool): AdaptiveAvgPool2d(output_size=1) (flat): Flatten(start_dim=1, end_dim=-1) (fc): Linear(in_features=512, out_features=1000, bias=True) ) )
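
To confirm that the activation swap took effect, we can check a submodule of the constructed model (a minimal sketch; the attribute path follows the printed structure above):

```python
from torch import nn

model = mc()
# The stem's first ConvBnAct should now carry Mish instead of ReLU.
assert isinstance(model.stem.conv_0.act_fn, nn.Mish)
```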
Now let's make MxResNet50.
```python
mc.expansion = 4
mc.layers = [3, 4, 6, 3]
mc.name = 'mxresnet50'
```

Now we have the mxresnet50 constructor. We can inspect every part of it, and calling it gives us the model.
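For example, a forward pass on a dummy batch checks that the model builds and produces the expected output shape (a minimal sketch; the 224x224 input size is just an illustrative choice):

```python
import torch

model = mc()
x = torch.randn(2, 3, 224, 224)  # dummy batch: 2 images, 3 channels
with torch.no_grad():
    out = model(x)
print(out.shape)  # expected: torch.Size([2, 1000])
```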
```python
mc
```

output:
```
mxresnet50
  in_chans: 3, num_classes: 1000
  expansion: 4, groups: 1, dw: False, div_groups: None
  act_fn: Mish, sa: False, se: False
  stem sizes: [3, 32, 64, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [3, 4, 6, 3]
```
```python
mc.stem.conv_1
```

output:
```
ConvBnAct(
  (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (act_fn): Mish(inplace=True)
)
```
```python
mc.body.l_0.bl_0
```

output:
ResBlock( (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): Mish(inplace=True) ) (conv_2): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): Sequential( (id_conv): ConvBnAct( (conv): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (act_fn): Mish(inplace=True) )
We can also get the model in a more direct way:
```python
mc = ModelConstructor(
    name="MxResNet",
    act_fn=Mish,
    layers=[3, 4, 6, 3],
    expansion=4,
    stem_sizes=[32, 64, 64],
)
model = mc()
```

Or create it from a config:
```python
mc = ModelConstructor.from_cfg(
    ModelCfg(
        name="MxResNet",
        act_fn=Mish,
        layers=[3, 4, 6, 3],
        expansion=4,
        stem_sizes=[32, 64, 64],
    )
)
model = mc()
```

Now let's change ResBlock to YaResBlock (Yet another ResNet, formerly NewResBlock), available in the library since version 0.1.0.
```python
from model_constructor.yaresnet import YaResBlock

mc = ModelConstructor(name="YaResNet")
mc.block = YaResBlock
```

Or in one line:
```python
mc = ModelConstructor(name="YaResNet", block=YaResBlock)
```

That's all. Now we have a YaResNet constructor.
```python
mc.print_cfg()
```

output:
```
ModelConstructor(
  name='YaResNet'
  in_chans=3
  num_classes=1000
  block='YaResBlock'
  conv_layer='ConvBnAct'
  block_sizes=[64, 128, 256, 512]
  layers=[2, 2, 2, 2]
  norm='BatchNorm2d'
  act_fn='ReLU'
  pool="AvgPool2d {'kernel_size': 2, 'ceil_mode': True}"
  expansion=1
  groups=1
  bn_1st=True
  zero_bn=True
  stem_sizes=[32, 32, 64]
  stem_pool="MaxPool2d {'kernel_size': 3, 'stride': 2, 'padding': 1}"
  init_cnn='init_cnn'
  make_stem='make_stem'
  make_layer='make_layer'
  make_body='make_body'
  make_head='make_head')
```

Let's see what we have.
```python
mc.body.l_1.bl_0
```

output:
YaResBlock( (reduce): AvgPool2d(kernel_size=2, stride=2, padding=0) (convs): Sequential( (conv_0): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (act_fn): ReLU(inplace=True) ) (conv_1): ConvBnAct( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (id_conv): ConvBnAct( (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (merge): ReLU(inplace=True) )
Let's create a ResNet34-like model constructor:
```python
from torch import nn


class YaResnet34(ModelConstructor):
    block: type[nn.Module] = YaResBlock
    layers: list[int] = [3, 4, 6, 3]


mc = YaResnet34()
mc.print_cfg()
```

output:
```
YaResnet34(
  in_chans=3
  num_classes=1000
  block='YaResBlock'
  conv_layer='ConvBnAct'
  block_sizes=[64, 128, 256, 512]
  layers=[3, 4, 6, 3]
  norm='BatchNorm2d'
  act_fn='ReLU'
  pool="AvgPool2d {'kernel_size': 2, 'ceil_mode': True}"
  expansion=1
  groups=1
  bn_1st=True
  zero_bn=True
  stem_sizes=[32, 32, 64]
  stem_pool="MaxPool2d {'kernel_size': 3, 'stride': 2, 'padding': 1}"
  init_cnn='init_cnn'
  make_stem='make_stem'
  make_layer='make_layer'
  make_body='make_body'
  make_head='make_head')
```

And a ResNet50-like model can be inherited from YaResnet34:
```python
class YaResnet50(YaResnet34):
    expansion: int = 4


mc = YaResnet50()
mc
```

output:
```
YaResnet50
  in_chans: 3, num_classes: 1000
  expansion: 4, groups: 1, dw: False, div_groups: None
  act_fn: ReLU, sa: False, se: False
  stem sizes: [32, 32, 64], stride on 0
  body sizes [64, 128, 256, 512]
  layers: [3, 4, 6, 3]
```
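As with ModelConstructor itself, fields can also be overridden when the subclass constructor is created. A minimal sketch (num_classes=10 is just an illustrative value):

```python
mc = YaResnet50()
model = mc()  # the default 1000-class model

# Hypothetical: override a field at creation time, e.g. a 10-class head.
mc_small = YaResnet50(num_classes=10)
model_small = mc_small()
```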