## Tutorials
* 1 - [Multilayer Perceptron](https://github.com/bentrevett/pytorch-image-classification/blob/master/1_mlp.ipynb) [Open In Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/1_mlp.ipynb)
This tutorial provides an introduction to PyTorch and TorchVision. We'll learn how to: load datasets, augment data, define a multilayer perceptron (MLP), train a model, view the outputs of our model, visualize the model's representations, and view the weights of the model. The experiments will be carried out on the MNIST dataset - a set of 28x28 handwritten grayscale digits.
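    As a flavour of what the notebook covers, here is a minimal sketch of loading MNIST with TorchVision and defining a small MLP. The layer sizes and names below are illustrative placeholders, not the notebook's exact values.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

# Illustrative sketch only - hyperparameters and names are placeholders.
train_data = datasets.MNIST(root='.data', train=True, download=True,
                            transform=transforms.ToTensor())

class MLP(nn.Module):
    def __init__(self, input_dim=28 * 28, hidden_dim=250, output_dim=10):
        super().__init__()
        self.fc_1 = nn.Linear(input_dim, hidden_dim)
        self.fc_2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = x.view(x.shape[0], -1)   # flatten [batch, 1, 28, 28] -> [batch, 784]
        h = F.relu(self.fc_1(x))
        return self.fc_2(h)          # raw logits, one per digit class
```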
* 2 - [LeNet](https://github.com/bentrevett/pytorch-image-classification/blob/master/2_lenet.ipynb) [Open In Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/2_lenet.ipynb)
In this tutorial we'll implement the classic [LeNet](http://yann.lecun.com/exdb/lenet/) architecture. We'll look into convolutional neural networks and how convolutional layers and subsampling (aka pooling) layers work.
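    For context, a LeNet-style model in PyTorch looks roughly like the sketch below: convolutions extract local features and max pooling subsamples them. The exact layer configuration here assumes 1x28x28 inputs and is illustrative rather than the notebook's exact definition.

```python
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self, output_dim=10):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
        self.fc_1 = nn.Linear(16 * 4 * 4, 120)
        self.fc_2 = nn.Linear(120, 84)
        self.fc_3 = nn.Linear(84, output_dim)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 24x24 -> 12x12
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 12x12 -> 8x8 -> 4x4
        x = x.view(x.shape[0], -1)                  # flatten to [batch, 256]
        x = F.relu(self.fc_1(x))
        x = F.relu(self.fc_2(x))
        return self.fc_3(x)
```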
* 3 - [AlexNet](https://github.com/bentrevett/pytorch-image-classification/blob/master/3_alexnet.ipynb) [Open In Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/3_alexnet.ipynb)
In this tutorial we will implement [AlexNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf), the convolutional neural network architecture that helped start the current interest in deep learning. We will move on to the CIFAR10 dataset - 32x32 color images in ten classes. We show: how to define architectures using `nn.Sequential`, how to initialize the parameters of your neural network, and how to use the learning rate finder to determine a good initial learning rate.
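    A rough sketch of two of the ideas mentioned above, building layers with `nn.Sequential` and explicitly initializing parameters, is shown below; the layer sizes and initialization scheme are placeholders, not the notebook's exact choices.

```python
import torch.nn as nn

# A small AlexNet-style feature extractor for 3x32x32 CIFAR10 images, built with
# nn.Sequential. Sizes are illustrative.
features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(64, 192, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2),
)

def initialize_parameters(m):
    # One possible scheme: Kaiming init for conv layers, Xavier for linear layers.
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight.data, nonlinearity='relu')
        nn.init.constant_(m.bias.data, 0)
    elif isinstance(m, nn.Linear):
        nn.init.xavier_normal_(m.weight.data)
        nn.init.constant_(m.bias.data, 0)

features.apply(initialize_parameters)  # recursively applies to every submodule
```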
* 4 - [VGG](https://github.com/bentrevett/pytorch-image-classification/blob/master/4_vgg.ipynb) [Open In Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/4_vgg.ipynb)
This tutorial covers implementing the [VGG](https://arxiv.org/abs/1409.1556) model. However, instead of training the model from scratch, we will load a VGG model pre-trained on the [ImageNet](http://www.image-net.org/challenges/LSVRC/) dataset and show how to perform transfer learning to adapt its weights to the CIFAR10 dataset using a technique called discriminative fine-tuning. We'll also explain how adaptive pooling layers and batch normalization work.
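    A rough sketch of that transfer-learning setup: load an ImageNet pre-trained VGG from torchvision, then give the pre-trained feature extractor a smaller learning rate than the classifier head (the essence of discriminative fine-tuning). The model variant and learning rates below are placeholder values.

```python
import torch.optim as optim
import torchvision.models as models

# Load a VGG11 with batch norm pre-trained on ImageNet (newer torchvision versions
# prefer the weights= argument over pretrained=True).
model = models.vgg11_bn(pretrained=True)

# Discriminative fine-tuning: earlier, more generic layers get a smaller learning
# rate than the later, task-specific layers.
params = [
    {'params': model.features.parameters(), 'lr': 1e-4 / 10},
    {'params': model.classifier.parameters()},
]
optimizer = optim.Adam(params, lr=1e-4)
```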
* 5 - [ResNet](https://github.com/bentrevett/pytorch-image-classification/blob/master/5_resnet.ipynb) [Open In Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/5_resnet.ipynb)
In this tutorial we will implement the [ResNet](https://arxiv.org/abs/1512.03385) model. We'll show how to load your own dataset, using the [CUB200](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset as an example, and how to use learning rate schedulers, which dynamically alter the learning rate of your model whilst training. Specifically, we'll use the one cycle policy introduced in [this](https://arxiv.org/abs/1803.09820) paper, which is now commonly used for training computer vision models.
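    As a sketch of the scheduling idea, PyTorch provides `OneCycleLR`, which implements the one cycle policy; the model, learning rate and step counts below are placeholders.

```python
import torch.optim as optim
import torchvision.models as models
from torch.optim.lr_scheduler import OneCycleLR

model = models.resnet50(pretrained=True)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

EPOCHS = 10
STEPS_PER_EPOCH = 100  # in practice, len(train_loader)

# Learning rate ramps up to max_lr and then anneals back down over total_steps.
scheduler = OneCycleLR(optimizer,
                       max_lr=1e-3,
                       total_steps=EPOCHS * STEPS_PER_EPOCH)

# Inside the training loop, step the scheduler after each optimizer update:
#   optimizer.step()
#   scheduler.step()
```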