Visualization of ConvNets in PyTorch - Python

In deep learning, visualizing Convolutional Neural Networks (ConvNets or CNNs) helps you understand what the network has learned, especially in terms of feature maps and filters. A popular tool for visualizing CNNs in PyTorch is TensorBoard, which integrates with PyTorch through the torch.utils.tensorboard module.
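Before the full example, here is a minimal sketch of the torch.utils.tensorboard API (the tag name and scalar value are placeholders, not part of the original example):

from torch.utils.tensorboard import SummaryWriter

# Writes event files under ./runs/ by default; pass log_dir= to override
writer = SummaryWriter()
writer.add_scalar('demo/loss', 0.5, global_step=0)  # log one point of a scalar curve
writer.close()  # flush pending events to disk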

Below, we'll provide a basic example of training a simple CNN on the MNIST dataset and visualizing its filters and feature maps using TensorBoard.

  • First, install the necessary packages:
pip install torch torchvision tensorboard 
  • Code for training and visualization:
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in forward()
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter

# Define a simple CNN
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)   # 1 input channel, 6 output channels, 3x3 kernel
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # MNIST: 28x28 -> conv1 -> 26x26 -> pool -> 13x13 -> conv2 -> 11x11 -> pool -> 5x5
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Data loading
transform = transforms.Compose([transforms.ToTensor()])
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)

# Initialize the network, loss, and optimizer
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# TensorBoard setup (writes to ./runs by default)
writer = SummaryWriter()

# Train the network and write to TensorBoard
for epoch in range(5):
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Log every 1000 mini-batches
        if i % 1000 == 0:
            step = epoch * len(trainloader) + i
            writer.add_scalar('Loss', loss.item(), step)
            # Visualize conv1 feature maps for the first image in the batch:
            # detach from the graph and move channels into the batch dimension,
            # so each of the 6 maps is logged as a grayscale image
            fmaps = net.conv1(inputs).detach()[0].unsqueeze(1)  # shape (6, 1, 26, 26)
            writer.add_images('Feature maps', fmaps, global_step=step)
            # Visualize conv1 filters (kernels), shape (6, 1, 3, 3)
            # (a tidier single-image grid with make_grid is sketched after these steps)
            writer.add_images('Filters', net.conv1.weight.data, global_step=step)

writer.close()
  • Run TensorBoard:
tensorboard --logdir=runs 
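
For a tidier view of the filters, torchvision's make_grid can tile them into a single normalized image. This is a small sketch assuming the net and writer from the example above; the tag name is a placeholder:

import torchvision.utils as vutils

# Tile the 6 conv1 filters (shape (6, 1, 3, 3)) into one grid image,
# rescaling values into [0, 1] so negative weights stay visible
grid = vutils.make_grid(net.conv1.weight.data, nrow=3, normalize=True, padding=1)
writer.add_image('Filters/grid', grid)  # add_image takes a single (C, H, W) image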

Now, in TensorBoard, you should see the loss curve, the filters (kernels), and the feature maps at each logged step. Adjust the code to your needs, especially if you are using a more complex model or a different dataset.
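For deeper models where calling a layer directly is awkward, a forward hook can capture intermediate feature maps during a normal forward pass. Here is a hedged sketch, assuming the same net, trainloader, and writer from the example above:

# Capture conv2's output via a forward hook instead of re-running the layer
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

handle = net.conv2.register_forward_hook(save_activation('conv2'))

images, _ = next(iter(trainloader))
net(images)  # the hook fills activations['conv2'] during this forward pass

# Log each of the 16 conv2 maps of the first image as a grayscale image
fmaps = activations['conv2'][0].unsqueeze(1)  # shape (16, 1, 11, 11)
writer.add_images('Feature maps/conv2', fmaps)

handle.remove()  # detach the hook when done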

