PyTorch Cheat Sheet

Using PyTorch 1.2, torchaudio 0.3, torchtext 0.4, and torchvision 0.4.

General PyTorch and model I/O

# loading PyTorch
import torch

# cuda
import torch.cuda as tCuda # various functions and settings
torch.backends.cudnn.deterministic = True # deterministic ML?
torch.backends.cudnn.benchmark = False # deterministic ML?
torch.cuda.is_available() # check if cuda is available
tensor.cuda() # moving tensor to gpu
tensor.cpu() # moving tensor to cpu
tensor.to(device) # copy tensor to device
torch.device('cuda') # or 'cuda:0', 'cuda:1' if multiple devices
torch.device('cpu') # default

# static computation graph/C++ export preparation
from torch.jit import script, trace
@script # compile a function/module to TorchScript
trace(model, example_input) # trace with an example input

# load and save a model
torch.save(model, 'model_file')
model = torch.load('model_file')
model.eval() # set to inference
torch.save(model.state_dict(), 'model_file') # only state dict
model = ModelClass()
model.load_state_dict(torch.load('model_file'))

# save to onnx
torch.onnx.export
torch.onnx.export_to_pretty_string

# load onnx model
import onnx
model = onnx.load('model.onnx')
# check model
onnx.checker.check_model(model)
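
For example, the recommended state-dict round trip looks like this (the model class and file name are placeholders):

import torch
import torch.nn as nn

# a minimal placeholder model
model = nn.Linear(10, 2)

# save only the learned parameters (recommended)
torch.save(model.state_dict(), 'model.pt')

# restore: re-create the architecture, then load the weights
model = nn.Linear(10, 2)
model.load_state_dict(torch.load('model.pt'))
model.eval() # switch to inference mode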
Audio

# pre-trained models and domain-specific utils
import torchaudio

# load and save audio
stream, sample_rate = torchaudio.load('file')
torchaudio.save('file', stream, sample_rate)
stream, sample_rate = torchaudio.load_wav('file') # 16 bit wav files only

# datasets (can be used with torch.utils.data.DataLoader)
import torchaudio.datasets as aDatasets
aDatasets.VCTK('folder_for_storage', download=True)
aDatasets.YESNO('folder_for_storage', download=True)

# transforms
import torchaudio.transforms as aTransforms
aTransforms.Spectrogram
aTransforms.AmplitudeToDB
aTransforms.MelScale
aTransforms.MelSpectrogram
aTransforms.MFCC
aTransforms.MuLawEncoding
aTransforms.MuLawDecoding
aTransforms.Resample

# kaldi support
import torchaudio.compliance.kaldi as aKaldi
import torchaudio.kaldi_io as aKaldiIO
aKaldi.spectrogram
aKaldi.fbank
aKaldi.resample_waveform
aKaldiIO.read_vec_int_ark
aKaldiIO.read_vec_flt_scp
aKaldiIO.read_vec_flt_ark
aKaldiIO.read_mat_scp
aKaldiIO.read_mat_ark

# functional/direct function access
import torchaudio.functional as aFunctional

# sox effects/passing data between Python and C++
import torchaudio.sox_effects as aSox_effects
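
A minimal usage sketch (the file name is a placeholder; default MelSpectrogram parameters are assumed):

import torchaudio
import torchaudio.transforms as aTransforms

# load a waveform as a (channels, frames) float tensor
waveform, sample_rate = torchaudio.load('speech.wav')

# turn it into a mel spectrogram
mel = aTransforms.MelSpectrogram(sample_rate=sample_rate)(waveform)
print(mel.shape) # (channels, n_mels, time)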
Text

import torchtext

# various data-related functions and classes
import torchtext.data as tData
tData.Batch
tData.Dataset
tData.TabularDataset
tData.Example
tData.Field
tData.ReversibleField
tData.SubwordField
tData.NestedField
tData.LabelField
tData.Iterator
tData.BucketIterator
tData.BPTTIterator
tData.Pipeline # similar to vTransforms and sklearn's pipeline
tData.get_tokenizer # function
tData.interleave_keys # function

# datasets
import torchtext.datasets as tDatasets
# sentiment analysis
tDatasets.SST
tDatasets.IMDB

# text classification
tDatasets.TextClassificationDataset # subclass of all datasets below
tDatasets.AG_NEWS
tDatasets.SogouNews
tDatasets.DBpedia
tDatasets.YelpReviewPolarity
tDatasets.YelpReviewFull
tDatasets.YahooAnswers
tDatasets.AmazonReviewPolarity
tDatasets.AmazonReviewFull

# question classification
tDatasets.TREC

# entailment
tDatasets.SNLI
tDatasets.MultiNLI

# language modeling
tDatasets.LanguageModelingDataset # subclass
tDatasets.WikiText2
tDatasets.WikiText103
tDatasets.PennTreebank

# machine translation
tDatasets.TranslationDataset # subclass
tDatasets.Multi30k
tDatasets.IWSLT
tDatasets.WMT14

# sequence tagging
tDatasets.SequenceTaggingDataset # subclass
tDatasets.UDPOS
tDatasets.CoNLL2000Chunking

# question answering
tDatasets.BABI20

# vocabulary and pre-trained embeddings
import torchtext.vocab as tVocab
tVocab.Vocab # create a vocabulary
tVocab.SubwordVocab # create subvocabulary
tVocab.Vectors # word vectors
tVocab.GloVe # GloVe embeddings
tVocab.FastText # FastText embeddings
tVocab.CharNGram # character n-gram
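
As a sketch, pre-trained GloVe vectors can be queried directly (the vectors are downloaded on first use; name and dim are standard GloVe variants):

import torchtext.vocab as tVocab

# 50-dimensional GloVe vectors trained on 6B tokens
glove = tVocab.GloVe(name='6B', dim=50)
vec = glove['king'] # a 50-dimensional tensor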
Vision

import torchvision

# datasets (can be used with torch.utils.data.DataLoader)
import torchvision.datasets as vDatasets
vDatasets.CelebA
vDatasets.CIFAR10
vDatasets.CIFAR100
vDatasets.Cityscapes
vDatasets.CocoCaptions
vDatasets.CocoDetection
vDatasets.EMNIST
vDatasets.FakeData # randomly generated images
vDatasets.FashionMNIST
vDatasets.Flickr8k
vDatasets.Flickr30k
vDatasets.ImageFolder # data loader for a certain image folder structure
vDatasets.DatasetFolder # data loader for a certain folder structure
vDatasets.ImageNet
vDatasets.KMNIST
vDatasets.LSUN
vDatasets.MNIST
vDatasets.Omniglot
vDatasets.PhotoTour
vDatasets.QMNIST
vDatasets.SBD
vDatasets.SBU
vDatasets.STL10
vDatasets.SVHN
vDatasets.USPS
vDatasets.VOCSegmentation
vDatasets.VOCDetection
vDatasets.Kinetics400
vDatasets.HMDB51
vDatasets.UCF101

# video IO
import torchvision.io as vIO
vIO.read_video('file', start_pts, end_pts)
vIO.write_video('file', video, fps, video_codec)

# save a tensor as an image file
torchvision.utils.save_image(image, 'file')
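
A typical dataset-plus-loader combination (the 'data/' folder is a placeholder):

import torchvision.datasets as vDatasets
import torchvision.transforms as vTransforms
from torch.utils.data import DataLoader

# download CIFAR10 and convert images to tensors
train_set = vDatasets.CIFAR10('data/', train=True, download=True,
                              transform=vTransforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

for images, labels in train_loader:
    pass # images: (64, 3, 32, 32), labels: (64,)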

# pretrained models/model architectures
import torchvision.models as vModels
# models can be constructed with random weights ()
# or pretrained (pretrained=True)

# classification
vModels.alexnet(pretrained=True)
vModels.densenet121()
vModels.densenet161()
vModels.densenet169()
vModels.densenet201()
vModels.googlenet()
vModels.inception_v3()
vModels.mnasnet0_5()
vModels.mnasnet0_75()
vModels.mnasnet1_0()
vModels.mnasnet1_3()
vModels.mobilenet_v2()
vModels.resnet18()
vModels.resnet34()
vModels.resnet50()
vModels.resnet101()
vModels.resnet152()
vModels.resnext50_32x4d()
vModels.resnext101_32x8d()
vModels.wide_resnet50_2()
vModels.wide_resnet101_2()
vModels.shufflenet_v2_x0_5()
vModels.shufflenet_v2_x1_0()
vModels.shufflenet_v2_x1_5()
vModels.shufflenet_v2_x2_0()
vModels.squeezenet1_0()
vModels.squeezenet1_1()
vModels.vgg11()
vModels.vgg11_bn()
vModels.vgg13()
vModels.vgg13_bn()
vModels.vgg16()
vModels.vgg16_bn()
vModels.vgg19()
vModels.vgg19_bn()

# semantic segmentation
vModels.segmentation.fcn_resnet50()
vModels.segmentation.fcn_resnet101()
vModels.segmentation.deeplabv3_resnet50()
vModels.segmentation.deeplabv3_resnet101()

# object and/or keypoint detection, instance segmentation
vModels.detection.fasterrcnn_resnet50_fpn()
vModels.detection.maskrcnn_resnet50_fpn()
vModels.detection.keypointrcnn_resnet50_fpn()

# video classification
vModels.video.r3d_18()
vModels.video.mc3_18()
vModels.video.r2plus1d_18()
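
For instance, inference with a pretrained classifier (the random input stands in for a properly preprocessed image):

import torch
import torchvision.models as vModels

model = vModels.resnet18(pretrained=True)
model.eval() # inference mode

# a dummy batch of one 3x224x224 image
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
predicted_class = logits.argmax(dim=1)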
# transforms
import torchvision.transforms as vTransforms
vTransforms.Compose(transforms) # chaining transforms
vTransforms.Lambda(someLambdaFunction)

# transforms on PIL images
vTransforms.CenterCrop((height, width))
vTransforms.ColorJitter(brightness=0, contrast=0,
                        saturation=0, hue=0)
vTransforms.FiveCrop
vTransforms.Grayscale
vTransforms.Pad
vTransforms.RandomAffine(degrees, translate=None,
                         scale=None, shear=None,
                         resample=False, fillcolor=0)
vTransforms.RandomApply(transforms, p=0.5)
vTransforms.RandomChoice(transforms)
vTransforms.RandomCrop
vTransforms.RandomGrayscale
vTransforms.RandomHorizontalFlip
vTransforms.RandomPerspective
vTransforms.RandomResizedCrop
vTransforms.RandomRotation
vTransforms.RandomVerticalFlip
vTransforms.Resize
vTransforms.TenCrop

# transforms on torch tensors
vTransforms.LinearTransformation
vTransforms.Normalize
vTransforms.RandomErasing

# conversion
vTransforms.ToPILImage
vTransforms.ToTensor

# direct access to transform functions
import torchvision.transforms.functional as vTransformsF
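
A common preprocessing chain for ImageNet-style models, as a sketch (the mean/std values are the usual ImageNet statistics):

import torchvision.transforms as vTransforms

preprocess = vTransforms.Compose([
    vTransforms.Resize(256),
    vTransforms.CenterCrop(224),
    vTransforms.ToTensor(),
    vTransforms.Normalize(mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225]),
])
# tensor = preprocess(pil_image)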
# operators for computer vision
# (not supported by TorchScript yet)
import torchvision.ops as vOps
vOps.nms # non-maximum suppression (NMS)
vOps.roi_align # <=> vOps.RoIAlign
vOps.roi_pool # <=> vOps.RoIPool

Data loader

# classes and functions to represent datasets
from torch.utils.data import Dataset, DataLoader
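
A custom dataset only has to implement __len__ and __getitem__; a minimal sketch with synthetic data:

import torch
from torch.utils.data import Dataset, DataLoader

class RandomDataset(Dataset):
    def __init__(self, n=100):
        self.x = torch.randn(n, 10)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(RandomDataset(), batch_size=16, shuffle=True)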

Neural network

import torch.nn as nn

Activation functions

nn.ELU
nn.Hardshrink
nn.Hardtanh
nn.LeakyReLU
nn.LogSigmoid
nn.PReLU
nn.ReLU
nn.ReLU6
nn.RReLU(lower, upper) # sampled from uniform distribution
nn.SELU
nn.CELU
nn.Sigmoid
nn.Softplus
nn.Softshrink
nn.Softsign
nn.Tanh
nn.Tanhshrink
nn.Threshold
nn.Softmin
nn.Softmax
nn.Softmax2d
nn.LogSoftmax

Loss functions

nn.L1Loss
nn.MSELoss
nn.CrossEntropyLoss
nn.CTCLoss
nn.NLLLoss
nn.PoissonNLLLoss
nn.KLDivLoss
nn.BCELoss
nn.BCEWithLogitsLoss
nn.MarginRankingLoss
nn.HingeEmbeddingLoss
nn.MultiLabelMarginLoss
nn.SmoothL1Loss
nn.SoftMarginLoss
nn.MultiLabelSoftMarginLoss
nn.CosineEmbeddingLoss
nn.MultiMarginLoss
nn.TripletMarginLoss

Optimizer

import torch.optim as optim

# optimizers
optim.Optimizer # base class of all optimizers
optim.Adadelta
optim.Adagrad
optim.Adam
optim.AdamW # adam with decoupled weight decay regularization
optim.SparseAdam # for sparse tensors
optim.Adamax
optim.ASGD # averaged stochastic gradient descent
optim.LBFGS
optim.RMSprop
optim.Rprop
optim.SGD

# learning rate scheduling
optim.lr_scheduler
optim.lr_scheduler.LambdaLR
optim.lr_scheduler.StepLR
optim.lr_scheduler.MultiStepLR
optim.lr_scheduler.ExponentialLR
optim.lr_scheduler.CosineAnnealingLR
optim.lr_scheduler.ReduceLROnPlateau
optim.lr_scheduler.CyclicLR

# general usage
scheduler = optim.lr_scheduler.StepLR(...)
scheduler.step() # step-wise
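
Putting optimizer and scheduler together, a schematic training loop (model, loss, and the loader from the Data loader example are placeholders):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10)

for epoch in range(30):
    for x, y in loader: # e.g. the DataLoader defined earlier
        optimizer.zero_grad() # reset gradients
        loss = criterion(model(x), y)
        loss.backward() # compute gradients
        optimizer.step() # update parameters
    scheduler.step() # decay learning rate once per epoch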

Pre-defined layers/deep learning

# containers
nn.Module{ ,List,Dict}
nn.Parameter{List,Dict}
nn.Sequential

# linear layers
nn.Linear
nn.Bilinear
nn.Identity

# dropout layers
nn.Dropout{ ,2d,3d}
nn.AlphaDropout

# convolutional layers
nn.Conv{1,2,3}d
nn.ConvTranspose{1,2,3}d

# pooling
nn.MaxPool{1,2,3}d
nn.MaxUnpool{1,2,3}d
nn.AvgPool{1,2,3}d
nn.AdaptiveMaxPool{1,2,3}d
nn.AdaptiveAvgPool{1,2,3}d

# recurrent layers
nn.RNN
nn.LSTM
nn.GRU

# padding layers
nn.ReflectionPad{1,2}d
nn.ReplicationPad{1,2,3}d
nn.ConstantPad{1,2,3}d

# normalization layers
nn.BatchNorm{1,2,3}d
nn.InstanceNorm{1,2,3}d

# transformer layers
nn.Transformer
nn.TransformerEncoder
nn.TransformerDecoder
nn.TransformerEncoderLayer
nn.TransformerDecoderLayer
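
These layers compose directly; a minimal convolutional classifier for 3x32x32 inputs, as a sketch:

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), # -> (16, 32, 32)
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2), # -> (16, 16, 16)
    nn.Flatten(), # -> 4096 features
    nn.Linear(16 * 16 * 16, 10), # 10 classes
)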
Functional

# direct function access instead of the class versions (torch.nn)
import torch.nn.functional as F

Computational graph

# various functions and classes to use and manipulate
# automatic differentiation and the computational graph
import torch.autograd as autograd
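
A quick look at automatic differentiation in action:

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x # build the graph
y.backward() # differentiate
print(x.grad) # dy/dx = 3*x^2 + 2 = 14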

NumPy-like functions

Loading PyTorch and tensor basics

# loading PyTorch
import torch

# defining a tensor
torch.tensor((values))
# define data type
torch.tensor((values), dtype=torch.int16)

# converting a NumPy array to a PyTorch tensor
torch.from_numpy(numpyArray)

# create a tensor of zeros
torch.zeros((shape))
torch.zeros_like(other_tensor)

# create a tensor of ones
torch.ones((shape))
torch.ones_like(other_tensor)

# create an identity matrix
torch.eye(numberOfRows)

# create a tensor filled with the same value
torch.full((shape), value)
torch.full_like(other_tensor, value)

# create an empty tensor
torch.empty((shape))
torch.empty_like(other_tensor)

# create sequences
torch.arange(startNumber, endNumber, stepSize)
torch.linspace(startNumber, endNumber, numberOfSteps)
torch.logspace(startNumber, endNumber, numberOfSteps)

# concatenate tensors
torch.cat((tensors), axis)

# split a tensor into sub-tensors
torch.split(tensor, splitSize)

# (un)squeeze tensor
torch.squeeze(tensor, dim)
torch.unsqueeze(tensor, dim)

# reshape tensor
torch.reshape(tensor, shape)

# transpose tensor
torch.t(tensor) # 1D and 2D tensors only
torch.transpose(tensor, dim0, dim1)
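
A few of these combined:

import torch
import numpy as np

a = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
b = torch.from_numpy(np.eye(2, dtype=np.float32))

c = torch.cat((a, b), 0) # shape (4, 2)
c = c.reshape(2, 4) # shape (2, 4)
c = torch.t(c) # shape (4, 2)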
Random numbers

# set seed
torch.manual_seed(seed)

# generate a tensor with random numbers
# of interval [0,1)
torch.rand(size)
torch.rand_like(other_tensor)

# generate a tensor with random integer numbers
# of interval [lowerInt, higherInt)
torch.randint(lowerInt, higherInt, (tensor_shape))
torch.randint_like(other_tensor, lowerInt, higherInt)

# generate a tensor of random numbers drawn
# from a normal distribution (mean=0, var=1)
torch.randn((size))
torch.randn_like(other_tensor)

# random permutation of the integers in [0, n)
torch.randperm(n)
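
Seeding makes random results reproducible, e.g.:

import torch

torch.manual_seed(42) # reproducible results
u = torch.rand(2, 3) # uniform on [0, 1)
z = torch.randn(2, 3) # standard normal
idx = torch.randperm(10) # shuffled integers 0..9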

Math (element-wise)

# basic operations
torch.abs(tensor)
torch.add(tensor, tensor2) # or tensor+scalar
torch.div(tensor, tensor2) # or tensor/scalar
torch.mul(tensor, tensor2) # or tensor*scalar
torch.sub(tensor, tensor2) # or tensor-scalar
torch.ceil(tensor)
torch.floor(tensor)
torch.fmod(tensor, divisor) # or torch.remainder()
torch.frac(tensor)
torch.round(tensor) # round to full integer
torch.pow(tensor, power)

# trigonometric functions
torch.sin(tensor)
torch.cos(tensor)
torch.tan(tensor)
torch.asin(tensor)
torch.acos(tensor)
torch.atan(tensor)
torch.atan2(tensor, tensor2)
torch.sinh(tensor)
torch.cosh(tensor)
torch.tanh(tensor)

# exponentials and logarithms
torch.exp(tensor)
torch.expm1(tensor) # exp(input)-1
torch.log(tensor)
torch.log10(tensor)
torch.log1p(tensor) # log(1+input)
torch.log2(tensor)

# other
torch.erf(tensor) # error function
torch.erfinv(tensor) # inverse error function

Math (not element-wise)

torch.max(tensor)
torch.min(tensor)
torch.mean(tensor)
torch.median(tensor)
torch.mode(tensor)
torch.sum(tensor)
torch.prod(tensor) # product of all elements
torch.norm(tensor, p)
torch.std(tensor)
torch.var(tensor)
torch.unique(tensor)
torch.dot(tensor1, tensor2)
torch.cartesian_prod(tensor1, tensor2, ...)
torch.einsum(equation, tensor)
torch.matmul(tensor1, tensor2)
torch.det(tensor)
torch.cholesky(tensor)
torch.cross(tensor1, tensor2)
torch.inverse(tensor)
torch.pinverse(tensor) # pseudo-inverse
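
For instance, a matrix product written three equivalent ways:

import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

c1 = torch.matmul(a, b) # shape (2, 4)
c2 = a @ b # operator form
c3 = torch.einsum('ij,jk->ik', a, b) # einsum form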
Other

torch.histc(tensor)
torch.bincount(tensor)
torch.fft(tensor, signal_ndim)
torch.ifft(tensor, signal_ndim)
torch.rfft(tensor, signal_ndim)
torch.irfft(tensor, signal_ndim)
torch.stft(tensor, n_fft)
torch.flatten(tensor, start_dim)
torch.rot90(tensor)
torch.trace(tensor)
torch.argmax(tensor)
torch.argmin(tensor)
torch.argsort(tensor)

PyTorch C++ (aka libtorch)

// PyTorch header file(s)
#include <torch/script.h>

torch::jit::script::Module module;

© Simon Wenkel (https://www.simonwenkel.com)


This PDF is licensed under the CC BY-SA 4.0 license.
