DEV Community

Super Kai (Kazuya Ito)


clamp in PyTorch


clamp() returns a tensor whose elements are bounded between min and max. The input can be a tensor of any dimensionality (0D or higher) with zero or more elements, and the result has the same shape, as shown below:

*Memos:

  • clamp() can be used with torch or with a tensor.
  • The 1st argument with torch, or the calling tensor itself, is input (Required-Type: tensor of int, float or bool).
  • The 2nd argument with torch, or the 1st argument with a tensor, is min (Optional-Type: scalar of int or float, or tensor of int, float or bool).
  • The 3rd argument with torch, or the 2nd argument with a tensor, is max (Optional-Type: scalar of int or float, or tensor of int, float or bool).
  • There is an out argument with torch (Optional-Default: None-Type: tensor): *Memos:
    • out= must be used (it is keyword-only).
    • My post explains the out argument.
  • min and max cannot be a mix of a scalar and a tensor, and they cannot both be None.
  • min and max cannot both be bool tensors, but one bool tensor with the other set to None is possible.
  • If min is greater than max, every element is set to max, regardless of the input tensor's values.
```python
import torch

my_tensor = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7.])

torch.clamp(input=my_tensor, min=2., max=5.)
my_tensor.clamp(min=2., max=5.)
torch.clamp(input=my_tensor, min=torch.tensor(2.), max=torch.tensor(5.))
torch.clamp(input=my_tensor, min=torch.tensor([2., 2., 2., 2., 2., 2., 2., 2.]), max=torch.tensor([5., 5., 5., 5., 5., 5., 5., 5.]))
torch.clamp(input=my_tensor, min=torch.tensor(2.), max=torch.tensor([5., 5., 5., 5., 5., 5., 5., 5.]))
torch.clamp(input=my_tensor, min=torch.tensor([2., 2., 2., 2., 2., 2., 2., 2.]), max=torch.tensor(5.))
# tensor([2., 2., 2., 3., 4., 5., 5., 5.])

torch.clamp(input=my_tensor, min=2.)
torch.clamp(input=my_tensor, min=torch.tensor(2.))
torch.clamp(input=my_tensor, min=torch.tensor([2., 2., 2., 2., 2., 2., 2., 2.]))
# tensor([2., 2., 2., 3., 4., 5., 6., 7.])

torch.clamp(input=my_tensor, max=5.)
torch.clamp(input=my_tensor, max=torch.tensor(5.))
torch.clamp(input=my_tensor, max=torch.tensor([5., 5., 5., 5., 5., 5., 5., 5.]))
# tensor([0., 1., 2., 3., 4., 5., 5., 5.])

torch.clamp(input=my_tensor, min=5., max=2.)
torch.clamp(input=my_tensor, min=torch.tensor(5.), max=torch.tensor(2.))
torch.clamp(input=my_tensor, min=torch.tensor([5., 5., 5., 5., 5., 5., 5., 5.]), max=torch.tensor([2., 2., 2., 2., 2., 2., 2., 2.]))
# tensor([2., 2., 2., 2., 2., 2., 2., 2.])

torch.clamp(input=my_tensor, min=torch.tensor([2., 0., 2., 0., 2., 0., 2., 0.]), max=torch.tensor([0., 5., 0., 5., 0., 5., 0., 5.]))
# tensor([0., 1., 0., 3., 0., 5., 0., 5.])

torch.clamp(input=my_tensor, min=torch.tensor([2., 0., 2., 0., 2., 0., 2., 0.]))
# tensor([2., 1., 2., 3., 4., 5., 6., 7.])

torch.clamp(input=my_tensor, max=torch.tensor([0., 5., 0., 5., 0., 5., 0., 5.]))
# tensor([0., 1., 0., 3., 0., 5., 0., 5.])

my_tensor = torch.tensor([[0., 1., 2., 3.], [4., 5., 6., 7.]])

torch.clamp(input=my_tensor, min=2., max=5.)
torch.clamp(input=my_tensor, min=torch.tensor(2.), max=torch.tensor(5.))
torch.clamp(input=my_tensor, min=torch.tensor([2., 2., 2., 2.]), max=torch.tensor([5., 5., 5., 5.]))
torch.clamp(input=my_tensor, min=torch.tensor(2.), max=torch.tensor([5., 5., 5., 5.]))
torch.clamp(input=my_tensor, min=torch.tensor([2., 2., 2., 2.]), max=torch.tensor(5.))
# tensor([[2., 2., 2., 3.],
#         [4., 5., 5., 5.]])

my_tensor = torch.tensor([[0, 1, 2, 3], [4, 5, 6, 7]])

torch.clamp(input=my_tensor, min=2, max=5)
torch.clamp(input=my_tensor, min=torch.tensor([2, 2, 2, 2]), max=torch.tensor([5, 5, 5, 5]))
# tensor([[2, 2, 2, 3],
#         [4, 5, 5, 5]])

my_tensor = torch.tensor([[True, False, True, False], [False, True, False, True]])

torch.clamp(input=my_tensor, min=torch.tensor([False, True, False, True]))
# tensor([[True, True, True, True],
#         [False, True, False, True]])

torch.clamp(input=my_tensor, max=torch.tensor([False, True, False, True]))
# tensor([[False, False, False, False],
#         [False, True, False, True]])
```
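The out argument from the memos above can write the result into a preallocated tensor instead of allocating a new one. A minimal sketch (the variable names here are my own):

```python
import torch

my_tensor = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7.])

# Preallocate a destination tensor of the same shape,
# then write the clamped values into it with the keyword-only out=.
result = torch.empty(8)
torch.clamp(input=my_tensor, min=2., max=5., out=result)

print(result)
# tensor([2., 2., 2., 3., 4., 5., 5., 5.])
```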
