*Memos:
- My post explains Step function, Identity and ReLU.
- My post explains Leaky ReLU, PReLU and FReLU.
- My post explains heaviside() and Identity().
- My post explains PReLU() and ELU().
- My post explains SELU() and CELU().
- My post explains GELU() and Mish().
- My post explains SiLU() and Softplus().
- My post explains Tanh() and Softsign().
- My post explains Sigmoid() and Softmax().
ReLU() can get the 0D or more D tensor of the zero or more values computed by the ReLU function, ReLU(x) = max(0, x), from the 0D or more D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is `inplace` (Optional-Default:`False`-Type:`bool`): *Memos:
  - It does in-place operation.
  - Keep it `False` because it's problematic with `True`. *See the sketch after the code below.
- The 1st argument is `input` (Required-Type:`tensor` of `int` or `float`).
- You can also use relu() with a tensor.
```python
import torch
from torch import nn

my_tensor = torch.tensor([8, -3, 0, 1, 5, -2, -1, 4])

relu = nn.ReLU()

relu(input=my_tensor)
my_tensor.relu()
# tensor([8, 0, 0, 1, 5, 0, 0, 4])

relu
# ReLU()

relu.inplace
# False

relu = nn.ReLU(inplace=True)

relu(input=my_tensor)
# tensor([8, 0, 0, 1, 5, 0, 0, 4])

my_tensor = torch.tensor([[8, -3, 0, 1],
                          [5, 0, -1, 4]])

relu = nn.ReLU()

relu(input=my_tensor)
# tensor([[8, 0, 0, 1],
#         [5, 0, 0, 4]])

my_tensor = torch.tensor([[[8, -3], [0, 1]],
                          [[5, 0], [-1, 4]]])

relu = nn.ReLU()

relu(input=my_tensor)
# tensor([[[8, 0], [0, 1]],
#         [[5, 0], [0, 4]]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., 0.], [-1., 4.]]])

relu = nn.ReLU()

relu(input=my_tensor)
# tensor([[[8., 0.], [0., 1.]],
#         [[5., 0.], [0., 4.]]])
```
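Here is a minimal sketch of my own (not from the examples above) showing one concrete reason `inplace=True` is problematic: applying ReLU in place to a leaf tensor that requires gradients raises a `RuntimeError`, because autograd forbids in-place modification of leaf variables.

```python
import torch
from torch import nn

# A leaf tensor that requires gradients, as in a typical training setup.
my_tensor = torch.tensor([8., -3., 0., 1.], requires_grad=True)

relu = nn.ReLU(inplace=True)

try:
    relu(input=my_tensor)  # tries to overwrite my_tensor in place
except RuntimeError as e:
    print(e)
# a leaf Variable that requires grad is being used in an in-place operation.
```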
LeakyReLU() can get the 0D or more D tensor of the zero or more values computed by the Leaky ReLU function, LeakyReLU(x) = max(0, x) + negative_slope * min(0, x), from the 0D or more D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is `negative_slope` (Optional-Default:`0.01`-Type:`float`). *It's applied to negative input values. *See the sketch after the code below.
- The 2nd argument for initialization is `inplace` (Optional-Default:`False`-Type:`bool`): *Memos:
  - It does in-place operation.
  - Keep it `False` because it's problematic with `True`.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

lrelu = nn.LeakyReLU()

lrelu(input=my_tensor)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])

lrelu
# LeakyReLU(negative_slope=0.01)

lrelu.negative_slope
# 0.01

lrelu.inplace
# False

lrelu = nn.LeakyReLU(negative_slope=0.01, inplace=True)

lrelu(input=my_tensor)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])

lrelu = nn.LeakyReLU()

lrelu(input=my_tensor)
# tensor([[8.0000, -0.0300, 0.0000, 1.0000],
#         [5.0000, -0.0200, -0.0100, 4.0000]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])

lrelu = nn.LeakyReLU()

lrelu(input=my_tensor)
# tensor([[[8.0000, -0.0300], [0.0000, 1.0000]],
#         [[5.0000, -0.0200], [-0.0100, 4.0000]]])
```
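As a quick check, here is a sketch of my own (not part of the examples above) that sets a custom `negative_slope` and reproduces the result manually from the formula max(0, x) + negative_slope * min(0, x). The slope value 0.1 is an arbitrary choice for illustration.

```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# A custom slope for negative inputs (0.1 is arbitrary).
lrelu = nn.LeakyReLU(negative_slope=0.1)

out = lrelu(input=my_tensor)
# tensor([8.0000, -0.3000, 0.0000, 1.0000, 5.0000, -0.2000, -0.1000, 4.0000])

# Manual reconstruction: max(0, x) + negative_slope * min(0, x).
manual = my_tensor.clamp(min=0) + 0.1 * my_tensor.clamp(max=0)

print(torch.allclose(out, manual))
# True
```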