*Memos:
- My post explains ELU, SELU and CELU.
- My post explains heaviside() and Identity().
- My post explains ReLU() and LeakyReLU().
- My post explains PReLU() and ELU().
- My post explains GELU() and Mish().
- My post explains SiLU() and Softplus().
- My post explains Tanh() and Softsign().
- My post explains Sigmoid() and Softmax().
SELU() takes a 0D or higher-dimensional tensor of zero or more float elements and returns a tensor of the same shape with the SELU function applied elementwise, as shown below:
*Memos:
- The 1st argument for initialization is `inplace` (Optional-Default:`False`-Type:`bool`). *Memos:
  - It does in-place operation.
  - Keep it `False` because it's problematic with `True`.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

selu = nn.SELU()
selu(input=my_tensor)
# tensor([8.4056, -1.6706, 0.0000, 1.0507, 5.2535, -1.5202, -1.1113, 4.2028])

selu
# SELU()

selu.inplace
# False

selu = nn.SELU(inplace=True)
selu(input=my_tensor)
# tensor([8.4056, -1.6706, 0.0000, 1.0507, 5.2535, -1.5202, -1.1113, 4.2028])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])

selu = nn.SELU()
selu(input=my_tensor)
# tensor([[8.4056, -1.6706, 0.0000, 1.0507],
#         [5.2535, -1.5202, -1.1113, 4.2028]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])

selu = nn.SELU()
selu(input=my_tensor)
# tensor([[[8.4056, -1.6706], [0.0000, 1.0507]],
#         [[5.2535, -1.5202], [-1.1113, 4.2028]]])
```
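As a quick check of where these numbers come from, here is a minimal sketch that reproduces SELU from its formula `scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))` using the standard fixed constants (alpha ≈ 1.6733, scale ≈ 1.0507) and compares the result against `nn.SELU()`. The helper `manual_selu` is only for illustration and is not part of PyTorch:

```python
import torch
from torch import nn

# Standard SELU constants (alpha ≈ 1.6733, scale ≈ 1.0507).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def manual_selu(x):  # illustrative helper, not a PyTorch API
    # scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
    positive_part = torch.clamp(x, min=0.)
    negative_part = torch.clamp(ALPHA * (torch.exp(x) - 1.), max=0.)
    return SCALE * (positive_part + negative_part)

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

print(torch.allclose(manual_selu(my_tensor), nn.SELU()(my_tensor)))
# True
```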
CELU() takes a 0D or higher-dimensional tensor of zero or more float elements and returns a tensor of the same shape with the CELU function applied elementwise, as shown below:
*Memos:
- The 1st argument for initialization is `alpha` (Optional-Default:`1.0`-Type:`float`). *It's applied to negative input values.
- The 2nd argument for initialization is `inplace` (Optional-Default:`False`-Type:`bool`). *Memos:
  - It does in-place operation.
  - Keep it `False` because it's problematic with `True`.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

celu = nn.CELU()
celu(input=my_tensor)
# tensor([8.0000, -0.9502, 0.0000, 1.0000, 5.0000, -0.8647, -0.6321, 4.0000])

celu
# CELU(alpha=1.0)

celu.alpha
# 1.0

celu.inplace
# False

celu = nn.CELU(alpha=1.0, inplace=True)
celu(input=my_tensor)
# tensor([8.0000, -0.9502, 0.0000, 1.0000, 5.0000, -0.8647, -0.6321, 4.0000])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])

celu = nn.CELU()
celu(input=my_tensor)
# tensor([[8.0000, -0.9502, 0.0000, 1.0000],
#         [5.0000, -0.8647, -0.6321, 4.0000]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])

celu = nn.CELU()
celu(input=my_tensor)
# tensor([[[8.0000, -0.9502], [0.0000, 1.0000]],
#         [[5.0000, -0.8647], [-0.6321, 4.0000]]])
```
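Likewise, a minimal sketch of the CELU formula `max(0, x) + min(0, alpha * (exp(x / alpha) - 1))`, checked against `nn.CELU()` for two `alpha` values to show how `alpha` only changes the negative side. The helper `manual_celu` is only for illustration and is not part of PyTorch:

```python
import torch
from torch import nn

def manual_celu(x, alpha=1.0):  # illustrative helper, not a PyTorch API
    # max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    positive_part = torch.clamp(x, min=0.)
    negative_part = torch.clamp(alpha * (torch.exp(x / alpha) - 1.), max=0.)
    return positive_part + negative_part

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

for a in (1.0, 2.0):
    print(torch.allclose(manual_celu(my_tensor, alpha=a), nn.CELU(alpha=a)(my_tensor)))
# True
# True
```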