*Memos:
- My post explains Leaky ReLU, PReLU and FReLU.
- My post explains ELU, SELU and CELU.
- My post explains heaviside() and Identity().
- My post explains ReLU() and LeakyReLU().
- My post explains SELU() and CELU().
- My post explains GELU() and Mish().
- My post explains SiLU() and Softplus().
- My post explains Tanh() and Softsign().
- My post explains Sigmoid() and Softmax().
PReLU() applies the PReLU function elementwise, taking a tensor of one or more dimensions with zero or more elements and returning a tensor of the same shape, as shown below:
*Memos:
- The 1st argument for initialization is `num_parameters` (Optional-Default:`1`-Type:`int`): *Memos:
  - It's the number of learnable parameters.
  - It must be `1 <= x`.
- The 2nd argument for initialization is `init` (Optional-Default:`0.25`-Type:`float`). *It's the initial value of the learnable parameters.
- The 3rd argument for initialization is `device` (Optional-Default:`None`-Type:`str`, `int` or `device()`): *Memos:
  - If it's `None`, `get_default_device()` is used. *My post explains `get_default_device()` and `set_default_device()`.
  - `device=` can be omitted.
  - My post explains the `device` argument.
- The 4th argument for initialization is `dtype` (Optional-Default:`None`-Type:`dtype`): *Memos:
  - If it's `None`, `get_default_dtype()` is used. *My post explains `get_default_dtype()` and `set_default_dtype()`.
  - `dtype=` can be omitted.
  - My post explains the `dtype` argument.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
- You can also use `prelu()` with a tensor.
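For reference, this is the standard PReLU formula; `a` is the learnable parameter stored in `weight` (e.g. `-3. * 0.25 = -0.75` in the output below):

$$
\mathrm{PReLU}(x) =
\begin{cases}
x & \text{if } x \ge 0 \\
a x & \text{otherwise}
\end{cases}
$$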
```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
#         5.0000, -0.5000, -0.2500, 4.0000],
#        grad_fn=<PreluKernelBackward0>)

my_tensor.prelu(weight=torch.tensor(0.25))
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
#         5.0000, -0.5000, -0.2500, 4.0000])

prelu
# PReLU(num_parameters=1)

prelu.num_parameters
# 1

prelu.init
# 0.25

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=1, init=0.25)
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
#         5.0000, -0.5000, -0.2500, 4.0000],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])

prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
#         [5.0000, -0.5000, -0.2500, 4.0000]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=4, init=0.25, device=None, dtype=None)
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
#         [5.0000, -0.5000, -0.2500, 4.0000]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500, 0.2500, 0.2500], requires_grad=True)

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])

prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
#         [[5.0000, -0.5000], [-0.2500, 4.0000]]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=2, init=0.25)
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
#         [[5.0000, -0.5000], [-0.2500, 4.0000]]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500], requires_grad=True)
```
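A minimal sketch of the typical use of `num_parameters` (the `Conv2d` layer and sizes here are my own illustration, not from the examples above): set it to the number of channels so each channel gets its own learnable slope:

```python
import torch
from torch import nn

# Hypothetical model: per-channel PReLU after a Conv2d with 4 output channels.
model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3),
    nn.PReLU(num_parameters=4)  # one learnable slope per channel
)

x = torch.randn(1, 3, 8, 8)   # (batch, channels, height, width)
y = model(x)
print(y.shape)                # torch.Size([1, 4, 6, 6])
print(model[1].weight.shape)  # torch.Size([4])
```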
ELU() applies the ELU function elementwise, taking a tensor of zero or more dimensions with zero or more elements and returning a tensor of the same shape, as shown below:
*Memos:
- The 1st argument for initialization is `alpha` (Optional-Default:`1.0`-Type:`float`). *It's applied to negative input values.
- The 2nd argument for initialization is `inplace` (Optional-Default:`False`-Type:`bool`): *Memos:
  - It performs the operation in place, overwriting `input`.
  - Keep it `False` because it's problematic with `True`, as shown in the sketch after the example below.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
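For reference, this is the standard ELU formula; `alpha` scales the negative branch (e.g. `exp(-3.) - 1 ≈ -0.9502` in the output below):

$$
\mathrm{ELU}(x) =
\begin{cases}
x & \text{if } x > 0 \\
\alpha \, (e^{x} - 1) & \text{otherwise}
\end{cases}
$$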
```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

elu = nn.ELU()
elu(input=my_tensor)
# tensor([8.0000, -0.9502, 0.0000, 1.0000, 5.0000, -0.8647, -0.6321, 4.0000])

elu
# ELU(alpha=1.0)

elu.alpha
# 1.0

elu.inplace
# False

elu = nn.ELU(alpha=1.0, inplace=True)
elu(input=my_tensor)
# tensor([8.0000, -0.9502, 0.0000, 1.0000, 5.0000, -0.8647, -0.6321, 4.0000])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])

elu = nn.ELU()
elu(input=my_tensor)
# tensor([[8.0000, -0.9502, 0.0000, 1.0000],
#         [5.0000, -0.8647, -0.6321, 4.0000]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])

elu = nn.ELU()
elu(input=my_tensor)
# tensor([[[8.0000, -0.9502], [0.0000, 1.0000]],
#         [[5.0000, -0.8647], [-0.6321, 4.0000]]])
```
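A minimal sketch (variable names are mine) of why `inplace=True` is problematic: it overwrites the input tensor, and autograd rejects it on a leaf tensor that requires gradients:

```python
import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1.])

elu = nn.ELU(alpha=1.0, inplace=True)
elu(input=my_tensor)
print(my_tensor)  # the input itself has been overwritten:
# tensor([8.0000, -0.9502, 0.0000, 1.0000])

# With autograd, the in-place overwrite is rejected:
leaf = torch.tensor([8., -3., 0., 1.], requires_grad=True)
try:
    elu(input=leaf)
except RuntimeError as e:
    print(e)
# a leaf Variable that requires grad is being used in an in-place operation.
```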