*Memos:
- My post explains Linear layer (Fully-connected Layer).
- My post explains manual_seed().
- My post explains requires_grad.
Linear() can get the 1D or more D tensor of zero or more elements computed by affine transformation (y = xWᵀ + b) from the 1D or more D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is in_features (Required-Type: int). *It must be 0 <= x.
- The 2nd argument for initialization is out_features (Required-Type: int): *Memos:
  - It must be 0 <= x.
  - 0 is possible but a warning occurs.
- The 3rd argument for initialization is bias (Optional-Default: True-Type: bool). *My post explains the bias argument.
- The 4th argument for initialization is device (Optional-Default: None-Type: str, int or device()): *Memos:
  - If it's None, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - device= can be omitted.
  - My post explains the device argument.
- The 5th argument for initialization is dtype (Optional-Default: None-Type: dtype): *Memos:
  - If it's None, get_default_dtype() is used (see the sketch at the end of this post). *My post explains get_default_dtype() and set_default_dtype().
  - dtype= can be omitted.
  - My post explains the dtype argument.
- The 1st argument is input (Required-Type: tensor of float or complex): *Memos:
  - It must be the 1D or more D tensor of zero or more elements.
  - The number of the elements of the deepest dimension must be the same as in_features.
  - Its device and dtype must be the same as Linear()'s.
  - complex must be set to the dtype of Linear() to use a complex tensor.
  - The returned tensor's requires_grad is True even though the input tensor's requires_grad is False by default, because Linear()'s weight and bias have requires_grad=True.
- linear1.device and linear1.dtype don't work. *You can check them through the weight parameter, as shown just below.
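Since Linear() itself exposes no device or dtype attributes, here is a minimal sketch (not from the original post) of one way to check them, by reading them off the weight parameter:

```python
import torch
from torch import nn

linear1 = nn.Linear(in_features=6, out_features=3)

# The layer has no .device/.dtype attributes, but its parameters do.
linear1.weight.device
# device(type='cpu') (assuming the default CPU device)

linear1.weight.dtype
# torch.float32 (assuming the default dtype)
```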
The examples below walk through Linear() with 1D, 2D, 3D and complex inputs:

```python
import torch
from torch import nn

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])

tensor1.requires_grad
# False

torch.manual_seed(42)

linear1 = nn.Linear(in_features=6, out_features=3)
tensor2 = linear1(input=tensor1)
tensor2
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)

tensor2.requires_grad
# True

linear1
# Linear(in_features=6, out_features=3, bias=True)

linear1.in_features
# 6

linear1.out_features
# 3

linear1.bias
# Parameter containing:
# tensor([-0.1906, 0.1041, -0.1881], requires_grad=True)

linear1.weight
# Parameter containing:
# tensor([[0.3121, 0.3388, -0.0956, 0.3750, -0.0894, 0.0824],
#         [-0.1988, 0.2398, 0.3599, -0.2995, 0.3548, 0.0764],
#         [0.3016, 0.0553, 0.1969, -0.0576, 0.3147, 0.0603]],
#        requires_grad=True)

torch.manual_seed(42)

linear2 = nn.Linear(in_features=3, out_features=3)
linear2(input=tensor2)
# tensor([-0.8493, 1.5744, 1.2707], grad_fn=<ViewBackward0>)

torch.manual_seed(42)

linear = nn.Linear(in_features=6, out_features=3, bias=True,
                   device=None, dtype=None)
linear(input=tensor1)
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)

my_tensor = torch.tensor([[8., -3., 0.],
                          [1., 5., -2.]])
torch.manual_seed(42)

linear = nn.Linear(in_features=3, out_features=3)
linear(input=my_tensor)
# tensor([[1.6701, 5.1242, -3.1578],
#         [2.6844, 0.1667, 0.5044]], grad_fn=<AddmmBackward0>)

my_tensor = torch.tensor([[[8.], [-3.], [0.]],
                          [[1.], [5.], [-2.]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3)
linear(input=my_tensor)
# tensor([[[7.0349, 6.4210, -1.6724],
#          [-1.3750, -2.7091, 0.9046],
#          [0.9186, -0.2191, 0.2018]],
#         [[1.6831, 0.6109, -0.0325],
#          [4.7413, 3.9309, -0.9696],
#          [-0.6105, -1.8791, 0.6703]]], grad_fn=<ViewBackward0>)

my_tensor = torch.tensor([[[8.+0.j], [-3.+0.j], [0.+0.j]],
                          [[1.+0.j], [5.+0.j], [-2.+0.j]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3, dtype=torch.complex64)
linear(input=my_tensor)
# tensor([[[5.6295+7.2273j, -0.9926+6.6153j, -0.8836+1.8015j],
#          [-2.7805-1.9027j, 1.5844-3.4895j, 1.5265-0.4182j],
#          [-0.4869+0.5873j, 0.8815-0.7336j, 0.8692+0.1872j]],
#         [[0.2777+1.4173j, 0.6473+0.1850j, 0.6501+0.3889j],
#          [3.3358+4.7373j, -0.2898+3.8594j, -0.2263+1.1961j],
#          [-2.0159-1.0727j, 1.3501-2.5709j, 1.3074-0.2164j]]],
#        grad_fn=<ViewBackward0>)
```
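To make the affine transformation concrete, the following minimal sketch (not in the original post) recomputes the first example's output by hand as y = xWᵀ + b and checks it against Linear():

```python
import torch
from torch import nn

torch.manual_seed(42)

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])
linear1 = nn.Linear(in_features=6, out_features=3)

# Affine transformation by hand: y = x @ W^T + b.
manual = tensor1 @ linear1.weight.T + linear1.bias

torch.allclose(manual, linear1(input=tensor1))
# True
```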
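Finally, as noted for the dtype argument above, Linear() falls back to get_default_dtype() when dtype=None. A minimal sketch of that behaviour (assuming the usual float32 default):

```python
import torch
from torch import nn

torch.get_default_dtype()
# torch.float32

nn.Linear(in_features=6, out_features=3).weight.dtype
# torch.float32

torch.set_default_dtype(torch.float64)

nn.Linear(in_features=6, out_features=3).weight.dtype
# torch.float64

torch.set_default_dtype(torch.float32) # Restore the default dtype.
```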