Commit 782925d

enkilee and Ligoml authored
fix english docs typo errors (#48599)
* fix english docs typo errors, the errors in docs as same as chinese pr 5468
* update docs; test=docs_preview

Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
1 parent a4d9851 commit 782925d

File tree

1 file changed

+34
-43
lines changed


python/paddle/vision/ops.py

Lines changed: 34 additions & 43 deletions
@@ -160,14 +160,14 @@ def yolo_loss(
 downsample_ratio (int): The downsample ratio from network input to YOLOv3
 loss input, so 32, 16, 8 should be set for the
 first, second, and thrid YOLOv3 loss operators.
-name (string): The default value is None. Normally there is no need
-for user to set this property. For more information,
-please refer to :ref:`api_guide_Name`
-gt_score (Tensor): mixup score of ground truth boxes, should be in shape
+gt_score (Tensor, optional): mixup score of ground truth boxes, should be in shape
 of [N, B]. Default None.
-use_label_smooth (bool): Whether to use label smooth. Default True.
-scale_x_y (float): Scale the center point of decoded bounding box.
-Default 1.0
+use_label_smooth (bool, optional): Whether to use label smooth. Default True.
+name (str, optional): The default value is None. Normally there is no need
+for user to set this property. For more information,
+please refer to :ref:`api_guide_Name`
+scale_x_y (float, optional): Scale the center point of decoded bounding box.
+Default 1.0.
 
 Returns:
 Tensor: A 1-D tensor with shape [N], the value of yolov3 loss
@@ -340,14 +340,6 @@ def yolo_box(
 score_{pred} = score_{conf} * score_{class}
 $$
 
-where the confidence scores follow the formula bellow
-
-.. math::
-
-score_{conf} = \begin{case}
-obj, \text{if } iou_aware == false \\
-obj^{1 - iou_aware_factor} * iou^{iou_aware_factor}, \text{otherwise}
-\end{case}
 
 Args:
 x (Tensor): The input tensor of YoloBox operator is a 4-D tensor with
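The iou-aware confidence formula removed in this hunk still makes a handy sanity check. A minimal pure-Python sketch, using a hypothetical helper name `conf_score` (not a Paddle API):

```python
def conf_score(obj, iou, iou_aware=False, iou_aware_factor=0.5):
    """Confidence score per the formula deleted above: the raw
    objectness when iou_aware is false, otherwise the blend
    obj^(1 - factor) * iou^factor."""
    if not iou_aware:
        return obj
    return obj ** (1.0 - iou_aware_factor) * iou ** iou_aware_factor

# iou_aware off: the objectness passes through unchanged.
print(conf_score(0.5, 0.9))                        # 0.5
# iou_aware on with factor 0.5: blend of objectness and predicted IoU.
print(conf_score(0.81, 0.64, iou_aware=True))      # ~0.72
```

With the default factor of 0.5 the blend is just the geometric mean of objectness and predicted IoU.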
@@ -369,15 +361,14 @@ def yolo_box(
 :attr:`yolo_box` operator input, so 32, 16, 8
 should be set for the first, second, and thrid
 :attr:`yolo_box` layer.
-clip_bbox (bool): Whether clip output bonding box in :attr:`img_size`
+clip_bbox (bool, optional): Whether clip output bonding box in :attr:`img_size`
 boundary. Default true.
-scale_x_y (float): Scale the center point of decoded bounding box.
-Default 1.0
-name (string): The default value is None. Normally there is no need
-for user to set this property. For more information,
-please refer to :ref:`api_guide_Name`
-iou_aware (bool): Whether use iou aware. Default false
-iou_aware_factor (float): iou aware factor. Default 0.5
+name (str, optional): The default value is None. Normally there is no need
+for user to set this property. For more information,
+please refer to :ref:`api_guide_Name`.
+scale_x_y (float, optional): Scale the center point of decoded bounding box. Default 1.0
+iou_aware (bool, optional): Whether use iou aware. Default false.
+iou_aware_factor (float, optional): iou aware factor. Default 0.5.
 
 Returns:
 Tensor: A 3-D tensor with shape [N, M, 4], the coordinates of boxes,
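The `scale_x_y` argument deserves a word beyond "Scale the center point": in YOLO-family decoders it is commonly applied as `scale * sigmoid(t) - 0.5 * (scale - 1)`, stretching the sigmoid so predictions can reach cell borders. A hedged sketch with a hypothetical helper `decode_center` (the common decoding form, not a quote of Paddle's kernel):

```python
import math

def decode_center(t, grid_idx, grid_size, scale_x_y=1.0):
    """Decode one normalized center coordinate. The sigmoid output is
    stretched by scale_x_y and re-centered so the stretch stays
    symmetric around the middle of the grid cell."""
    offset = scale_x_y / (1.0 + math.exp(-t)) - 0.5 * (scale_x_y - 1.0)
    return (grid_idx + offset) / grid_size

# scale_x_y = 1.0 reduces to plain sigmoid decoding.
print(decode_center(0.0, 3, 13))  # (3 + 0.5) / 13
# scale_x_y > 1.0 lets saturated logits push past the cell edge.
print(decode_center(10.0, 3, 13, scale_x_y=1.05))
```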
@@ -902,8 +893,8 @@ def deform_conv2d(
 
 .. math::
 
-H_{out}&= \\frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\\\
-W_{out}&= \\frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
+H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\
+W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1
 
 Args:
 x (Tensor): The input image with [N, C, H, W] format. A Tensor with type
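The H_out/W_out formula in this hunk is easy to check numerically. A small sketch with a hypothetical helper `conv_out_size` (integer division mirrors the floor implied by the docstring formula):

```python
def conv_out_size(size_in, padding, dilation, kernel, stride):
    """Output spatial size per the docstring formula:
    (in + 2*padding - (dilation*(kernel - 1) + 1)) // stride + 1."""
    return (size_in + 2 * padding - (dilation * (kernel - 1) + 1)) // stride + 1

# 3x3 kernel, stride 1, padding 1, dilation 1: "same" output size.
print(conv_out_size(8, 1, 1, 3, 1))  # 8
# Stride 2 halves the spatial size (rounding down).
print(conv_out_size(8, 1, 1, 3, 2))  # 4
```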
@@ -913,31 +904,31 @@ def deform_conv2d(
 weight (Tensor): The convolution kernel with shape [M, C/g, kH, kW], where M is
 the number of output channels, g is the number of groups, kH is the filter's
 height, kW is the filter's width.
-bias (Tensor, optional): The bias with shape [M,].
+bias (Tensor, optional): The bias with shape [M,]. Default: None.
 stride (int|list|tuple, optional): The stride size. If stride is a list/tuple, it must
 contain two integers, (stride_H, stride_W). Otherwise, the
-stride_H = stride_W = stride. Default: stride = 1.
+stride_H = stride_W = stride. Default: 1.
 padding (int|list|tuple, optional): The padding size. If padding is a list/tuple, it must
 contain two integers, (padding_H, padding_W). Otherwise, the
-padding_H = padding_W = padding. Default: padding = 0.
+padding_H = padding_W = padding. Default: 0.
 dilation (int|list|tuple, optional): The dilation size. If dilation is a list/tuple, it must
 contain two integers, (dilation_H, dilation_W). Otherwise, the
-dilation_H = dilation_W = dilation. Default: dilation = 1.
+dilation_H = dilation_W = dilation. Default: 1.
 deformable_groups (int): The number of deformable group partitions.
-Default: deformable_groups = 1.
+Default: 1.
 groups (int, optonal): The groups number of the deformable conv layer. According to
 grouped convolution in Alex Krizhevsky's Deep CNN paper: when group=2,
 the first half of the filters is only connected to the first half
 of the input channels, while the second half of the filters is only
-connected to the second half of the input channels. Default: groups=1.
+connected to the second half of the input channels. Default: 1.
 mask (Tensor, optional): The input mask of deformable convolution layer.
 A Tensor with type float32, float64. It should be None when you use
-deformable convolution v1.
+deformable convolution v1. Default: None.
 name(str, optional): For details, please refer to :ref:`api_guide_Name`.
 Generally, no setting is required. Default: None.
 Returns:
-Tensor: The tensor variable storing the deformable convolution \
-result. A Tensor with type float32, float64.
+Tensor: 4-D Tensor storing the deformable convolution result.\
+A Tensor with type float32, float64.
 
 Examples:
 .. code-block:: python
@@ -1145,7 +1136,7 @@ class DeformConv2D(Layer):
 dilation(int|list|tuple, optional): The dilation size. If dilation is a list/tuple, it must
 contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the
 dilation_D = dilation_H = dilation_W = dilation. The default value is 1.
-deformable_groups (int): The number of deformable group partitions.
+deformable_groups (int, optional): The number of deformable group partitions.
 Default: deformable_groups = 1.
 groups(int, optional): The groups number of the Conv3D Layer. According to grouped
 convolution in Alex Krizhevsky's Deep CNN paper: when group=2,
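Both deform_conv2d and DeformConv2D rest on one primitive: each kernel tap samples the input at an integer position plus a learned fractional offset, resolved by bilinear interpolation. A minimal single-channel sketch (hypothetical helper `bilinear_sample`; taps outside the grid contribute zero, mirroring zero padding):

```python
import math

def bilinear_sample(img, y, x):
    """Bilinearly interpolate a 2-D grid at a fractional (y, x);
    out-of-range taps contribute 0."""
    h, w = len(img), len(img[0])
    y0, x0 = int(math.floor(y)), int(math.floor(x))
    val = 0.0
    # Accumulate the four neighbors, weighted by distance to (y, x).
    for yi, wy in ((y0, y0 + 1 - y), (y0 + 1, y - y0)):
        for xi, wx in ((x0, x0 + 1 - x), (x0 + 1, x - x0)):
            if 0 <= yi < h and 0 <= xi < w:
                val += wy * wx * img[yi][xi]
    return val

img = [[0.0, 1.0], [2.0, 3.0]]
print(bilinear_sample(img, 0.0, 0.0))  # 0.0, an exact grid point
print(bilinear_sample(img, 0.5, 0.5))  # 1.5, the mean of all four corners
```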
@@ -1504,7 +1495,7 @@ def decode_jpeg(x, mode='unchanged', name=None):
 Args:
 x (Tensor): A one dimensional uint8 tensor containing the raw bytes
 of the JPEG image.
-mode (str): The read mode used for optionally converting the image.
+mode (str, optional): The read mode used for optionally converting the image.
 Default: 'unchanged'.
 name (str, optional): The default value is None. Normally there is no
 need for user to set this property. For more information, please
@@ -1694,10 +1685,10 @@ def roi_pool(x, boxes, boxes_num, output_size, spatial_scale=1.0, name=None):
 2D-Tensor with the shape of [num_boxes,4].
 Given as [[x1, y1, x2, y2], ...], (x1, y1) is the top left coordinates,
 and (x2, y2) is the bottom right coordinates.
-boxes_num (Tensor): the number of RoIs in each image, data type is int32. Default: None
+boxes_num (Tensor): the number of RoIs in each image, data type is int32.
 output_size (int or tuple[int, int]): the pooled output size(h, w), data type is int32. If int, h and w are both equal to output_size.
-spatial_scale (float, optional): multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling. Default: 1.0
-name(str, optional): for detailed information, please refer to :ref:`api_guide_Name`. Usually name is no need to set and None by default.
+spatial_scale (float, optional): multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling. Default: 1.0.
+name(str, optional): for detailed information, please refer to :ref:`api_guide_Name`. Usually name is no need to set and None by default. Default: None.
 
 Returns:
 pool_out (Tensor): the pooled feature, 4D-Tensor with the shape of [num_boxes, C, output_size[0], output_size[1]].
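The roi_pool contract described above (rescale the box by spatial_scale, split it into an output grid, keep each bin's max) can be sketched for one channel in pure Python. `roi_max_pool` is a hypothetical helper, not Paddle's kernel, and it skips empty-bin handling by forcing every bin to cover at least one cell:

```python
def roi_max_pool(feat, box, output_size, spatial_scale=1.0):
    """Max-pool one RoI from a single-channel 2-D feature map:
    scale the box, split it into output_size x output_size bins,
    and keep the max of each bin."""
    x1, y1, x2, y2 = (int(round(v * spatial_scale)) for v in box)
    bh = max(y2 - y1, 1) / output_size  # bin height in feature cells
    bw = max(x2 - x1, 1) / output_size  # bin width in feature cells
    out = []
    for i in range(output_size):
        ys = range(y1 + int(i * bh), max(y1 + int((i + 1) * bh), y1 + int(i * bh) + 1))
        row = []
        for j in range(output_size):
            xs = range(x1 + int(j * bw), max(x1 + int((j + 1) * bw), x1 + int(j * bw) + 1))
            row.append(max(feat[y][x] for y in ys for x in xs))
        out.append(row)
    return out

feat = [[r * 4 + c for c in range(4)] for r in range(4)]  # values 0..15
print(roi_max_pool(feat, (0, 0, 4, 4), 2))  # [[5, 7], [13, 15]]
```

Each pooled value is the maximum of its 2x2 quadrant, which is why the bottom-right bin reports 15.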
@@ -1871,10 +1862,10 @@ def roi_align(
 Default: True.
 name(str, optional): For detailed information, please refer to :
 ref:`api_guide_Name`. Usually name is no need to set and None by
-default.
+default. Default: None.
 
 Returns:
-The output of ROIAlignOp is a 4-D tensor with shape (num_boxes,
+The output of ROIAlignOp is a 4-D tensor with shape (num_boxes,\
 channels, pooled_h, pooled_w). The data type is float32 or float64.
 
 Examples:
@@ -1971,10 +1962,10 @@ class RoIAlign(Layer):
 data type is int32. If int, h and w are both equal to output_size.
 spatial_scale (float32, optional): Multiplicative spatial scale factor
 to translate ROI coords from their input scale to the scale used
-when pooling. Default: 1.0
+when pooling. Default: 1.0.
 
 Returns:
-The output of ROIAlign operator is a 4-D tensor with
+The output of ROIAlign operator is a 4-D tensor with \
 shape (num_boxes, channels, pooled_h, pooled_w).
 
 Examples:
