
Commit 7b4681a

Remove BETA status for v2 transforms (#8111)
1 parent 0e2a5ae commit 7b4681a

29 files changed (+96 -226 lines)

docs/source/beta_status.py

Lines changed: 0 additions & 10 deletions

@@ -12,18 +12,8 @@ def run(self):
         return [self.node("", nodes.paragraph("", "", nodes.Text(text)))]
 
 
-class V2BetaStatus(BetaStatus):
-    text = (
-        "The {api_name} is in Beta stage, and while we do not expect disruptive breaking changes, "
-        "some APIs may slightly change according to user feedback. Please submit any feedback you may have "
-        "in this issue: https://github.com/pytorch/vision/issues/6753."
-    )
-    node = nodes.note
-
-
 def setup(app):
     app.add_directive("betastatus", BetaStatus)
-    app.add_directive("v2betastatus", V2BetaStatus)
     return {
         "version": "0.1",
         "parallel_read_safe": True,

docs/source/transforms.rst

Lines changed: 0 additions & 7 deletions

@@ -126,13 +126,6 @@ you're already using tranforms from ``torchvision.transforms``, all you need to
 do to is to update the import to ``torchvision.transforms.v2``. In terms of
 output, there might be negligible differences due to implementation differences.
 
-.. note::
-
-   The v2 transforms are still BETA, but at this point we do not expect
-   disruptive changes to be made to their public APIs. We're planning to make
-   them fully stable in version 0.17. Please submit any feedback you may have
-   `here <https://github.com/pytorch/vision/issues/6753>`_.
-
 .. _transforms_perf:
 
 Performance considerations

torchvision/transforms/v2/_augment.py

Lines changed: 3 additions & 9 deletions

@@ -15,9 +15,7 @@
 
 
 class RandomErasing(_RandomApplyTransform):
-    """[BETA] Randomly select a rectangle region in the input image or video and erase its pixels.
-
-    .. v2betastatus:: RandomErasing transform
+    """Randomly select a rectangle region in the input image or video and erase its pixels.
 
     This transform does not support PIL Image.
     'Random Erasing Data Augmentation' by Zhong et al. See https://arxiv.org/abs/1708.04896
@@ -207,9 +205,7 @@ def _mixup_label(self, label: torch.Tensor, *, lam: float) -> torch.Tensor:
 
 
 class MixUp(_BaseMixUpCutMix):
-    """[BETA] Apply MixUp to the provided batch of images and labels.
-
-    .. v2betastatus:: MixUp transform
+    """Apply MixUp to the provided batch of images and labels.
 
     Paper: `mixup: Beyond Empirical Risk Minimization <https://arxiv.org/abs/1710.09412>`_.
 
@@ -256,9 +252,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class CutMix(_BaseMixUpCutMix):
-    """[BETA] Apply CutMix to the provided batch of images and labels.
-
-    .. v2betastatus:: CutMix transform
+    """Apply CutMix to the provided batch of images and labels.
 
     Paper: `CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
     <https://arxiv.org/abs/1905.04899>`_.
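
For readers unfamiliar with the technique behind the MixUp docstring above, the mixing formula from the paper is small enough to sketch in plain Python. This is a hypothetical `mixup` helper for illustration only, not torchvision's batch implementation (which also mixes the one-hot labels with the same coefficient):

```python
import random

# Hypothetical sketch of the MixUp formula, not torchvision's kernel:
# blend two samples elementwise with lam drawn from Beta(alpha, alpha).
def mixup(x1, x2, lam):
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

lam = random.betavariate(0.2, 0.2)  # alpha around 0.2 is a common choice
mixed = mixup([0.0, 1.0], [1.0, 0.0], lam)  # each value stays within [0, 1]
```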

torchvision/transforms/v2/_auto_augment.py

Lines changed: 4 additions & 12 deletions

@@ -174,11 +174,9 @@ def _apply_image_or_video_transform(
 
 
 class AutoAugment(_AutoAugmentBase):
-    r"""[BETA] AutoAugment data augmentation method based on
+    r"""AutoAugment data augmentation method based on
     `"AutoAugment: Learning Augmentation Strategies from Data" <https://arxiv.org/pdf/1805.09501.pdf>`_.
 
-    .. v2betastatus:: AutoAugment transform
-
     This transformation works on images and videos only.
 
     If the input is :class:`torch.Tensor`, it should be of type ``torch.uint8``, and it is expected
@@ -350,12 +348,10 @@ def forward(self, *inputs: Any) -> Any:
 
 
 class RandAugment(_AutoAugmentBase):
-    r"""[BETA] RandAugment data augmentation method based on
+    r"""RandAugment data augmentation method based on
     `"RandAugment: Practical automated data augmentation with a reduced search space"
     <https://arxiv.org/abs/1909.13719>`_.
 
-    .. v2betastatus:: RandAugment transform
-
     This transformation works on images and videos only.
 
     If the input is :class:`torch.Tensor`, it should be of type ``torch.uint8``, and it is expected
@@ -434,11 +430,9 @@ def forward(self, *inputs: Any) -> Any:
 
 
 class TrivialAugmentWide(_AutoAugmentBase):
-    r"""[BETA] Dataset-independent data-augmentation with TrivialAugment Wide, as described in
+    r"""Dataset-independent data-augmentation with TrivialAugment Wide, as described in
     `"TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation" <https://arxiv.org/abs/2103.10158>`_.
 
-    .. v2betastatus:: TrivialAugmentWide transform
-
     This transformation works on images and videos only.
 
     If the input is :class:`torch.Tensor`, it should be of type ``torch.uint8``, and it is expected
@@ -505,11 +499,9 @@ def forward(self, *inputs: Any) -> Any:
 
 
 class AugMix(_AutoAugmentBase):
-    r"""[BETA] AugMix data augmentation method based on
+    r"""AugMix data augmentation method based on
     `"AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty" <https://arxiv.org/abs/1912.02781>`_.
 
-    .. v2betastatus:: AugMix transform
-
     This transformation works on images and videos only.
 
     If the input is :class:`torch.Tensor`, it should be of type ``torch.uint8``, and it is expected

torchvision/transforms/v2/_color.py

Lines changed: 11 additions & 34 deletions

@@ -10,9 +10,7 @@
 
 
 class Grayscale(Transform):
-    """[BETA] Convert images or videos to grayscale.
-
-    .. v2betastatus:: Grayscale transform
+    """Convert images or videos to grayscale.
 
     If the input is a :class:`torch.Tensor`, it is expected
     to have [..., 3 or 1, H, W] shape, where ... means an arbitrary number of leading dimensions
@@ -32,9 +30,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomGrayscale(_RandomApplyTransform):
-    """[BETA] Randomly convert image or videos to grayscale with a probability of p (default 0.1).
-
-    .. v2betastatus:: RandomGrayscale transform
+    """Randomly convert image or videos to grayscale with a probability of p (default 0.1).
 
     If the input is a :class:`torch.Tensor`, it is expected to have [..., 3 or 1, H, W] shape,
     where ... means an arbitrary number of leading dimensions
@@ -59,9 +55,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class ColorJitter(Transform):
-    """[BETA] Randomly change the brightness, contrast, saturation and hue of an image or video.
-
-    .. v2betastatus:: ColorJitter transform
+    """Randomly change the brightness, contrast, saturation and hue of an image or video.
 
     If the input is a :class:`torch.Tensor`, it is expected
     to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
@@ -163,10 +157,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomChannelPermutation(Transform):
-    """[BETA] Randomly permute the channels of an image or video
-
-    .. v2betastatus:: RandomChannelPermutation transform
-    """
+    """Randomly permute the channels of an image or video"""
 
     def _get_params(self, flat_inputs: List[Any]) -> Dict[str, Any]:
         num_channels, *_ = query_chw(flat_inputs)
@@ -177,11 +168,9 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomPhotometricDistort(Transform):
-    """[BETA] Randomly distorts the image or video as used in `SSD: Single Shot
+    """Randomly distorts the image or video as used in `SSD: Single Shot
     MultiBox Detector <https://arxiv.org/abs/1512.02325>`_.
 
-    .. v2betastatus:: RandomPhotometricDistort transform
-
     This transform relies on :class:`~torchvision.transforms.v2.ColorJitter`
     under the hood to adjust the contrast, saturation, hue, brightness, and also
     randomly permutes channels.
@@ -249,9 +238,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomEqualize(_RandomApplyTransform):
-    """[BETA] Equalize the histogram of the given image or video with a given probability.
-
-    .. v2betastatus:: RandomEqualize transform
+    """Equalize the histogram of the given image or video with a given probability.
 
     If the input is a :class:`torch.Tensor`, it is expected
     to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
@@ -268,9 +255,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomInvert(_RandomApplyTransform):
-    """[BETA] Inverts the colors of the given image or video with a given probability.
-
-    .. v2betastatus:: RandomInvert transform
+    """Inverts the colors of the given image or video with a given probability.
 
     If img is a Tensor, it is expected to be in [..., 1 or 3, H, W] format,
     where ... means it can have an arbitrary number of leading dimensions.
@@ -287,11 +272,9 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomPosterize(_RandomApplyTransform):
-    """[BETA] Posterize the image or video with a given probability by reducing the
+    """Posterize the image or video with a given probability by reducing the
     number of bits for each color channel.
 
-    .. v2betastatus:: RandomPosterize transform
-
     If the input is a :class:`torch.Tensor`, it should be of type torch.uint8,
     and it is expected to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
     If img is PIL Image, it is expected to be in mode "L" or "RGB".
@@ -312,11 +295,9 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomSolarize(_RandomApplyTransform):
-    """[BETA] Solarize the image or video with a given probability by inverting all pixel
+    """Solarize the image or video with a given probability by inverting all pixel
     values above a threshold.
 
-    .. v2betastatus:: RandomSolarize transform
-
     If img is a Tensor, it is expected to be in [..., 1 or 3, H, W] format,
     where ... means it can have an arbitrary number of leading dimensions.
     If img is PIL Image, it is expected to be in mode "L" or "RGB".
@@ -342,9 +323,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomAutocontrast(_RandomApplyTransform):
-    """[BETA] Autocontrast the pixels of the given image or video with a given probability.
-
-    .. v2betastatus:: RandomAutocontrast transform
+    """Autocontrast the pixels of the given image or video with a given probability.
 
     If the input is a :class:`torch.Tensor`, it is expected
     to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
@@ -361,9 +340,7 @@ def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
 
 
 class RandomAdjustSharpness(_RandomApplyTransform):
-    """[BETA] Adjust the sharpness of the image or video with a given probability.
-
-    .. v2betastatus:: RandomAdjustSharpness transform
+    """Adjust the sharpness of the image or video with a given probability.
 
     If the input is a :class:`torch.Tensor`,
     it is expected to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
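
Several of the ops touched in this file have one-line per-pixel definitions. For instance, the solarization described in the RandomSolarize docstring above can be sketched as follows. This is a hypothetical pure-Python helper, not the torchvision kernel, and whether the threshold boundary itself is inverted is treated here as inclusive:

```python
# Hypothetical sketch of solarization on uint8 values (not torchvision's
# kernel): pixel values at or above the threshold are inverted (255 - p),
# values below it pass through unchanged.
def solarize(pixels, threshold=128):
    return [p if p < threshold else 255 - p for p in pixels]

solarize([10, 200, 255])  # -> [10, 55, 0]
```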

torchvision/transforms/v2/_container.py

Lines changed: 4 additions & 12 deletions

@@ -8,9 +8,7 @@
 
 
 class Compose(Transform):
-    """[BETA] Composes several transforms together.
-
-    .. v2betastatus:: Compose transform
+    """Composes several transforms together.
 
     This transform does not support torchscript.
     Please, see the note below.
@@ -62,9 +60,7 @@ def extra_repr(self) -> str:
 
 
 class RandomApply(Transform):
-    """[BETA] Apply randomly a list of transformations with a given probability.
-
-    .. v2betastatus:: RandomApply transform
+    """Apply randomly a list of transformations with a given probability.
 
     .. note::
         In order to script the transformation, please use ``torch.nn.ModuleList`` as input instead of list/tuple of
@@ -118,9 +114,7 @@ def extra_repr(self) -> str:
 
 
 class RandomChoice(Transform):
-    """[BETA] Apply single transformation randomly picked from a list.
-
-    .. v2betastatus:: RandomChoice transform
+    """Apply single transformation randomly picked from a list.
 
     This transform does not support torchscript.
 
@@ -157,9 +151,7 @@ def forward(self, *inputs: Any) -> Any:
 
 
 class RandomOrder(Transform):
-    """[BETA] Apply a list of transformations in a random order.
-
-    .. v2betastatus:: RandomOrder transform
+    """Apply a list of transformations in a random order.
 
     This transform does not support torchscript.
 
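
For readers skimming the diff, the semantics of `Compose` (unchanged by this commit) are easy to restate: each transform is applied to the output of the previous one. A minimal pure-Python stand-in, not torchvision's class (which subclasses `Transform`, an `nn.Module`):

```python
# Minimal stand-in illustrating Compose semantics: each transform in the
# list is applied to the output of the previous one, left to right.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for transform in self.transforms:
            x = transform(x)
        return x

pipeline = Compose([lambda x: x + 1, lambda x: x * 2])
pipeline(3)  # -> 8, i.e. (3 + 1) * 2
```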

torchvision/transforms/v2/_deprecated.py

Lines changed: 1 addition & 3 deletions

@@ -10,12 +10,10 @@
 
 
 class ToTensor(Transform):
-    """[BETA] [DEPRECATED] Use ``v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)])`` instead.
+    """[DEPRECATED] Use ``v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)])`` instead.
 
     Convert a PIL Image or ndarray to tensor and scale the values accordingly.
 
-    .. v2betastatus:: ToTensor transform
-
     .. warning::
         :class:`v2.ToTensor` is deprecated and will be removed in a future release.
         Please use instead ``v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)])``.
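
The replacement recommended in the deprecation message above splits ToTensor's job in two: ToImage handles the conversion, and ToDtype(torch.float32, scale=True) handles the value scaling. For uint8 inputs that scaling is just a division by 255, sketched here with a hypothetical pure-Python helper rather than torchvision:

```python
# Hypothetical pure-Python sketch (not torchvision): the value scaling that
# ToTensor / ToDtype(torch.float32, scale=True) performs maps uint8 pixel
# values in [0, 255] to floats in [0.0, 1.0].
def scale_to_unit(pixels):
    return [p / 255 for p in pixels]

scale_to_unit([0, 51, 255])  # -> [0.0, 0.2, 1.0]
```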
