Conversation

@lsy323 lsy323 commented Jan 17, 2023

This fixes #2703

Enable lowering for upsample_bilinear2d with scale factor

Added a new unit test for upsample_bilinear2d with scale factor != 1. The test fails with the current implementation, since the op falls back to ATen when scale factor != 1; it passes once the lowering for upsample_bilinear2d with scale factor is supported.
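For context, bilinear upsampling maps each output pixel back to fractional input coordinates and blends the four neighboring input pixels. A minimal pure-Python sketch of the math (illustrative only; `upsample_bilinear2d_ref` is a hypothetical helper, not the XLA lowering this PR adds, and it assumes an input of at least 2x2 with align_corners=True):

```python
def upsample_bilinear2d_ref(img, out_h, out_w):
    # Pure-Python reference for bilinear upsampling with
    # align_corners=True; `img` is a 2-D list of floats (>= 2x2).
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        # Map the output row back to a fractional input row.
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = min(int(y), in_h - 2)
        dy = y - y0
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = min(int(x), in_w - 2)
            dx = x - x0
            # Blend the four neighboring input pixels.
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx) +
                         img[y0][x0 + 1] * (1 - dy) * dx +
                         img[y0 + 1][x0] * dy * (1 - dx) +
                         img[y0 + 1][x0 + 1] * dy * dx)
    return out

# A 2x2 ramp upsampled to 4x4; corner values are preserved
# because align_corners=True maps corners exactly onto corners.
res = upsample_bilinear2d_ref([[0.0, 1.0], [2.0, 3.0]], 4, 4)
```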

@lsy323 lsy323 marked this pull request as ready for review January 17, 2023 18:03
const at::Tensor& self, at::IntArrayRef output_size, bool align_corners,
c10::optional<double> scales_h, c10::optional<double> scales_w) {
TORCH_LAZY_FN_COUNTER("xla::");
XLATensorPtr self_tensor = bridge::GetXlaTensor(self);
I think we need to make sure only one of (scales_h, scales_w) and output_size is specified; otherwise I don't know which one we should believe. Ideally upstream already does that check, but we can make it more explicit.

@lsy323 lsy323 Jan 18, 2023


After investigating with GDB, I found that output_size is always filled in by upstream (at: https://github.com/pytorch/pytorch/blob/88366a907549abdd7e2c402a961b60c2be910824/aten/src/ATen/native/UpSampleBilinear2d.cpp#L166)

With the current upstream implementation, we can rely on the output_size inferred from scales_h/w by upstream. However, I think we can keep the scale factor shape inference here to make it future-proof.

In the scale-factor shape-inference block, I think we can add a shape validation when output_size is not empty, to make sure the provided output shape and the inferred shape are the same.
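The inference-plus-validation idea above can be sketched in Python (hypothetical helper name and signature; the actual check lives in the C++ lowering). Upstream infers each output dimension as floor(input_size * scale):

```python
import math

def infer_output_size(input_size, scales_h, scales_w, output_size=None):
    # Hypothetical sketch: infer the upsample output size from scale
    # factors, and validate against an explicitly provided output_size
    # when both are present (the validation proposed in this thread).
    in_h, in_w = input_size
    inferred = [int(math.floor(in_h * scales_h)),
                int(math.floor(in_w * scales_w))]
    if output_size:
        # Both were specified: the shapes must agree.
        assert list(output_size) == inferred, (
            f"output_size {output_size} != inferred {inferred}")
    return inferred

# A 5x5 input upsampled by 8/5 in both dims yields an 8x8 output.
size = infer_output_size((5, 5), 8.0 / 5.0, 8.0 / 5.0)
```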

@lsy323 lsy323 requested a review from JackCaoG January 18, 2023 06:11
@JackCaoG JackCaoG merged commit d74faaf into pytorch:master Jan 18, 2023
@JackCaoG

For a sanity check, @lsy323 can you check whether using the Python API upsample_bilinear2d with a scale factor falls back to CPU? (You can check by looking at the counter before and after the op.)


lsy323 commented Jan 18, 2023

> For a sanity check, @lsy323 can you check whether using the Python API upsample_bilinear2d with a scale factor falls back to CPU? (You can check by looking at the counter before and after the op.)

Will do, thanks!


lsy323 commented Jan 18, 2023

> For a sanity check, @lsy323 can you check whether using the Python API upsample_bilinear2d with a scale factor falls back to CPU? (You can check by looking at the counter before and after the op.)

Tested with the following Python script; upsample_bilinear2d with scale factor is lowered instead of falling back to CPU:

```python
import sys
import unittest

import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met


class UpsampleBilinear2DTest(unittest.TestCase):

  def test_upsample_bilinear2d(self):
    xla_device = xm.xla_device()
    t1 = torch.rand(2, 2, 5, 5, device=xla_device)
    upsample_bilinear2d = torch.nn.UpsamplingBilinear2d(
        scale_factor=(8.0 / 5.0, 8.0 / 5.0))
    t2 = upsample_bilinear2d(t1)
    xm.mark_step()
    # The op was lowered: the XLA counter fired instead of an
    # aten:: fallback counter.
    assert met.counter_value("xla::upsample_bilinear2d") == 1


if __name__ == '__main__':
  # exit=False so we can inspect the result before exiting.
  test = unittest.main(exit=False)
  sys.exit(0 if test.result.wasSuccessful() else 1)
```
ManfeiBai pushed a commit that referenced this pull request Jan 19, 2023

* Enable lowering for upsample_bilinear2d with scale factor
* fix linter
* add shape validation
@lsy323 lsy323 deleted the merge-upsample-bilinear2d-with-scale branch March 20, 2024 02:56
