lower replication_pad3d and replication_pad3d_backward #6566
Conversation
CI failed due to
Thanks, @ManfeiBai! I didn't know we could directly re-use the existing replication pad lowering logic, but it seems we can. Nice!

Also, why did we lower the `_backward` variant? Is it required?
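(For context on why the `_backward` variant matters: it is exercised whenever autograd flows through the pad. A minimal sketch in plain PyTorch, not XLA-specific, using the public `torch.nn.functional.pad` API rather than the lowered op directly:)

```python
import torch
import torch.nn.functional as F

# Double precision input so gradcheck's finite differences are stable.
x = torch.randn(1, 1, 2, 2, 2, dtype=torch.double, requires_grad=True)

# Backward of replicate-pad accumulates gradients from the replicated
# border elements back onto the edge cells of the input.
ok = torch.autograd.gradcheck(
    lambda t: F.pad(t, (1, 1, 1, 1, 1, 1), mode="replicate"), (x,)
)
print(ok)  # True if analytic and numeric gradients agree
```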
```cpp
XLATensorPtr replication_pad3d(const XLATensorPtr& input,
                               std::vector<int64_t> padding);
XLATensorPtr replication_pad3d_backward(const XLATensorPtr& grad_output,
```
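(Not part of the diff, just a sketch of the op's semantics at the PyTorch user level: `replication_pad3d` replicates the border values of a 5-D `(N, C, D, H, W)` tensor out into the padded region. All names below are the public `torch` API, not the XLA lowering itself.)

```python
import torch
import torch.nn.functional as F

# A tiny 5-D input: batch=1, channels=1, D=H=W=2.
x = torch.arange(8.0).reshape(1, 1, 2, 2, 2)

# Padding is given innermost-dim first:
# (W_left, W_right, H_top, H_bottom, D_front, D_back).
y = F.pad(x, (1, 1, 0, 0, 0, 0), mode="replicate")

print(y.shape)  # torch.Size([1, 1, 2, 2, 4]): W grew from 2 to 4
# Each row's edge values are duplicated outward, e.g. [0, 1] -> [0, 0, 1, 1].
print(y[0, 0, 0, 0].tolist())
```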
nit: should add empty newline here
Thanks, Wonjoo. Per https://github.com/pytorch/xla/blob/0192ff75324d51d748d76f7717bbccabc15d1db8/torch_xla/csrc/tensor_methods.h#L729C1-L745C71, should we keep the same style as the 1d and 2d variants, without an empty newline here?
Ah, then let's just keep it as is. Thanks!
Yes, we don't have a test in
SGTM, thanks! Feel free to include the newline change in another PR; I don't want to block this one on a nit.
Follow-up: re-enable PR for #6537 and #6554.
Passed local test.

Metrics printed locally: https://gist.github.com/ManfeiBai/e661eab6fae8a10a1828369a2a016b8e