
Conversation

@sarahtranfb
Contributor

Summary:
Noticed that when flipping the flag, this test case failed:

https://www.internalfb.com/code/fbsource/[faf71541b1ec0fae639f82d487b81fb18ea3e523]/fbcode/pytorch/captum/tests/attr/test_dataloader_attr.py?lines=138%2C134

The ablated tensor was `tensor([0])` instead of `tensor([0.1])`, since the baseline was float-typed while the input tensors were int tensors.

https://www.internalfb.com/code/fbsource/[f2fcc926a6f3669602bac4d28c2d92e4197c96b9]/fbcode/pytorch/captum/captum/attr/_core/feature_ablation.py?lines=707-709

`ablated_input` is just a copy of `input_tensor`, so during assignment the ablated feature values are incorrectly cast to the input's int dtype in this case.
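
The casting behavior can be reproduced in a minimal sketch (the variable names and the `result_type` promotion are illustrative, not Captum's actual fix):

```python
import torch

# Int input tensor and a float baseline, as in the failing test case.
input_tensor = torch.tensor([0])  # dtype: int64
baseline = 0.1                    # float baseline

# A plain copy keeps the int dtype, so index assignment silently
# truncates the float baseline value to an int.
ablated_input = input_tensor.clone()
ablated_input[0] = baseline
print(ablated_input)  # tensor([0]) -- the 0.1 was truncated

# One possible remedy: promote the copy to the common dtype of the
# input and the baseline before assigning.
promoted_dtype = torch.result_type(input_tensor, torch.tensor(baseline))
fixed = input_tensor.clone().to(promoted_dtype)
fixed[0] = baseline
print(fixed)  # tensor([0.1000])
```

With the promotion in place, the ablated feature keeps the baseline's float value instead of being cast to the input's int dtype.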

Differential Revision: D81980219

@meta-cla meta-cla bot added the cla signed label Sep 9, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D81980219

… granular than input when cross tensor attribution is enabled (meta-pytorch#1644)
