Commit 348272e

Revert "[invoke_subgraph][fake tensor] Add finalizer on subgraph instead of the functionalize ctx wrapper (pytorch#151633)"
This reverts commit 02dd096. Reverted pytorch#151633 on behalf of https://github.com/wdvr due to a confusing ghstack state (see comment on pytorch#151633).
1 parent 2ab752d commit 348272e

File tree

1 file changed (+1, −3 lines)

torch/_subclasses/fake_tensor.py

Lines changed: 1 addition & 3 deletions

@@ -1633,9 +1633,7 @@ def _prep_args_for_hash(
                 # Special case for AOT Dispatcher first pass, where the fake
                 # tensor is called on the functional wrapper of the subgraph.
                 result.append(hash(arg))
-                # functional wrapper is destroyed after fake tensor prop. We
-                # need to put the finalizer on the subgraph.
-                id_hashed_objects.append(arg.subgraph)
+                id_hashed_objects.append(arg)
             else:
                 # It's important to capture the type of the arg since, e.g., 1 and 1.0
                 # hash to the same value, but can produce different dtypes for the
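For context on why the reverted change mattered: when a cache key is derived from an object's identity (via `hash`/`id`), a finalizer is typically attached to that object so the cache entry is evicted when the object is garbage collected; otherwise a later object that reuses the same id could hit a stale entry. The original patch attached the finalizer to the subgraph because the functional wrapper is destroyed right after fake tensor propagation. The sketch below is a hypothetical, simplified illustration of this general pattern using `weakref.finalize`; `cache_by_id` and `Subgraph` are invented names, not PyTorch APIs.

```python
# Minimal sketch (NOT PyTorch's actual implementation): evicting an
# id-keyed cache entry when its key object is garbage collected.
import weakref

cache = {}

def cache_by_id(key_obj, value):
    # Key the cache on the object's identity. This is only safe while
    # key_obj is alive, since CPython may reuse ids after collection.
    key = id(key_obj)
    cache[key] = value
    # Evict the entry once key_obj is collected, so a new object that
    # happens to reuse the same id cannot hit a stale entry.
    weakref.finalize(key_obj, cache.pop, key, None)

class Subgraph:
    """Stand-in for an id-hashed object (hypothetical)."""

sg = Subgraph()
cache_by_id(sg, "compiled artifact")
```

The subtlety the reverted commit addressed is *which* object carries the finalizer: if it is placed on a short-lived wrapper, the entry is evicted too early, while the longer-lived subgraph keeps the entry alive exactly as long as it is valid.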
