[Distributed] Make xm.all_gather a single graph in Dynamo #4922
Merged
Commits (10, all by alanwaketan):
- 0272c70 Make all_gather single dynamo graph
- 83f6a16 Fix a typo
- 3aa4ebb Fix V3-8
- a432f99 Fix linters
- f6ff59f Cache world_size and ordinal in pjrt
- e1629cc skip tpu < v4 support
- da01f7f Fix linters
- 28a2a6e Fix comments
- b3b4bd5 Rearrange the test
- 26cfb00 Fix a typo
```diff
@@ -9,6 +9,7 @@
 import torch.nn.functional as F
 import torch_xla
 from torch_xla.experimental import pjrt
+from torch_xla.experimental import tpu
 import torch_xla.core.xla_env_vars as xenv
 import torch_xla.debug.metrics_saver as ms
 import torch_xla.utils.utils as xu
```
```diff
@@ -26,6 +27,27 @@
 _DEVICE_CONTEXTS = dict()
 _DEVICE_CONTEXTS_LOCK = threading.Lock()
 
+# Note [Dynamo WORLD_SIZE and ORDINAL]
+# Below is a workaround to cache the ordinal and world_size so that
+# Dynamo won't do graph breaks when xm.xrt_world_size() and xm.get_ordinal() are called.
+_WORLD_SIZE = None
+_ORDINAL = None
+
+
+def _init_world_size_ordinal():
+  global _WORLD_SIZE, _ORDINAL
+
+  if not pjrt.using_pjrt():
+    return
+
+  # We don't support V3-8. See Note [V3-8 Threading]
+  if pjrt.device_type() == 'TPU' and tpu.version() < 4:
+    return
+
+  if _WORLD_SIZE is None:
+    _WORLD_SIZE = xrt_world_size()
+    _ORDINAL = get_ordinal()
+
+
 class DeviceContext(object):
```
```diff
@@ -90,6 +112,10 @@ def xrt_world_size(defval=1):
   Returns:
     The number of devices which is taking part of the replication.
   """
+  global _WORLD_SIZE
+  if _WORLD_SIZE is not None:
+    return _WORLD_SIZE
+
   if pjrt.using_pjrt():
     return pjrt.world_size()
```
```diff
@@ -109,6 +135,10 @@ def get_ordinal(defval=0):
   Returns:
     The replication ordinal of the current thread.
   """
+  global _ORDINAL
+  if _ORDINAL is not None:
+    return _ORDINAL
+
   if pjrt.using_pjrt():
     return pjrt.global_ordinal()
```
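With the cached module-level values in place, `xm.xrt_world_size()` and `xm.get_ordinal()` return plain Python ints instead of going through the runtime, so Dynamo can fold them into the traced graph rather than breaking on the call (see Note [Dynamo WORLD_SIZE and ORDINAL] above). Below is a standalone sketch of the caching pattern, with `_query_runtime()` standing in for the real PJRT query; it is an illustration, not code from the PR:

```python
# Minimal, torch_xla-free illustration of the lazy world-size cache above.
_WORLD_SIZE = None


def _query_runtime():
  # Placeholder for the runtime call (pjrt.world_size() in the real code).
  return 8


def _init_world_size():
  # Mirrors _init_world_size_ordinal(): populate the cache once, up front.
  global _WORLD_SIZE
  if _WORLD_SIZE is None:
    _WORLD_SIZE = _query_runtime()


def xrt_world_size(defval=1):
  # Fast path: a constant lookup that a tracer can treat as a literal.
  if _WORLD_SIZE is not None:
    return _WORLD_SIZE
  return defval


_init_world_size()
print(xrt_world_size())  # -> 8
```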
```diff
@@ -533,8 +563,7 @@ def all_gather(value, dim=0, groups=None, output=None, pin_layout=True):
     A tensor which has, in the ``dim`` dimension, all the values from the
     participating replicas.
   """
-  if pin_layout and xla_device_hw(
-      value.device) in ('TPU', 'GPU', 'XPU') and output == None:
+  if pin_layout and output == None:
     # There is not an easy way to pin the all_gather layout on TPU and GPU, use
     # all_reduce based all_gather for this purpose.
     return _all_gather_using_all_reduce(
```

Review thread on the removed `xla_device_hw(...)` check:
- Reviewer: I think we had it because CPU was not supported at some point. Do you need to remove it because it will break dynamo?
- Reply: Yea.
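For context, here is a hedged usage sketch of what the single-graph behavior enables: compiling a function that calls `xm.all_gather` with `torch.compile`. The backend string and the surrounding setup are assumptions for illustration, not taken from this PR, and the sketch presumes a working torch_xla installation with an XLA device available.

```python
import torch
import torch_xla.core.xla_model as xm


def gather(x):
  # With the hardware check removed, pin_layout=True and output=None always
  # take the all_reduce-based path, which Dynamo can now trace as one graph.
  return xm.all_gather(x, dim=0, pin_layout=True)


# 'openxla' is an assumed backend name; the exact Dynamo backend string for
# torch_xla has varied across releases.
compiled = torch.compile(gather, backend='openxla')

x = torch.ones(4, 2, device=xm.xla_device())
out = compiled(x)
print(out.shape)  # world_size * 4 rows when launched across multiple devices
```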