Add Unified Sequence Parallel attention #12693
Conversation
It would be nice to get a testing script so that we can quickly check things.
I added a basic test script with a simple forward and backward op. Would it be better to have a test script with `flash_attention_backward` and forward?
Let us know if this is ready for a review!
Yep, ready for review! I tested it with a 4-process setup (2×2 mesh, on CPU) and everything checks out: shapes look good and gradients flow correctly. Looking forward to feedback, and happy to address any issues.
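(For reference, a minimal sketch of what such a 4-process CPU setup could look like, assuming the gloo backend and torch's device-mesh API; the worker body and names are placeholders, not the PR's actual test script.)

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.distributed.device_mesh import init_device_mesh


def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # 2x2 mesh: one dim for ring attention, one for Ulysses.
    mesh = init_device_mesh("cpu", (2, 2), mesh_dim_names=("ring", "ulysses"))

    # ... run a small attention forward/backward against the mesh here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(4,), nprocs=4)
```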
sayakpaul left a comment
Thanks for getting started on this!
```diff
  grad_query, grad_key, grad_value = (x.to(grad_out.dtype) for x in (grad_query, grad_key, grad_value))

- return grad_query, grad_key, grad_value, None, None, None, None, None, None, None, None
+ return grad_query, grad_key, grad_value, None, None, None, None, None, None, None, None, None
```
Why the change here?
The forward function has 12 inputs (not counting ctx), but the backward returns only 11 outputs. The two counts must match: torch requires backward to return one gradient (or None) per forward input. I was getting an error like this while testing: "RuntimeError: function backward returned an incorrect number of gradients (expected 12, got 11)".
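(A minimal sketch of that rule, with hypothetical names: `torch.autograd.Function.backward` must return exactly one value per forward argument after ctx, using None for non-differentiable inputs.)

```python
import torch


class ScaleOp(torch.autograd.Function):
    # forward has 3 inputs after ctx ...
    @staticmethod
    def forward(ctx, x, scale, some_flag):
        ctx.scale = scale
        return x * scale

    @staticmethod
    def backward(ctx, grad_out):
        # ... so backward must return exactly 3 values:
        # a gradient for x, and None for the non-differentiable inputs.
        return grad_out * ctx.scale, None, None
```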
Do you have a reproducer?
Yes, it can be reproduced in this notebook (it happens only during the backward pass): https://colab.research.google.com/drive/1Ac4nVSVjKHrPpcSRlX0E3NzY0mDEmkMx?usp=sharing
I am trying with the following code:

```python
import torch
from torch import distributed as dist

from diffusers import AutoModel, DiffusionPipeline, ContextParallelConfig


def setup_distributed():
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl")
    device = torch.device(f"cuda:{dist.get_rank()}")
    torch.cuda.set_device(device)
    return device


device = setup_distributed()

# Need to add parallel support for this.
# pipeline.transformer.set_attention_backend("flash_hub")
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to(device)
pipeline.transformer.set_attention_backend("_native_cudnn")
pipeline.transformer.enable_parallelism(
    config=ContextParallelConfig(ulysses_degree=2, ring_degree=2)
)

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
generator = torch.Generator().manual_seed(42)
image = pipeline(prompt, guidance_scale=3.5, num_inference_steps=50, generator=generator).images[0]
if dist.get_rank() == 0:
    image.save("output_ua.png")

if dist.is_initialized():
    dist.destroy_process_group()
```

Run the above with … And it leads to: …
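(The launch command and the resulting traceback were elided above. Since `ContextParallelConfig(ulysses_degree=2, ring_degree=2)` implies a 2×2 mesh, the script needs a world size of 4, presumably launched with something like `torchrun --nproc_per_node=4 repro.py`; the script name here is a placeholder.)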
I spent quite some time investigating this issue but wasn't able to find the cause. I tried to reproduce it, but the model is too large for the small GPUs I can use, and …
Oooh, finally tracked it down and could reproduce it on CPU! The bug is in the …
I think that is perfect; I didn't know about the torch 2.9 specifics. I will apply the diff. I will just do a final test on the lse on …
We need to add dedicated testing for CP x attention backends anyway, so we can skip it for now. Good documentation should suffice.
Sounds good!
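(For context on the lse mentioned above: ring-style attention keeps a per-query log-sum-exp term so that partial outputs computed over different key/value chunks can be merged stably. A minimal sketch of that accumulation, with hypothetical names and shapes, not the PR's actual code:)

```python
import torch


def merge_attention_partials(out_a, lse_a, out_b, lse_b):
    """Combine two partial attention outputs using their log-sum-exp terms.

    out_*: [batch, heads, seq, head_dim], lse_*: [batch, heads, seq, 1]
    """
    lse = torch.logaddexp(lse_a, lse_b)  # combined normalizer, numerically stable
    out = torch.exp(lse_a - lse) * out_a + torch.exp(lse_b - lse) * out_b
    return out, lse
```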
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
sayakpaul left a comment
Looking good! Let's also add docs and remove the test file.
```python
raise ValueError("`ring_degree` and `ulysses_degree` must be greater than or equal to 1.")
if self.ring_degree > 1 and self.ulysses_degree > 1:
    raise ValueError(
        "Unified Ulysses-Ring attention is not yet supported. Please set either `ring_degree` or `ulysses_degree` to 1."
    )
if self.rotate_method != "allgather":
```
🔥
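(If I'm reading the diff right, this is the guard that previously forbade combining ring and Ulysses parallelism, which unified SP attention lifts. A hypothetical usage sketch of the config that becomes legal with this PR:)

```python
from diffusers import ContextParallelConfig

# Before this PR, setting both degrees > 1 raised the
# "Unified Ulysses-Ring attention is not yet supported" error.
# With unified SP attention, a 2x2 (ring x ulysses) config should validate:
config = ContextParallelConfig(ring_degree=2, ulysses_degree=2)
```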
@bot /style
Style bot fixed some files and pushed the changes.
Okay, I will add the docs and then remove the test file.
Commits: bug fixes, lse calculation; switched to `_all_to_all_single` helper in `_all_to_all_dim_exchange` due to contiguity issues; bug fix; bug fix; bug fix.
Oops! So sorry for the force push. Just resolved a conflict in `distributed_inference.md` in the docs.



What does this PR do?
This is a draft implementation of the Unified SP (Ulysses + Ring sequence parallel) attention approach.
- `_all_to_all_dim_exchange` with custom scatter and gather indices
- `TemplatedUnifiedAttention`

Core implementation complete; still needs: …
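(A minimal sketch of the dim-exchange idea behind such a helper, assuming `torch.distributed.all_to_all_single`; the PR's actual `_all_to_all_dim_exchange` additionally supports custom scatter/gather indices, which this sketch omits.)

```python
import torch
import torch.distributed as dist


def all_to_all_dim_exchange(x: torch.Tensor, scatter_dim: int, gather_dim: int, group=None) -> torch.Tensor:
    """Scatter x across ranks along scatter_dim while gathering along gather_dim.

    This is the Ulysses-style exchange: for P ranks it turns, e.g.,
    [B, S/P, H, D] shards into [B, S, H/P, D] shards.
    """
    world_size = dist.get_world_size(group)
    # Split the scatter dim into per-rank chunks and stack them on a new
    # leading dim; all_to_all_single requires contiguous input.
    inp = torch.stack(x.chunk(world_size, dim=scatter_dim), dim=0).contiguous()
    out = torch.empty_like(inp)
    dist.all_to_all_single(out, inp, group=group)
    # out[i] now holds the chunk received from rank i; re-join along gather_dim.
    return torch.cat(out.unbind(0), dim=gather_dim)
```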