Enable support for Intel XPU devices (AKA Intel GPUs) #19443
Conversation
…ed a bit, mpi environment seems to be broken
…broadcasting strings isn't working. This commit includes a workaround for that case.
Synchronize xpu devices
Add xpu warning
Include XPU in on-gpu check.
Include XPU in map location
…rride decorator in line with other accelerators.
Hi @coreyjadams, there is a long-standing PR for XPU support from us, #17700, which we are planning to integrate soon. We are already in discussions regarding this and would appreciate it if you used that branch for the time being until it gets merged. Please also feel free to set up an offline discussion with us (I work with Venkat/Sam and others regarding LLMs from Intel).
Hello, could you provide at least one simple example of distributed training on Intel GPUs? I have such hardware and would like to try this PR. Thanks!
You can follow the conversation in that PR; there is an RWKV example.
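For reference, a minimal distributed run on Intel GPUs would look roughly like the sketch below, assuming XPU support (from this PR or #17700) is installed. The `accelerator="xpu"` alias is an assumption about how the accelerator is registered, and `BoringModel` is just Lightning's demo module standing in for a real one.

```python
import intel_extension_for_pytorch  # noqa: F401  (provides torch.xpu)
from lightning.pytorch import Trainer
from lightning.pytorch.demos.boring_classes import BoringModel

model = BoringModel()
trainer = Trainer(
    accelerator="xpu",  # assumed alias registered for the XPU accelerator
    devices=2,
    strategy="ddp",     # collectives go through the ccl backend
    max_epochs=1,
)
trainer.fit(model)
```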
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help see our docs: https://lightning.ai/docs/pytorch/latest/generated/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Discord. Thank you for your contributions.
What does this PR do?
This PR extends pytorch_lightning with support for Intel GPUs, as enabled with `intel_extension_for_pytorch`. With Intel's module, pytorch gains the `torch.xpu` module, which is equivalent to `torch.cuda`. Throughout the pytorch_lightning repository, in places where `cuda` is explicitly mentioned, I tried to include equivalent functionality for `xpu`. In some cases I declined to extend support to `xpu` where I was not sure it would work or be worth it: for example, there is `BitsAndBytes`, which I know very little about, and I decided not to add `xpu` there. The main enablements are `XPUAccelerator` and the logic to manage `xpu`s in pytorch DDP.
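As a rough illustration of the accelerator side, here is a minimal sketch of what an `XPUAccelerator` can look like, assuming Lightning's standard `Accelerator` interface. This is not the PR's exact code; the method bodies are simplified.

```python
from typing import Any, Union

import torch
from lightning.pytorch.accelerators.accelerator import Accelerator


class XPUAccelerator(Accelerator):
    """Accelerator for Intel XPU devices, mirroring the CUDA accelerator."""

    def setup_device(self, device: torch.device) -> None:
        if device.type != "xpu":
            raise ValueError(f"Device should be XPU, got {device} instead.")
        torch.xpu.set_device(device)

    def teardown(self) -> None:
        # Mirror the CUDA accelerator: release cached device memory.
        torch.xpu.empty_cache()

    def get_device_stats(self, device: torch.device) -> dict[str, Any]:
        return torch.xpu.memory_stats(device)

    @staticmethod
    def parse_devices(devices: Union[int, list[int]]) -> list[int]:
        # Simplified parsing: an int selects the first N devices.
        if isinstance(devices, int):
            return list(range(devices))
        return devices

    @staticmethod
    def get_parallel_devices(devices: list[int]) -> list[torch.device]:
        return [torch.device("xpu", i) for i in devices]

    @staticmethod
    def auto_device_count() -> int:
        return torch.xpu.device_count()

    @staticmethod
    def is_available() -> bool:
        # torch.xpu only exists after `import intel_extension_for_pytorch`.
        return hasattr(torch, "xpu") and torch.xpu.is_available()
```

An instance can then be passed directly to the Trainer, e.g. `Trainer(accelerator=XPUAccelerator(), devices=2)`, independently of whatever string alias gets registered.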
In the distributed case, instead of `nccl`, Intel provides the `ccl` backend for collective communications. There is a known bug that I encountered when testing: if one calls `torch.distributed.broadcast` with a list of strings, it induces a hang. I currently wrap that call with an explicit check against this case, which isn't ideal, but it does enable DDP on XPUs.
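The workaround for the string-broadcast hang could look roughly like the following. This is an illustrative sketch rather than the PR's code: `broadcast_object` is a hypothetical helper name, and it assumes the `ccl` backend accepts plain CPU tensors. It sidesteps `broadcast_object_list` on `ccl` by pickling the object through a `uint8` tensor.

```python
import pickle

import torch
import torch.distributed as dist


def broadcast_object(obj: object, src: int = 0) -> object:
    """Broadcast a picklable object, avoiding the ccl string-broadcast hang."""
    if dist.get_backend() == "ccl":
        # Manual path, taken by every rank so the collectives stay matched:
        # serialize on the source rank, share the size, then the payload.
        if dist.get_rank() == src:
            payload = torch.tensor(list(pickle.dumps(obj)), dtype=torch.uint8)
        else:
            payload = torch.empty(0, dtype=torch.uint8)
        size = torch.tensor([payload.numel()], dtype=torch.long)
        dist.broadcast(size, src)
        if dist.get_rank() != src:
            payload = torch.empty(int(size.item()), dtype=torch.uint8)
        dist.broadcast(payload, src)
        return pickle.loads(bytes(payload.tolist()))
    # Every other backend can use the stock object broadcast.
    obj_list = [obj]
    dist.broadcast_object_list(obj_list, src=src)
    return obj_list[0]
```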
Both `xpu` and `ccl` are currently extensions to pytorch and must be loaded dynamically. `torch.xpu` is available with `import intel_extension_for_pytorch`, and the `ccl` backend to `torch.distributed` becomes available when one does `import oneccl_bindings_for_pytorch`. Because of this, I have in many cases done one of these (see the sketch below):

- Where I know `xpu` is initialized, I use it freely.
- Around `torch.distributed` initialization, since the target backend must be available, I intercept and ensure the oneccl bindings are loaded.
- Where I reference `torch.xpu` and can't be sure it's available, I include logic analogous to cuda: instead of `if torch.cuda.is_available(): ...` I do `if hasattr(torch, "xpu") and torch.xpu.is_available(): ...`.

This PR was not intended to introduce any breaking changes.
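To make the last two bullets concrete, here is a sketch of the dynamic-loading pattern. The helper names (`xpu_available`, `ensure_ccl_backend`, `init_xpu_distributed`) are illustrative; the module names are the real ones mentioned above.

```python
import torch


def xpu_available() -> bool:
    # Guard with hasattr: torch.xpu does not exist until
    # intel_extension_for_pytorch has been imported somewhere.
    return hasattr(torch, "xpu") and torch.xpu.is_available()


def ensure_ccl_backend() -> None:
    # Importing the bindings registers the "ccl" backend with
    # torch.distributed as a side effect.
    import oneccl_bindings_for_pytorch  # noqa: F401


def init_xpu_distributed() -> None:
    # Assumes rank/world-size environment variables are set by the
    # launcher (e.g. mpirun or torchrun).
    ensure_ccl_backend()
    torch.distributed.init_process_group(backend="ccl")
```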
I think this PR needs some discussion before we even ask "should it be merged".
📚 Documentation preview 📚: https://pytorch-lightning--19443.org.readthedocs.build/en/19443/