
Conversation

@zhwesky2010 (Contributor) commented Apr 21, 2023

PR types

New features

PR changes

APIs

Description

Pcard-66984

Support 0D Tensor input for distributed all_gather/scatter/all_to_all.

This is the static-graph counterpart of the implementation in #49279.
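As a rough sketch of the shape semantics involved, the toy model below (not Paddle's actual infershape code) captures the rule discussed later in the review: `all_gather` concatenates the inputs of all `nranks` processes along dim 0, so the output has `nranks` times the numel of the input. The assumption here is that a 0D input (empty dims, numel 1) is gathered into a 1D output of shape `[nranks]`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of the shape rule (NOT Paddle's real infershape code):
// a 0D tensor has an empty dims vector and numel 1.
int64_t Numel(const std::vector<int64_t>& dims) {
  int64_t n = 1;
  for (int64_t d : dims) n *= d;
  return n;
}

// all_gather concatenates the inputs of all nranks processes along dim 0,
// so out has nranks times the numel of in. A 0D input has no dim 0 to
// concatenate along, so (by assumption) its output is a 1D tensor [nranks].
std::vector<int64_t> AllGatherOutDims(const std::vector<int64_t>& in_dims,
                                      int64_t nranks) {
  if (in_dims.empty()) return {nranks};  // 0D in -> 1D out of shape [nranks]
  std::vector<int64_t> out = in_dims;
  out[0] *= nranks;
  return out;
}
```
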

@paddle-bot bot commented Apr 21, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot bot commented Apr 21, 2023

❌ This PR was not created using the PR template. You can refer to this Demo.
Please use the PR template; it saves our maintainers' time so that more developers can get help.

@zhwesky2010 changed the title from "[Zero-Dim] update distributed scatter/all_to_all for support 0D tensor" to "[Zero-Dim] distributed scatter/all_to_all support input 0D tensor" on Apr 21, 2023
LiYuRio previously approved these changes Apr 21, 2023

@LiYuRio (Contributor) left a comment


LGTM

@zhwesky2010 force-pushed the 0d_dist_scatter branch 3 times, most recently from 4d58d16 to 69ee1cf on April 24, 2023 14:36
@zhwesky2010 changed the title from "[Zero-Dim] distributed scatter/all_to_all support input 0D tensor" to "[Zero-Dim] distributed all_gather/scatter/all_to_all support input 0D tensor" on Apr 26, 2023
@LiYuRio (Contributor) left a comment


LGTM

  int64_t send_numel = in->numel();
  const T* send_buff = in->data<T>();
- T* recv_buff = out->data<T>();
+ T* recv_buff = out->mutable_data<T>(place);
A Contributor commented:

Is the size of out correct here? For all_gather, out should be nranks times the size of in.

The Contributor Author replied:

[screenshot: infoflow 2023-04-26 20-22-19]
Yes, out is nranks times the size of in, and that dim for out is already set in infershape. The op kernel was redundantly setting it again at compute time, so that duplicate was removed.

@From00 (Contributor) left a comment


LGTM for using mutable_data

@zhwesky2010 zhwesky2010 merged commit 0b6dd53 into PaddlePaddle:develop Apr 26, 2023
@paddle-bot bot commented Apr 26, 2023

Your PR has been merged into the repository. An official integration test will be conducted later. Stay tuned.

XiaoguangHu01 pushed a commit that referenced this pull request on May 9, 2023 (#53601):

* [Zero-Dim] fix functool.reduce more safe with intial value, to support empty list (#53182)
* [Zero-Dim] support 0d tensor for shape and squeeze onednn kernel (#52832)
  * support 0d tensor for shape and squeeze onednn kernel
  * set python api for shape op ut
* [Zero-Dim] distributed scatter/all_to_all support input 0D tensor (#53186)
* [Zero-Dim] Support paddle.sum/mean/loss api output 0D, test=allcase (#52739)
* [CINN Support 0D-Tensor] CINN supports 0D-Tensor with trick temporarily (#53382)
  * [CINN Support 0D-Tensor] CINN supports 0D-Tensor with trick temporarily
  * Add unittest
* [CINN Support 0D-Tensor] CINN hack squeeze2 with trick temporarily (#53454)
* fix test_autograd_dynamic (#53473)

Co-authored-by: zhwesky2010 <zhouwei25@baidu.com>
Co-authored-by: YangQun <qun.yang@intel.com>
Co-authored-by: HongyuJia <jiahongyu@baidu.com>
Co-authored-by: HydrogenSulfate <490868991@qq.com>

Labels: none yet

3 participants