Conversation

@AndSonder
Contributor
PR Category

Auto Parallel

PR Types

Improvements

Description

Consider the following case:

```python
# Taken from PaddleMIX: https://github.com/PaddlePaddle/PaddleMIX/blob/8b896d533811a3500af3064c5f1952b77003d4c8/ppdiffusers/ppdiffusers/models/unet_2d_blocks.py#L1149-L1155
def custom_forward(*inputs):
    ...
```

Using `bound_args.arguments` is wrong here: no matter how many positional values are passed in, `bound_args.arguments` contains only a single entry, namely the packed `inputs` tuple.
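This packing behavior can be demonstrated with plain `inspect`, no Paddle required (a minimal sketch; the function name mirrors the snippet above and is illustrative only):

```python
import inspect

def custom_forward(*inputs):
    return inputs

# Bind three positional arguments against the *inputs signature.
bound_args = inspect.signature(custom_forward).bind(1, 2, 3)

# All positional values are packed into one "inputs" entry, so code that
# iterates over bound_args.arguments sees a single tuple, not three values.
print(bound_args.arguments)       # {'inputs': (1, 2, 3)}
print(len(bound_args.arguments))  # 1
```

Any logic that assumes one entry per passed-in tensor therefore miscounts as soon as the wrapped function uses `*inputs`.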

Because some existing functionality relies on the fact that kwargs passed through `PyLayer` do not receive gradients, modifying `PyLayer` directly would break those features.

Therefore, this PR raises an explicit error for this case instead.

Related PR:

@paddle-bot
paddle-bot bot commented Apr 25, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@AndSonder AndSonder marked this pull request as ready for review April 27, 2024 03:26
@paddle-ci-bot
paddle-ci-bot bot commented May 5, 2024

Sorry to inform you that b71ac65's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

Member
@ForFishes ForFishes left a comment

LGTM

@AndSonder
Contributor Author

This PR requests an exemption from the Coverage CI. It mainly adds an interception for the erroneous case described above; for all other cases, the corresponding unit tests have already been added. @luotao1

@luotao1 luotao1 merged commit 7535134 into PaddlePaddle:develop May 7, 2024
FlamingoPg pushed a commit to FlamingoPg/Paddle that referenced this pull request May 7, 2024
【Hackathon 6th No.35】support kwargs for recompute when use_reentrant == True fix (PaddlePaddle#63880) * support kwargs for recompute when use_reentrant == True * recover third party merge main lint delete printf change flash attn version
co63oc pushed a commit to co63oc/Paddle that referenced this pull request May 10, 2024
…== True fix (PaddlePaddle#63880) * support kwargs for recompute when use_reentrant == True * recover third party