
Conversation

@wuyefeilin
Contributor

PR types

Others

PR changes

OPs

Describe

Add fp16/bf16 to clip op
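
For context, a minimal usage sketch of what the added dtype support enables (an illustration rather than code from this PR; the shape and clip bounds mirror the test values below, and it assumes a device where the fp16 clip kernel is registered):

import paddle

# Illustrative only: with fp16 registered for clip, a low-precision tensor
# can be clipped directly (typically on a GPU where the kernel is available).
x = paddle.rand([4, 10, 10]).astype('float16')
y = paddle.clip(x, min=0.3, max=0.8)
print(y.dtype)  # paddle.float16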

paddle-bot bot commented Mar 26, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

self.shape = (4, 10, 10)
self.max = 0.8
self.min = 0.3
self.inputs['Max'] = np.array([0.8]).astype(self.dtype)
Contributor

For BF16, this input should be initialized as fp32 and then converted with convert_float_to_uint16.
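
For readers unfamiliar with this convention, a minimal sketch of how Paddle BF16 op tests usually prepare inputs, assuming the convert_float_to_uint16 helper from the op_test utilities; the class name and import path are illustrative and may differ across Paddle versions:

import numpy as np
from op_test import OpTest, convert_float_to_uint16  # import path may vary

class TestClipBF16Op(OpTest):  # illustrative class name
    def setUp(self):
        self.op_type = 'clip'
        self.dtype = np.uint16  # OpTest carries bf16 data as uint16 bit patterns
        # Initialize the data in fp32 first ...
        x = np.random.random((4, 10, 10)).astype(np.float32)
        # ... then pack the fp32 values into bf16 (uint16) for the op inputs.
        self.inputs = {'X': convert_float_to_uint16(x)}
        self.attrs = {'min': 0.3, 'max': 0.8}
        self.outputs = {'Out': convert_float_to_uint16(np.clip(x, 0.3, 0.8))}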

Contributor Author


The clip kernel already performs this type conversion; converting here as well would cause a mismatch.
auto max_ = max.to<T>();  // the kernel casts the min/max bounds to the compute type T
auto min_ = min.to<T>();

@ZzSean ZzSean changed the title [AMP] Add fp16/bf16 to clip op [AMP OP&Test] Add fp16/bf16 to clip op Mar 29, 2023
Contributor

@ZzSean ZzSean left a comment


LGTM

@ZzSean ZzSean merged commit ad01ecc into PaddlePaddle:develop Mar 29, 2023
longranger2 pushed a commit to longranger2/Paddle that referenced this pull request Mar 30, 2023
* add fp16/bf16 to clip op
* fix as reviewed
* update test_clip_op.py
* update test_clip_op.py
jeff41404 pushed a commit that referenced this pull request Apr 4, 2023
* remove op.py
* [Zero-Dim] change Tensor.numpy() usage to other equivalent usage, avoid hack (#52197)
* [BugFix] fix compute error in fused_dropout_add (#52261)
* fix bg
* add utest
* add utest
* [CodeStyle][UP034] remove (()) cases (#52060)
* add up34
* modify var name in loop
* revert changes in test_slice
* Revert "modify var name in loop" This reverts commit 6d748e3.
* temporarily ignore test_slice.py
* add comment
* empty commit, re-trigger all ci
* fix inc
---------
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
* [AMP OP&Test] add unittest for log_softmax (#52264)
* Fix_Linux_[-Wterminate]warning (#52186)
* [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output (#52214)
* [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output
* delete dtype,shape func of multi_inplace op
* [CustomOP Inplace] Automap inplace dtype and shape, support vector<Tensor> output
* [CustomOP Inplace] Auto-generate python API for inplace vector<Tensor> output
* [AMP OP&Test] add float16 optest for reshape_op (#51678)
* [AMP OP&Test] add float16 optest for reshape_op
* add public_python_api
* [AMP OP&Test] Add fp16/bf16 to clip op (#52158)
* add fp16/bf16 to clip op
* fix as reviewed
* update test_clip_op.py
* update test_clip_op.py
* fix bug
* fix code style
* fix bug
* fix bug
---------
Co-authored-by: Zhou Wei <1183042833@qq.com>
Co-authored-by: ShenLiang <1422485404@qq.com>
Co-authored-by: 张春乔 <83450930+Liyulingyue@users.noreply.github.com>
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
Co-authored-by: Ccc <52520497+juncaipeng@users.noreply.github.com>
Co-authored-by: Galaxy1458 <55453380+Galaxy1458@users.noreply.github.com>
Co-authored-by: HongyuJia <jiahongyu@baidu.com>
Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
Co-authored-by: wuyefeilin <30919197+wuyefeilin@users.noreply.github.com>

Labels: None yet

3 participants