
Conversation

@juncaipeng (Contributor) commented Mar 29, 2023

PR types

Others

PR changes

Others

Describe

Add a unittest for log_softmax.

@paddle-bot (bot) commented Mar 29, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

        self.check_output(atol=1e-3)

    def test_check_grad(self):
        self.check_grad(['X'], ['Out'], max_relative_error=1e-2)
A reviewer (Contributor) commented:

Try the default values first and see whether the test passes; adjust them only if it does not.

The author (Contributor) replied:

Yes. After testing, setting max_relative_error=1e-2 is required for the check to pass.
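For context, here is a minimal sketch of what the fp16 test under discussion might look like in Paddle's OpTest framework. The class name, input shape, and the numpy reference computation are illustrative assumptions; only the op name and the two tolerances come from this thread.

```python
import unittest

import numpy as np
from op_test import OpTest  # Paddle's operator-test helper; import path assumed


class TestLogSoftmaxFP16Op(OpTest):  # hypothetical class name
    def setUp(self):
        self.op_type = 'log_softmax'
        self.dtype = np.float16
        axis = -1
        x = np.random.uniform(0.1, 1.0, [2, 3, 4, 5]).astype(self.dtype)
        # float64 reference: log_softmax(x) = x - logsumexp(x), computed stably
        x64 = x.astype(np.float64)
        shifted = x64 - x64.max(axis=axis, keepdims=True)
        out = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
        self.inputs = {'X': x}
        self.outputs = {'Out': out.astype(self.dtype)}
        self.attrs = {'axis': axis}

    def test_check_output(self):
        # fp16 cannot meet the default atol of 1e-5 (see the exchange below),
        # so it is relaxed to 1e-3.
        self.check_output(atol=1e-3)

    def test_check_grad(self):
        # The default relative tolerance failed under fp16 in the author's
        # runs; 1e-2 passed.
        self.check_grad(['X'], ['Out'], max_relative_error=1e-2)


if __name__ == '__main__':
    unittest.main()
```

Relaxing max_relative_error rather than an absolute tolerance is the natural knob for the gradient check here, since fp16 rounding error scales with the magnitude of the values being compared.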

        self.dtype = np.float16

    def test_check_output(self):
        self.check_output(atol=1e-3)
A reviewer (Contributor) commented:

The default is already 1e-3, so there is no need to set it.

The author (Contributor) replied:

In OpTest, the default atol for check_output is 1e-5.

[Screenshot: the OpTest.check_output signature, showing atol defaulting to 1e-5]
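For reference, a stand-in sketch of the default being cited (illustrative only; the real OpTest.check_output in Paddle's op_test.py takes many more keyword arguments, and the exact signature varies across versions):

```python
import unittest


class OpTest(unittest.TestCase):
    # Illustrative stand-in: the point is only that atol defaults to 1e-5,
    # which is why an fp16 test must pass atol=1e-3 explicitly.
    def check_output(self, atol=1e-5, **kwargs):
        ...
```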

PaddlePaddle locked and limited conversation to collaborators Mar 29, 2023
PaddlePaddle unlocked this conversation Mar 29, 2023
@ZzSean (Contributor) left a comment:

LGTM

@Xreki (Contributor) left a comment:

LGTM for the setting of atol and max_relative_error

@juncaipeng merged commit 5208e2d into PaddlePaddle:develop Mar 29, 2023
longranger2 pushed a commit to longranger2/Paddle that referenced this pull request Mar 30, 2023
jeff41404 pushed a commit that referenced this pull request Apr 4, 2023
* remove op.py
* [Zero-Dim] change Tensor.numpy() usage to other equivalent usage, avoid hack (#52197)
* [BugFix] fix compute error in fused_dropout_add (#52261)
* fix bg
* add utest
* add utest
* [CodeStyle][UP034] remove (()) cases (#52060)
* add up34
* modify var name in loop
* revert changes in test_slice
* Revert "modify var name in loop" This reverts commit 6d748e3.
* temporarily ignore test_slice.py
* add comment
* empty commit, re-trigger all ci
* fix inc
---------
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
* [AMP OP&Test] add unittest for log_softmax (#52264)
* Fix_Linux_[-Wterminate]warning (#52186)
* [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output (#52214)
* [CustomOP Inplace] Automap inplace dtype and shape, prepare for vector<Tensor> output
* delete dtype,shape func of multi_inplace op
* [CustomOP Inplace] Automap inplace dtype and shape, support vector<Tensor> output
* [CustomOP Inplace] Auto-generate python API for inplace vector<Tensor> output
* [AMP OP&Test] add float16 optest for reshape_op (#51678)
* [AMP OP&Test] add float16 optest for reshape_op
* add public_python_api
* [AMP OP&Test] Add fp16/bf16 to clip op (#52158)
* add fp16/bf16 to clip op
* fix as reviewed
* update test_clip_op.py
* update test_clip_op.py
* fix bug
* fix code style
* fix bug
* fix bug
---------
Co-authored-by: Zhou Wei <1183042833@qq.com>
Co-authored-by: ShenLiang <1422485404@qq.com>
Co-authored-by: 张春乔 <83450930+Liyulingyue@users.noreply.github.com>
Co-authored-by: SigureMo <sigure.qaq@gmail.com>
Co-authored-by: Ccc <52520497+juncaipeng@users.noreply.github.com>
Co-authored-by: Galaxy1458 <55453380+Galaxy1458@users.noreply.github.com>
Co-authored-by: HongyuJia <jiahongyu@baidu.com>
Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
Co-authored-by: wuyefeilin <30919197+wuyefeilin@users.noreply.github.com>

Labels: None yet
Participants: 3