[0-size Tensor No.81、138、141] Add 0-size Tensor support for blha_get_max_len #72937
PR Category
Execute Infrastructure
PR Types
Improvements
Description
blha_get_max_len calls max, and numpy's max does not support 0-size arrays, so no unit test was added; only the kernels were modified so that the PaddleAPITest run is supported.
The change covers the forward pass only; there is no backward pass.
infermeta is unchanged.
The GPU/XPU kernels were modified; the result is a single integer kept on the CPU backend.

The PaddleAPITest run passes.
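The numpy limitation above is easy to reproduce, and the snippet below is a minimal sketch of exercising the op with 0-size inputs; the `paddle.incubate.nn.functional.blha_get_max_len(seq_lens_encoder, seq_lens_decoder, batch_size)` call and its return values are assumptions about the API, not verified against this PR:

```python
import numpy as np
import paddle

# numpy's max has no identity element for an empty reduction, so it raises
# on a 0-size array -- this is why no numpy-based unit test was added.
try:
    np.max(np.zeros((0,), dtype="int32"))
except ValueError as e:
    print("numpy max on 0-size:", e)

# Hedged sketch: call the op with 0-size sequence-length tensors; the exact
# signature and return values are assumptions, not taken from this PR.
seq_lens_encoder = paddle.zeros([0], dtype="int32")
seq_lens_decoder = paddle.zeros([0], dtype="int32")
batch_size = paddle.zeros([0], dtype="int32")
max_enc_len, max_dec_len = paddle.incubate.nn.functional.blha_get_max_len(
    seq_lens_encoder, seq_lens_decoder, batch_size
)
print(max_enc_len, max_dec_len)  # expected: scalar ints kept on the CPU backend
```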
mv: forward and backward modified.

infermeta needs no change.
The CPU/GPU kernels were modified; the forward passes share one impl.
Test cases were updated. The last dimension of x must match the size of vec, so the test cases are (sketched below):
x [0, 100], vec [100]
x [100, 0], vec [0]
x [0], vec [0]
In the PaddleAPITest run, the CUDA error and the Paddle error have been fixed; the remaining error is a torch error.
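A minimal sketch of the 0-size shape combinations listed above with `paddle.mv`; the expected output shapes are inferred from mv's [m, n] x [n] -> [m] semantics and are not verified here:

```python
import paddle

# 0-size cases from the test list above; mv maps x[m, n] x vec[n] -> out[m].
# The third case from the list, x [0] with vec [0], is omitted here because
# mv normally expects a 2-D x.
cases = [
    ([0, 100], [100]),  # expected out shape: [0]
    ([100, 0], [0]),    # expected out shape: [100]
]
for x_shape, vec_shape in cases:
    x = paddle.zeros(x_shape, dtype="float32")
    vec = paddle.zeros(vec_shape, dtype="float32")
    out = paddle.mv(x, vec)
    print(x_shape, vec_shape, "->", out.shape)
```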
nanmedian: forward and backward modified.
infermeta modified to set the output dimensions.
CPU/GPU kernels modified.
The return values need a Resize; Out is filled with NaN and median_index with 0.
The PaddleAPITest run passes.
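A minimal sketch of the 0-size behavior described above for nanmedian; the `mode="min"` form returning an (out, index) pair is an assumption about the paddle.nanmedian API, not taken from this PR:

```python
import paddle

# Reduce over a 0-size axis: each output element has no valid input values,
# so Out is expected to be filled with NaN and median_index with 0.
x = paddle.zeros([4, 0], dtype="float32")
out, index = paddle.nanmedian(x, axis=-1, mode="min")
print(out)    # expected: [nan, nan, nan, nan]
print(index)  # expected: [0, 0, 0, 0]
```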
