Status: Closed
1. Background
A 0-size Tensor is a Tensor in which some dimension of Tensor.shape is 0: the Tensor still has shape, dtype, place, and other metadata, but its element count is 0. 0-size Tensors show up frequently in particular workloads such as slicing and splitting. Paddle's support for 0-size Tensors is currently very weak and needs a systematic audit and build-out.
By taking part in this activity you will learn the design of Paddle's operator library framework, understand the overall execution flow of Paddle's dynamic graph, and gain a solid understanding of 0-size handling in a deep-learning framework. You may also touch on Paddle's composite operator mechanism and the compiler's symbolic shape inference. Dedicated mentors will answer questions as they come up.
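To make the definition above concrete, here is a minimal sketch using NumPy as a stand-in (NumPy's 0-size handling is the usual reference behavior; the same semantics are what Paddle Tensors should follow):

```python
import numpy as np

# A 0-size array: one dimension is 0, so shape and dtype exist
# but the element count is zero.
x = np.zeros((0, 3), dtype=np.float32)
print(x.shape)   # (0, 3)
print(x.size)    # 0

# Elementwise ops should simply produce another 0-size result.
y = np.abs(x)
print(y.shape)   # (0, 3)

# Reducing over the 0-length axis is still well defined for sum.
print(x.sum(axis=0).shape)   # (3,)
```

Note that the metadata stays fully meaningful even with zero elements, which is why shape inference must keep working in this case.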
2. Task Description
2.1 Task Introduction
This open-source task completes 0-size Tensor support in a set of Paddle APIs. When these APIs receive a 0-size Tensor they currently fail with, for example, wrong results, core dumps, CUDA errors, or raised exceptions. Most of these failures come from kernels that never considered the 0-size case, and some also involve API shape inference; the remaining causes need case-by-case analysis. The APIs to fix are listed below:
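As a rough illustration of the kernel-side fix pattern (a sketch with a hypothetical wrapper name, using NumPy in place of a real Paddle kernel): many of the core dumps in the table come from kernels that launch compute for an input whose element count is 0, so an early-return guard that just allocates the inferred 0-size output avoids the bad launch.

```python
import numpy as np

def abs_kernel(x: np.ndarray) -> np.ndarray:
    # Hypothetical guard: if the input has zero elements, skip the
    # compute entirely; shape and dtype of the output are still
    # well defined, so return an empty result of the same layout.
    if x.size == 0:
        return np.empty_like(x)
    return np.abs(x)

out = abs_kernel(np.empty((0, 5), dtype=np.float32))
print(out.shape)   # (0, 5)
```

The actual fix in a Paddle kernel is analogous: check `numel() == 0`, resize the output, and return before dispatching any device work.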
Important
Difficulty per task: 0.025×🌟
A walkthrough of the tasks is in the recording: https://meeting.tencent.com/crm/l59EWmRZc4 (00:16:00~00:36:00)
Detailed introduction:
Once the tasks in this issue are done, you can continue with
| No. | API | Claimant / Status | Category | CPU/GPU/testcase |
|---|---|---|---|---|
| 1 | paddle.abs | @DanielSun11 | coredump | testcase |
| 2 | paddle.acos | @DanielSun11 | coredump | testcase |
| 3 | paddle.acosh | @DanielSun11 | coredump | testcase |
| 4 | paddle.addmm | @co63oc | paddle_error | gpu,cpu |
| 5 | paddle.allclose | @co63oc | coredump | gpu |
| 6 | paddle.amax | @DrRyanHuang @co63oc | paddle_error | gpu,cpu |
| 7 | paddle.amin | @co63oc | paddle_error | gpu,cpu |
| 8 | paddle.angle | @co63oc | coredump | gpu |
| 9 | paddle.argmax | @DrRyanHuang @co63oc @zhangdongq @luyl975 | paddle_error | gpu,cpu |
| 10 | paddle.argmin | @DrRyanHuang @co63oc @zhangdongq | paddle_error | gpu,cpu |
| 11 | paddle.argsort | @enkilee | coredump | testcase |
| 12 | paddle.as_real | @enkilee @co63oc @luyl975 | paddle_error | testcase |
| 13 | paddle.asin | @DanielSun11 | coredump | testcase |
| 14 | paddle.asinh | @DanielSun11 | coredump | testcase |
| 15 | paddle.atan | @DanielSun11 | coredump | testcase |
| 16 | paddle.atan2 | @DanielSun11 | coredump | gpu,cpu |
| 17 | paddle.atanh | @DanielSun11 | coredump | testcase |
| 18 | paddle.autograd.hessian | @DanielSun11 | | cpu |
| 19 | paddle.bitwise_and | @wanghuancoder | | cpu |
| 20 | paddle.bitwise_invert | @co63oc | coredump | testcase |
| 21 | paddle.bitwise_left_shift | @wanghuancoder | | cpu |
| 22 | paddle.bitwise_not | @co63oc | coredump | testcase |
| 23 | paddle.bitwise_or | @wanghuancoder | | cpu |
| 24 | paddle.bitwise_right_shift | @wanghuancoder | | cpu |
| 25 | paddle.bitwise_xor | @wanghuancoder | | cpu |
| 26 | paddle.bmm | @co63oc | paddle_error | gpu,cpu |
| 27 | paddle.broadcast_to | @DanielSun11 | | cpu |
| 28 | paddle.bucketize | @co63oc | coredump | gpu |
| 29 | paddle.cartesian_prod | @co63oc | coredump | gpu |
| 30 | paddle.ceil | @DanielSun11 | coredump | testcase |
| 31 | paddle.chunk | @DanielSun11 @enkilee | coredump | cpu |
| 32 | paddle.clip | @co63oc | coredump | testcase |
| 33 | paddle.column_stack | @DanielSun11 | paddle_error | gpu,cpu |
| 34 | paddle.complex | @co63oc | coredump | gpu,cpu |
| 35 | paddle.concat | @DanielSun11 | paddle_error | cpu |
| 36 | paddle.cos | @DanielSun11 | coredump | testcase |
| 37 | paddle.cosh | @DanielSun11 | coredump | testcase |
| 38 | paddle.cross | @DanielSun11 | coredump | gpu,cpu |
| 39 | paddle.cummax | @co63oc | coredump | gpu,cpu |
| 40 | paddle.cummin | @co63oc | coredump | gpu,cpu |
| 41 | paddle.cumsum | @co63oc | coredump | gpu |
| 42 | paddle.cumulative_trapezoid | @DanielSun11 | paddle_error | gpu,cpu |
| 43 | paddle.diag | @VVX94 @co63oc | paddle_error | gpu,cpu |
| 44 | paddle.diag_embed | @co63oc | coredump | gpu |
| 45 | paddle.diagonal | @co63oc | coredump, paddle_error | gpu |
| 46 | paddle.diff | @co63oc | coredump | cpu |
| 47 | paddle.digamma | @co63oc | coredump | gpu |
| 48 | paddle.dist | @co63oc | paddle_error | gpu,cpu |
| 49 | paddle.dsplit | @DanielSun11 | coredump | cpu |
| 50 | paddle.dstack | @DanielSun11 | paddle_error | gpu,cpu |
| 51 | paddle.einsum | @DanielSun11 | coredump | gpu |
| 52 | paddle.equal | @wanghuancoder | | cpu |
| 53 | paddle.erfinv | @co63oc | coredump | testcase |
| 54 | paddle.exp | @co63oc @Flowow-zjw | coredump | testcase |
| 55 | paddle.expand | @co63oc | coredump | gpu |
| 56 | paddle.expm1 | @co63oc @Flowow-zjw | coredump | testcase |
| 57 | paddle.fft.fft2 | @co63oc | paddle_error | gpu,cpu |
| 58 | paddle.fft.fftn | @co63oc | paddle_error | gpu,cpu |
| 59 | paddle.fft.fftshift | @co63oc | coredump | gpu |
| 60 | paddle.fft.ifft2 | @co63oc | paddle_error | gpu,cpu |
| 61 | paddle.fft.ifftn | @co63oc | paddle_error | gpu,cpu |
| 62 | paddle.fft.ifftshift | @co63oc | coredump | gpu |
| 63 | paddle.fft.ihfft2 | @co63oc | paddle_error | gpu,cpu |
| 64 | paddle.fft.ihfftn | @co63oc | paddle_error | gpu,cpu |
| 65 | paddle.fft.rfft2 | @co63oc | paddle_error | gpu,cpu |
| 66 | paddle.fft.rfftn | @co63oc | paddle_error | gpu,cpu |
| 67 | paddle.flip | @co63oc | coredump | gpu |
| 68 | paddle.floor | @DanielSun11 | coredump | testcase |
| 69 | paddle.fmax | @DanielSun11 | coredump | gpu,cpu |
| 70 | paddle.fmin | @DanielSun11 | coredump | gpu,cpu |
| 71 | paddle.frac | @DanielSun11 @Flowow-zjw @straigrand @co63oc | coredump | gpu |
| 72 | paddle.frexp | @DanielSun11 @co63oc | coredump | gpu |
| 73 | paddle.gammaln | @DanielSun11 | coredump | gpu |
| 74 | paddle.hsplit | @DanielSun11 | coredump | testcase |
| 75 | paddle.hstack | @DanielSun11 @enkilee | paddle_error | cpu |
| 76 | paddle.i0 | @co63oc | coredump | testcase |
| 77 | paddle.i0e | @co63oc | coredump | testcase |
| 78 | paddle.i1 | @co63oc | coredump | testcase |
| 79 | paddle.i1e | @co63oc | coredump | testcase |
| 80 | paddle.imag | @DanielSun11 | coredump | gpu |
| 81 | paddle.incubate.nn.functional.blha_get_max_len | @co63oc | coredump | gpu |
| 82 | paddle.incubate.nn.functional.fused_bias_act | @HeyDavid633 @co63oc | coredump | gpu |
| 83 | paddle.incubate.nn.functional.fused_feedforward | @DanielSun11 | coredump | gpu |
| 84 | paddle.incubate.nn.functional.fused_layer_norm | @co63oc | coredump | gpu |
| 85 | paddle.incubate.nn.functional.fused_matmul_bias | @HeyDavid633 @co63oc | | cpu |
| 86 | paddle.incubate.nn.functional.fused_rms_norm | @co63oc | coredump | gpu |
| 87 | paddle.incubate.nn.functional.fused_rotary_position_embedding | @DanielSun11 | coredump | gpu |
| 88 | paddle.incubate.nn.functional.variable_length_memory_efficient_attention | @co63oc | coredump | gpu |
| 89 | paddle.incubate.softmax_mask_fuse | @co63oc | coredump | gpu |
| 90 | paddle.index_fill | @co63oc | coredump | gpu,cpu |
| 91 | paddle.inner | @co63oc | paddle_error | gpu,cpu |
| 92 | paddle.isfinite | @co63oc | coredump | gpu |
| 93 | paddle.isin | @co63oc | coredump | gpu,cpu |
| 94 | paddle.isinf | @co63oc | coredump | gpu |
| 95 | paddle.isnan | @co63oc | coredump | gpu |
| 96 | paddle.isneginf | @co63oc | coredump | gpu |
| 97 | paddle.isposinf | @co63oc | coredump | gpu |
| 98 | paddle.kron | @co63oc | coredump, paddle_error | gpu,cpu |
| 99 | paddle.kthvalue | @co63oc | coredump | gpu |
| 100 | paddle.ldexp | @co63oc @enkilee | coredump | testcase |
| 101 | paddle.lerp | @co63oc | coredump, paddle_error | gpu,cpu |
| 102 | paddle.lgamma | @co63oc | coredump | gpu |
| 103 | paddle.linalg.cholesky_solve | @DanielSun11 | paddle_error | gpu,cpu |
| 104 | paddle.linalg.cond | @DanielSun11 | paddle_error | gpu,cpu |
| 105 | paddle.linalg.cov | @co63oc | shape_diff, paddle_error | gpu,cpu |
| 106 | paddle.linalg.det | @co63oc | coredump | gpu,cpu |
| 107 | paddle.linalg.inv | @co63oc | paddle_error | cpu |
| 108 | paddle.linalg.lstsq | @DanielSun11 | shape_diff, coredump, paddle_error | gpu,cpu |
| 109 | paddle.linalg.matrix_norm | @co63oc | coredump, paddle_error | gpu,cpu |
| 110 | paddle.linalg.matrix_power | @DanielSun11 | paddle_error | gpu,cpu |
| 111 | paddle.linalg.matrix_rank | @DanielSun11 | paddle_error | gpu,cpu |
| 112 | paddle.linalg.multi_dot | @co63oc | paddle_error | gpu,cpu |
| 113 | paddle.linalg.norm | @co63oc | paddle_error | gpu,cpu |
| 114 | paddle.linalg.pinv | @co63oc | paddle_error | gpu,cpu |
| 115 | paddle.linalg.slogdet | @co63oc | coredump | gpu |
| 116 | paddle.linalg.solve | @DanielSun11 | paddle_error | gpu,cpu |
| 117 | paddle.linalg.svd_lowrank | @co63oc | paddle_error | gpu,cpu |
| 118 | paddle.linalg.triangular_solve | @co63oc | | cpu |
| 119 | paddle.linalg.vector_norm | @co63oc | coredump, paddle_error | gpu,cpu |
| 120 | paddle.log | @enkilee | coredump | testcase |
| 121 | paddle.log10 | @co63oc | coredump | testcase |
| 122 | paddle.log1p | @co63oc | coredump | testcase |
| 123 | paddle.log2 | @co63oc | coredump | testcase |
| 124 | paddle.logaddexp | @co63oc | coredump | gpu |
| 125 | paddle.logcumsumexp | @co63oc | coredump | gpu |
| 126 | paddle.logical_and | @wanghuancoder | | cpu |
| 127 | paddle.logical_or | @wanghuancoder | | cpu |
| 128 | paddle.logical_xor | @wanghuancoder | | cpu |
| 129 | paddle.logit | @co63oc | coredump | testcase |
| 130 | paddle.logsumexp | @DrRyanHuang @co63oc | paddle_error | gpu,cpu |
| 131 | paddle.masked_fill | @co63oc | coredump | gpu,cpu |
| 132 | paddle.masked_select | @co63oc | coredump, paddle_error | cpu |
| 133 | paddle.matmul | @co63oc | coredump | gpu,cpu |
| 134 | paddle.meshgrid | @DanielSun11 | coredump | gpu |
| 135 | paddle.minimum | @VVX94 | | cpu |
| 136 | paddle.mm | @co63oc | coredump, paddle_error | gpu,cpu |
| 137 | paddle.multigammaln | @DanielSun11 @co63oc | coredump | gpu |
| 138 | paddle.mv | @co63oc | coredump | gpu,cpu |
| 139 | paddle.nan_to_num | @co63oc | coredump | gpu |
| 140 | paddle.nanmean | @co63oc | coredump | gpu |
| 141 | paddle.nanmedian | @co63oc | coredump | gpu |
| 142 | paddle.nanquantile | @co63oc | coredump | gpu,cpu |
| 143 | paddle.nansum | @co63oc | coredump | gpu |
| 144 | paddle.nextafter | @enkilee | coredump | gpu,cpu |
| 145 | paddle.nn.functional.adaptive_avg_pool1d | @co63oc | coredump, paddle_error | gpu,cpu |
| 146 | paddle.nn.functional.adaptive_avg_pool2d | @co63oc | coredump | gpu |
| 147 | paddle.nn.functional.adaptive_avg_pool3d | @co63oc | coredump | gpu |
| 148 | paddle.nn.functional.adaptive_max_pool1d | @co63oc | coredump | gpu,cpu |
| 149 | paddle.nn.functional.adaptive_max_pool2d | @co63oc | coredump | gpu,cpu |
| 150 | paddle.nn.functional.adaptive_max_pool3d | @co63oc | coredump | gpu,cpu |
| 151 | paddle.nn.functional.affine_grid | @co63oc @DanielSun11 | | cpu |
| 152 | paddle.nn.functional.avg_pool1d | @co63oc | coredump | gpu |
| 153 | paddle.nn.functional.avg_pool2d | @co63oc | coredump | gpu |
| 154 | paddle.nn.functional.avg_pool3d | @co63oc | coredump | gpu |
| 155 | paddle.nn.functional.batch_norm | @co63oc | | cpu |
| 156 | paddle.nn.functional.binary_cross_entropy_with_logits | @co63oc | coredump | testcase |
| 157 | paddle.nn.functional.celu | @DanielSun11 | coredump | testcase |
| 158 | paddle.nn.functional.channel_shuffle | @co63oc | coredump | gpu |
| 159 | paddle.nn.functional.conv1d | @inaomIIsfarell @co63oc | coredump | gpu |
| 160 | paddle.nn.functional.conv1d_transpose | @inaomIIsfarell @co63oc | coredump | gpu |
| 161 | paddle.nn.functional.conv2d | @co63oc | coredump | gpu,cpu |
| 162 | paddle.nn.functional.conv2d_transpose | @co63oc | coredump | gpu,cpu |
| 163 | paddle.nn.functional.conv3d | @co63oc | coredump | gpu |
| 164 | paddle.nn.functional.conv3d_transpose | @co63oc | coredump | gpu |
| 165 | paddle.nn.functional.cosine_similarity | @co63oc | shape_diff, coredump | gpu,cpu |
| 166 | paddle.nn.functional.ctc_loss | @co63oc | coredump | gpu,cpu |
| 167 | paddle.nn.functional.elu | @DanielSun11 | coredump | testcase |
| 168 | paddle.nn.functional.flashmask_attention | | coredump | gpu |
| 169 | paddle.nn.functional.fold | @co63oc | | cpu |
| 170 | paddle.nn.functional.fractional_max_pool2d | @co63oc | coredump | gpu |
| 171 | paddle.nn.functional.fractional_max_pool3d | @co63oc | coredump | gpu |
| 172 | paddle.nn.functional.gelu | @co63oc | coredump | gpu |
| 173 | paddle.nn.functional.glu | @co63oc | coredump | cpu |
| 174 | paddle.nn.functional.grid_sample | @co63oc | coredump | gpu,cpu |
| 175 | paddle.nn.functional.group_norm | @co63oc | coredump | gpu |
| 176 | paddle.nn.functional.hardshrink | @DanielSun11 | coredump | testcase |
| 177 | paddle.nn.functional.hardsigmoid | @DanielSun11 | coredump | testcase |
| 178 | paddle.nn.functional.hardswish | @co63oc | coredump | testcase |
| 179 | paddle.nn.functional.hardtanh | @DanielSun11 | coredump | testcase |
| 180 | paddle.nn.functional.hinge_embedding_loss | @co63oc | coredump | gpu |
| 181 | paddle.nn.functional.interpolate | @co63oc | coredump | gpu |
| 182 | paddle.nn.functional.kl_div | @co63oc | coredump | gpu |
| 183 | paddle.nn.functional.l1_loss | @co63oc | coredump | gpu |
| 184 | paddle.nn.functional.label_smooth | @co63oc | coredump | testcase |
| 185 | paddle.nn.functional.layer_norm | @co63oc | coredump | gpu,cpu |
| 186 | paddle.nn.functional.leaky_relu | @co63oc | coredump | testcase |
| 187 | paddle.nn.functional.linear | @co63oc @DanielSun11 | | cpu |
| 188 | paddle.nn.functional.local_response_norm | @co63oc | coredump | gpu |
| 189 | paddle.nn.functional.log_sigmoid | @DanielSun11 | coredump | testcase |
| 190 | paddle.nn.functional.log_softmax | @co63oc | coredump | gpu |
| 191 | paddle.nn.functional.lp_pool1d | @co63oc | coredump | gpu |
| 192 | paddle.nn.functional.lp_pool2d | @co63oc | coredump | gpu |
| 193 | paddle.nn.functional.margin_ranking_loss | @co63oc | coredump | gpu |
| 194 | paddle.nn.functional.max_pool1d | @co63oc | coredump | gpu,cpu |
| 195 | paddle.nn.functional.max_pool2d | @co63oc | coredump | gpu,cpu |
| 196 | paddle.nn.functional.max_pool3d | @co63oc | coredump | gpu,cpu |
| 197 | paddle.nn.functional.max_unpool1d | @co63oc | coredump | gpu |
| 198 | paddle.nn.functional.max_unpool2d | @co63oc | coredump | gpu |
| 199 | paddle.nn.functional.max_unpool3d | @co63oc | coredump | gpu |
| 200 | paddle.nn.functional.maxout | @co63oc | coredump | gpu |
| 201 | paddle.nn.functional.mish | @DanielSun11 | coredump | testcase |
| 202 | paddle.nn.functional.mse_loss | @co63oc | coredump | gpu |
| 203 | paddle.nn.functional.multi_margin_loss | @co63oc | coredump | gpu |
| 204 | paddle.nn.functional.normalize | @co63oc | coredump, paddle_error | gpu |
| 205 | paddle.nn.functional.pad | @co63oc | coredump | gpu |
| 206 | paddle.nn.functional.pairwise_distance | @co63oc | coredump, paddle_error | gpu,cpu |
| 207 | paddle.nn.functional.pixel_shuffle | @co63oc | coredump | gpu |
| 208 | paddle.nn.functional.pixel_unshuffle | @co63oc | coredump | gpu |
| 209 | paddle.nn.functional.poisson_nll_loss | @co63oc | coredump | gpu |
| 210 | paddle.nn.functional.prelu | @co63oc | coredump | gpu |
| 211 | paddle.nn.functional.relu | @DanielSun11 | coredump | testcase |
| 212 | paddle.nn.functional.relu6 | @co63oc | coredump | testcase |
| 213 | paddle.nn.functional.rrelu | @HuangJunze2003 @co63oc | coredump | gpu |
| 214 | paddle.nn.functional.scaled_dot_product_attention | @DanielSun11 | coredump | gpu |
| 215 | paddle.nn.functional.selu | @DanielSun11 | coredump | gpu |
| 216 | paddle.nn.functional.sequence_mask | @co63oc | coredump | gpu |
| 217 | paddle.nn.functional.sigmoid | @DanielSun11 | coredump | testcase |
| 218 | paddle.nn.functional.sigmoid_focal_loss | @co63oc | coredump | testcase |
| 219 | paddle.nn.functional.silu | @DanielSun11 | coredump | testcase |
| 220 | paddle.nn.functional.soft_margin_loss | @co63oc | coredump | testcase |
| 221 | paddle.nn.functional.softmax | @kyrie-79 @co63oc | coredump | gpu |
| 222 | paddle.nn.functional.softplus | @DanielSun11 | coredump | testcase |
| 223 | paddle.nn.functional.softshrink | @DanielSun11 | coredump | testcase |
| 224 | paddle.nn.functional.softsign | @DanielSun11 | coredump | testcase |
| 225 | paddle.nn.functional.square_error_cost | @co63oc | coredump | gpu |
| 226 | paddle.nn.functional.swish | @co63oc | coredump | testcase |
| 227 | paddle.nn.functional.tanh | @DanielSun11 | coredump | testcase |
| 228 | paddle.nn.functional.tanhshrink | @DanielSun11 | coredump | testcase |
| 229 | paddle.nn.functional.temporal_shift | @co63oc | coredump | gpu |
| 230 | paddle.nn.functional.thresholded_relu | @DanielSun11 | coredump | testcase |
| 231 | paddle.nn.functional.triplet_margin_with_distance_loss | @co63oc | coredump, paddle_error | gpu,cpu |
| 232 | paddle.nn.functional.unfold | @enkilee | coredump | gpu,cpu |
| 233 | paddle.nn.functional.zeropad2d | @DanielSun11 | coredump | gpu |
| 234 | paddle.nn.quant.weight_only_linear | @DanielSun11 | coredump | gpu |
| 235 | paddle.nn.quant.weight_quantize | @DanielSun11 | coredump | gpu,cpu |
| 236 | paddle.nn.utils.parameters_to_vector | @co63oc | coredump | cpu |
| 237 | paddle.nn.utils.vector_to_parameters | @DanielSun11 | paddle_error | gpu,cpu |
| 238 | | @DanielSun11 | | |
| 239 | paddle.outer | @co63oc | paddle_error | gpu,cpu |
| 240 | paddle.pdist | @co63oc | coredump, paddle_error | gpu |
| 241 | paddle.polar | @co63oc | coredump | gpu |
| 242 | paddle.polygamma | @co63oc | coredump | testcase |
| 243 | paddle.quantile | @Flowow-zjw @enkilee | coredump | gpu,cpu |
| 244 | paddle.real | @Flowow-zjw @straigrand @co63oc | paddle_error | gpu |
| 245 | paddle.reciprocal | @DanielSun11 | coredump | testcase |
| 246 | paddle.renorm | @co63oc | coredump | gpu |
| 247 | paddle.repeat_interleave | @co63oc | coredump | gpu,cpu |
| 248 | paddle.reshape | @co63oc | | cpu |
| 249 | paddle.reverse | @DanielSun11 | coredump | gpu |
| 250 | paddle.roll | @co63oc | coredump | gpu |
| 251 | paddle.rot90 | @co63oc | coredump | gpu |
| 252 | paddle.round | @co63oc | coredump | testcase |
| 253 | paddle.row_stack | @DanielSun11 | paddle_error | gpu,cpu |
| 254 | paddle.rsqrt | @DanielSun11 | coredump | testcase |
| 255 | paddle.searchsorted | @co63oc | coredump | gpu |
| 256 | paddle.sgn | @VVX94 | paddle_error | gpu,cpu |
| 257 | paddle.signal.istft | @co63oc | | cpu |
| 258 | paddle.sin | @DanielSun11 | coredump | testcase |
| 259 | paddle.sinc | @co63oc | coredump | gpu |
| 260 | paddle.sinh | @DanielSun11 | coredump | testcase |
| 261 | paddle.slice | @co63oc | coredump | gpu |
| 262 | paddle.sort | @enkilee | coredump | testcase |
| 263 | paddle.split | @DanielSun11 | coredump | cpu |
| 264 | paddle.sqrt | @DanielSun11 | coredump | testcase |
| 265 | paddle.square | @DanielSun11 | coredump | testcase |
| 266 | paddle.stanh | @DanielSun11 | coredump | testcase |
| 267 | paddle.std | @co63oc | paddle_error | gpu,cpu |
| 268 | paddle.strided_slice | @DanielSun11 | coredump | gpu |
| 269 | paddle.subtract | @DanielSun11 | coredump | gpu |
| 270 | paddle.take | @co63oc | coredump | gpu |
| 271 | paddle.take_along_axis | @straigrand | | cpu |
| 272 | paddle.tan | @DanielSun11 | coredump | testcase |
| 273 | paddle.tanh | @DanielSun11 | coredump | testcase |
| 274 | paddle.tensor_split | @DanielSun11 | coredump | cpu |
| 275 | paddle.Tensor.__abs__ | @DanielSun11 | coredump | testcase |
| 276 | paddle.Tensor.__getitem__ | @DanielSun11 | coredump | gpu |
| 277 | paddle.Tensor.__matmul__ | @co63oc | | cpu |
| 278 | paddle.Tensor.__pow__ | @co63oc | coredump | gpu |
| 279 | paddle.Tensor.__rmatmul__ | @co63oc | | cpu |
| 280 | paddle.Tensor.__rpow__ | @co63oc | coredump | gpu |
| 281 | paddle.Tensor.__setitem__ | @DanielSun11 | coredump | gpu,cpu |
| 282 | paddle.Tensor.__sub__ | @DanielSun11 | coredump | gpu |
| 283 | paddle.Tensor.abs | @DanielSun11 | coredump | testcase |
| 284 | paddle.Tensor.amax | @DrRyanHuang @co63oc | paddle_error | gpu,cpu |
| 285 | paddle.Tensor.amin | @co63oc | paddle_error | gpu,cpu |
| 286 | paddle.Tensor.argmax | @co63oc | paddle_error | gpu,cpu |
| 287 | paddle.Tensor.argsort | @enkilee | coredump | testcase |
| 288 | paddle.Tensor.atanh | @DanielSun11 | coredump | testcase |
| 289 | paddle.Tensor.bmm | @co63oc | paddle_error | gpu,cpu |
| 290 | paddle.Tensor.broadcast_to | @DanielSun11 | | cpu |
| 291 | paddle.Tensor.ceil | @DanielSun11 | coredump | testcase |
| 292 | paddle.Tensor.cholesky_solve | @DanielSun11 | paddle_error | gpu,cpu |
| 293 | paddle.Tensor.chunk | @DanielSun11 @enkilee | coredump | cpu |
| 294 | paddle.Tensor.clip | @co63oc | coredump | testcase |
| 295 | paddle.Tensor.cos | @DanielSun11 | coredump | testcase |
| 296 | paddle.Tensor.cumsum | @co63oc | coredump | gpu |
| 297 | paddle.Tensor.diag_embed | @co63oc | coredump | gpu |
| 298 | paddle.Tensor.diagonal | @co63oc | paddle_error | gpu |
| 299 | paddle.Tensor.digamma | @co63oc | coredump | gpu |
| 300 | paddle.Tensor.erfinv | @co63oc | coredump | testcase |
| 301 | paddle.Tensor.exp | @co63oc | coredump | testcase |
| 302 | paddle.Tensor.expand_as | @co63oc | paddle_error | gpu,cpu |
| 303 | paddle.Tensor.fill_diagonal_ | @co63oc | coredump | gpu |
| 304 | paddle.Tensor.flip | @co63oc | coredump | gpu |
| 305 | paddle.Tensor.floor | @DanielSun11 | coredump | testcase |
| 306 | paddle.Tensor.frexp | @DanielSun11 @co63oc | coredump | gpu |
| 307 | paddle.Tensor.imag | @DanielSun11 | coredump | gpu |
| 308 | paddle.Tensor.inner | @co63oc | paddle_error | gpu,cpu |
| 309 | paddle.Tensor.isnan | @co63oc | coredump | gpu |
| 310 | paddle.Tensor.kthvalue | @co63oc | coredump | gpu,cpu |
| 311 | paddle.Tensor.lerp | @co63oc | paddle_error | gpu,cpu |
| 312 | paddle.Tensor.lgamma | @co63oc | coredump | gpu |
| 313 | paddle.Tensor.log | @co63oc @enkilee | coredump | testcase |
| 314 | paddle.Tensor.log10 | @co63oc | coredump | testcase |
| 315 | paddle.Tensor.log1p | @co63oc | coredump | testcase |
| 316 | paddle.Tensor.logit | @co63oc | coredump | testcase |
| 317 | paddle.Tensor.lu | @DanielSun11 | paddle_error | gpu,cpu |
| 318 | paddle.Tensor.matmul | @crashbussy @co63oc | shape_diff, paddle_error | gpu,cpu |
| 319 | paddle.Tensor.median | @co63oc | paddle_error | gpu,cpu |
| 320 | paddle.Tensor.mm | @co63oc | paddle_error | gpu,cpu |
| 321 | paddle.Tensor.mode | @co63oc | paddle_error | gpu,cpu |
| 322 | paddle.Tensor.multigammaln | @DanielSun11 @co63oc | coredump | gpu |
| 323 | paddle.Tensor.nansum | @co63oc | coredump | gpu |
| 324 | | @DanielSun11 | | |
| 325 | paddle.Tensor.outer | @co63oc | paddle_error | gpu |
| 326 | paddle.Tensor.quantile | @enkilee | coredump | gpu,cpu |
| 327 | paddle.Tensor.reciprocal | @DanielSun11 | coredump | testcase |
| 328 | paddle.Tensor.remainder | @jerric-charon | | cpu |
| 329 | paddle.Tensor.repeat_interleave | @co63oc | coredump | gpu |
| 330 | paddle.Tensor.rot90 | @co63oc | coredump | gpu |
| 331 | paddle.Tensor.round | @DanielSun11 | coredump | testcase |
| 332 | paddle.Tensor.rsqrt | @DanielSun11 | coredump | testcase |
| 333 | paddle.Tensor.sigmoid | @DanielSun11 | coredump | testcase |
| 334 | paddle.Tensor.sin | @DanielSun11 | coredump | testcase |
| 335 | paddle.Tensor.split | @DanielSun11 | coredump | cpu |
| 336 | paddle.Tensor.sqrt | @DanielSun11 | coredump | testcase |
| 337 | paddle.Tensor.square | @DanielSun11 | coredump | testcase |
| 338 | paddle.Tensor.std | @co63oc | paddle_error | gpu,cpu |
| 339 | paddle.Tensor.subtract | @DanielSun11 | coredump | gpu |
| 340 | paddle.Tensor.tanh | @DanielSun11 | coredump | testcase |
| 341 | paddle.Tensor.tile | @co63oc | coredump | gpu |
| 342 | paddle.Tensor.topk | @co63oc | paddle_error | gpu,cpu |
| 343 | paddle.Tensor.tril | @crashbussy | coredump | gpu |
| 344 | paddle.Tensor.trunc | @co63oc | coredump | gpu |
| 345 | paddle.Tensor.var | @co63oc | paddle_error | gpu,cpu |
| 346 | paddle.tensordot | @DanielSun11 | paddle_error | gpu,cpu |
| 347 | paddle.tile | @co63oc | coredump | gpu,cpu |
| 348 | paddle.trace | @DanielSun11 | coredump | gpu |
| 349 | paddle.trapezoid | @DanielSun11 | paddle_error | cpu |
| 350 | paddle.tril | @crashbussy | coredump | testcase |
| 351 | paddle.triu | @crashbussy | coredump | testcase |
| 352 | paddle.trunc | @co63oc | coredump | testcase |
| 353 | paddle.unflatten | @straigrand @luyl975 | paddle_error | cpu |
| 354 | paddle.unique | @ccsuzzh @co63oc | coredump | gpu,cpu |
| 355 | paddle.unique_consecutive | @ccsuzzh @co63oc | coredump | cpu |
| 356 | paddle.var | @co63oc | paddle_error | cpu |
| 357 | paddle.vision.ops.box_coder | @co63oc | coredump | gpu |
| 358 | paddle.vision.ops.generate_proposals | @co63oc | coredump | gpu,cpu |
| 359 | paddle.vision.ops.prior_box | @co63oc | coredump | gpu |
| 360 | paddle.vision.ops.yolo_box | @co63oc | coredump | gpu |
| 361 | paddle.vstack | @DanielSun11 | paddle_error | cpu |
| 362 | paddle.where | @co63oc | coredump | gpu |
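For the elementwise entries in the table (e.g. paddle.abs, paddle.tanh), the acceptance check follows one simple pattern. A minimal sketch, using NumPy as a stand-in for the Paddle op and a hypothetical helper name:

```python
import numpy as np

def check_zero_size(op, shape=(0, 4)):
    """Run `op` on a 0-size input: it must not crash, and for an
    elementwise op it must preserve the input's shape and dtype."""
    x = np.empty(shape, dtype=np.float32)
    out = op(x)
    assert out.shape == x.shape, f"shape changed: {out.shape}"
    assert out.dtype == x.dtype, f"dtype changed: {out.dtype}"
    return out

check_zero_size(np.abs)
check_zero_size(np.tanh)
```

Reductions, matmuls, and shape-manipulating ops need an op-specific expected shape instead, usually taken from NumPy or PyTorch reference behavior.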
Dashboard
| Task Track | Tasks | Submissions / Claims | Submission Rate | Completed | Completion Rate |
|---|---|---|---|---|---|
| 0-size Tensor support | 362 | 359 / 361 | 99.17% | 358 | 98.9% |
Statistics
In no particular order: @DanielSun11 (101) @co63oc (231) @enkilee (11) @wanghuancoder (9) @VVX94 (1) @straigrand (1) @crashbussy (3) @luyl975 (1)