ENH: Review exported symbols; redesign test_all #315
Conversation
Pull Request Overview
This PR overhauls the testing of exported names by replacing the old `test_all` function with new, more focused tests and by standardizing the `__dir__` implementations across multiple array_api_compat modules. Key changes include:
- Replacing the `test_all` function with new tests (`test_dir` and `test_builtins_collision`) that better validate module exports.
- Removing redundant `_all_ignore` variables and cleaning up `__all__` definitions and `__dir__` implementations in various modules (see the sketch after this list).
- Consolidating `__all__` settings in the numpy, torch, dask, cupy, and common modules for more consistent behavior.
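For orientation, the standardized module-level pattern described above is roughly the following (a minimal sketch; the exported name is a placeholder, not any real module's `__all__`):

```python
# Minimal sketch of the standardized pattern: a module defines __all__ and a
# PEP 562 module-level __dir__ that simply returns it, so dir(module) lists
# exactly the intended exports and helper imports don't leak through.
# The exported name here is a placeholder, not any real module's __all__.

def matrix_transpose(x):  # placeholder export
    return x

__all__ = ["matrix_transpose"]

def __dir__() -> list[str]:
    return __all__
```

With this in place, `dir(module)` reflects `__all__` directly instead of relying on `_all_ignore` filtering.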
Reviewed Changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/test_all.py | New tests using `NAMES`/`XFAILS` and updated parameterizations, replacing `test_all`. |
| array_api_compat/torch/linalg.py | Removed `_all_ignore` and cleaned up the `__all__` and `__dir__` definitions. |
| array_api_compat/torch/fft.py | Removed `_all_ignore`; standardized the `__dir__` definition. |
| array_api_compat/torch/_aliases.py | Removed `_all_ignore` to simplify export handling. |
| array_api_compat/numpy/linalg.py | Revised the `__all__` composition and removed a redundant concatenation of `__all__`. |
| array_api_compat/numpy/fft.py | Modified the `__all__` concatenation and removed extra deletion lines for cleanup. |
| array_api_compat/numpy/_typing.py | Removed `_all_ignore`. |
| array_api_compat/numpy/_aliases.py | Updated the `__all__` assembly and removed `_all_ignore` for consistency. |
| array_api_compat/dask/array/linalg.py | Removed `_all_ignore` and added a `__dir__` function returning `__all__`. |
| array_api_compat/dask/array/fft.py | Removed `_all_ignore` and standardized the `__dir__` implementation. |
| array_api_compat/dask/array/_aliases.py | Removed `_all_ignore`. |
| array_api_compat/cupy/_typing.py | Removed `_all_ignore`. |
| array_api_compat/cupy/_aliases.py | Removed `_all_ignore`. |
| array_api_compat/common/_linalg.py | Removed `_all_ignore`. |
| array_api_compat/common/_helpers.py | Removed `_all_ignore`. |
| array_api_compat/common/_aliases.py | Removed `_all_ignore`. |
| """ | ||
| from ._helpers import wrapped_libraries | ||
| | ||
| NAMES = { |
Maybe a better version of this test could automatically scrape data-apis/array-api/?
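A rough sketch of what that could look like, assuming the spec's stub package (`array_api_stubs`, from the data-apis/array-api repository) were importable in the test environment; the version submodule and layout used here are assumptions, not something this PR depends on:

```python
# Hypothetical: collect the spec's names from the array_api_stubs package
# shipped in the data-apis/array-api repo. The version submodule name and
# module layout are assumptions for illustration only.
import importlib
import pkgutil


def spec_names(version: str = "_2024_12") -> set[str]:
    pkg = importlib.import_module(f"array_api_stubs.{version}")
    names: set[str] = set()
    for info in pkgutil.iter_modules(pkg.__path__):
        mod = importlib.import_module(f"{pkg.__name__}.{info.name}")
        names.update(getattr(mod, "__all__", []))
    return names
```

How the spec repo would be made available (vendored, installed, or a submodule) is exactly the question raised in the next comments.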
Are you suggesting that array-api-compat should add array-api as a git submodule, for testing only?
If so, do you agree that such a change is best left to a follow-up?
> Are you suggesting that array-api-compat should add array-api as a git submodule, for testing only?

Definitely not.

> If so, do you agree that such a change is best left to a follow-up?

Absolutely yes.
IIUC, the purpose of `test_all` --- with all its considerable sins! --- was three-fold:
- make sure that the user-visible `dir()`/`__all__` lists contain everything they should
- make sure that unwanted names do not bleed into the user-visible `dir()`/`__all__` lists
- make sure that internal implementation modules' `__all__` lists are sensible

This PR seems to work for the first item; the second one seems to still allow some strange things:

```python
In [11]: import array_api_compat.numpy as anp

In [12]: "Final" in dir(anp)
Out[12]: True

In [13]: import numpy as np

In [14]: "Final" in dir(np)
Out[14]: False
```

For the third item, maybe we should somehow check that `__all__` lists do not contain duplicate items? This would be useful for development (for one recent example, I'm not entirely sure if #317 gets the `__all__` lists right; it would be nice to have test support for this).
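For illustration, a duplicate check along those lines could look roughly like this (a minimal sketch, not part of this PR; the module list is illustrative and modules whose backend isn't installed are skipped):

```python
# Hypothetical test: assert that no listed module's __all__ contains
# duplicate entries. The module list is illustrative only.
import importlib

import pytest

MODULES = [
    "array_api_compat.common._aliases",
    "array_api_compat.numpy._aliases",
    "array_api_compat.torch._aliases",
]


@pytest.mark.parametrize("modname", MODULES)
def test_no_duplicates_in_all(modname):
    try:
        mod = importlib.import_module(modname)
    except ImportError:
        pytest.skip(f"backend for {modname} not installed")
    all_ = list(getattr(mod, "__all__", []))
    dupes = {name for name in all_ if all_.count(name) > 1}
    assert not dupes, f"duplicate names in {modname}.__all__: {sorted(dupes)}"
```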
I've heavily reworked the PR and fixed many issues where array-api-compat was hiding objects declared in the wrapped module.
@ev-br gentle ping
Yes, thanks for the ping. My plan is to look at this and gh-321 right after 1.12 is out the door (so that pytorch==2.7 is usable).
Sorry for the delay. This PR seems to have some visible effects on which names are available from the wrapped namespaces. Are these intended?
@ev-br this may be clearer. It shows that all changes in visibility are desirable:

```python
# dump.py
import yaml
from array_api_compat import array_namespace
import numpy as np
import cupy as cp
import dask.array as da
import torch

out = {}
for bare_ns in [np, cp, da, torch]:
    xp = array_namespace(bare_ns.arange(3))
    bare_names = set(dir(bare_ns))
    xp_names = set(dir(xp))
    hides = sorted(bare_names - xp_names)
    adds = sorted(xp_names - bare_names)
    out[f"array-api-compat hides from {bare_ns.__name__}"] = sorted(bare_names - xp_names)
    out[f"array-api-compat adds to {bare_ns.__name__}"] = sorted(xp_names - bare_names)

print(yaml.dump(out))
```

```sh
git checkout main
python dump.py > main.txt
git checkout test_all
python dump.py > test_all.txt
diff -c99999 main.txt test_all.txt
```

```diff
*** main.txt	2025-06-04 12:35:57.385160311 +0100
--- test_all.txt	2025-06-04 12:35:48.335020354 +0100
***************
*** 1,628 ****
  array-api-compat adds to cupy:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
- - _typing
  - acos
  [... acosh through vecdot ...]
  array-api-compat adds to dask.array:
- - Final
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
  - acos
  [... argsort through vecdot ...]
  array-api-compat adds to numpy:
- - Final
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
- - __annotations__
- - _aliases
- - _info
  array-api-compat adds to torch:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
- - _typing
  - astype
  [... bitwise_invert through vecdot ...]
  array-api-compat hides from cupy:
  - __getattr__
  - __version__
  [... private cupy submodules and helpers ...]
  array-api-compat hides from dask.array:
- - ARRAY_EXPR_ENABLED
  - __all__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
- - annotations
- - chunk
- - chunk_types
- - core
- - creation
- - dispatch
- - einsumfuncs
- - importlib
- - numpy_compat
- - optimization
- - reductions
- - routines
- - slicing
- - tiledb_io
- - ufunc
- - utils
- - warnings
- - wrap
  array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
  [... dunders and private numpy helpers ...]
  array-api-compat hides from torch:
  [... several hundred private torch names ...]
- - cpu
- - cuda
[... second half of the context diff (test_all.txt), abridged ...]
  array-api-compat adds to cupy:
  [... unchanged entries ...]
  array-api-compat hides from cupy:
+ - __builtins__
+ - __cached__
+ - __doc__
+ - __file__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  [...]
  array-api-compat hides from dask.array:
  - __all__
+ - __annotations__
+ - __cached__
+ - __doc__
+ - __file__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
  array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
+ - __cached__
  - __config__
  - __dir__
+ - __doc__
  - __expired_attributes__
+ - __file__
  - __former_attrs__
  - __future_scalars__
  - __getattr__
+ - __loader__
+ - __name__
  - __numpy_submodules__
+ - __package__
+ - __path__
+ - __spec__
  [...]
+ - _typing
  - _utils
  array-api-compat hides from torch:
  [...]
+ - __cached__
  - __config__
+ - __doc__
+ - __file__
  - __future__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  [...]
```
Okay, thanks. So, if I read this right, as compared to main this PR [...]

Extra symbols would be nice to hide, and previously the package worked quite a lot to hide them. It's a nice-to-have though.
No, it's the other way around.

```diff
array-api-compat hides from dask.array:
- - ARRAY_EXPR_ENABLED
  - __all__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
- - annotations
- - chunk
- - chunk_types
- - core
- - creation
- - dispatch
- - einsumfuncs
- - importlib
- - numpy_compat
- - optimization
- - reductions
- - routines
- - slicing
- - tiledb_io
- - ufunc
- - utils
- - warnings
- - wrap
```

It no longer hides these public symbols from torch:

```diff
- - cpu
- - cuda
```

It starts hiding a handful of extra private symbols of no importance:

```diff
array-api-compat hides from cupy:
+ - __builtins__
+ - __cached__
+ - __doc__
+ - __file__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
[...]
array-api-compat hides from dask.array:
  - __all__
+ - __annotations__
+ - __cached__
+ - __doc__
+ - __file__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
+ - __cached__
  - __config__
  - __dir__
+ - __doc__
  - __expired_attributes__
+ - __file__
  - __former_attrs__
  - __future_scalars__
  - __getattr__
+ - __loader__
+ - __name__
  - __numpy_submodules__
+ - __package__
+ - __path__
+ - __spec__
[...]
+ - _typing
  - _utils
array-api-compat hides from torch:
[...]
+ - __cached__
  - __config__
+ - __doc__
+ - __file__
  - __future__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
```
Great, thanks. I misread then. What still exists is hiding private symbols (see below for torch). Would it be difficult to remove the filter and pass through whatever the library has in its [...]?
It's non-trivial, because right now [...]
One problem is that it is a regression. On main (unless I'm being dense again): [...]

Or are you saying it brings the status quo back to one of the previous versions?

EDIT: Never mind, I am being dense. Here are all these private functions, safely hidden, also on main.
Okay, let's merge this and see about exporting private items separately. (Checked it: they have been hidden since at least 1.9.1; thus it's rather low prio; let's wait for if and when it becomes a problem.) Thanks @crusaderky
#288 introduced `__dir__`, which completely neutered `test_all`. Instead of reverting the change, this PR attempts to reinvent the test to be more useful.
CC @jorenham @ev-br
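For background, this minimal illustration (not from the PR) shows the PEP 562 behaviour in question: once a module defines a module-level `__dir__`, `dir(module)` reports only what `__dir__` returns, which is why a test built around `dir()` output can be silently defanged.

```python
# Minimal illustration of PEP 562 behaviour (Python 3.7+): once a module
# defines __dir__, dir(module) reports whatever __dir__ returns, so a test
# that inspects dir() no longer sees names the module chooses to hide.
import types

mod = types.ModuleType("demo")
mod.public = 1
mod._private = 2
mod.__all__ = ["public"]
mod.__dir__ = lambda: list(mod.__all__)  # module-level __dir__ hook

assert "public" in dir(mod)
assert "_private" not in dir(mod)  # hidden from dir()-based checks
```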