Deprecate function arguments

Based on this ticket I raised on mypy: Support for deprecating function arguments · Issue #19817 · python/mypy · GitHub

(While the below names mypy, this extends to other type checkers.)

Feature

Currently we can deprecate functions, classes, methods, (modules would be nice too?), etc. There should be a clean way to support deprecation of certain function arguments/parameters in a way that the type checker can catch their misuse accordingly.

Pitch

Consider a function foo which may have many parameters, where for an upcoming release we decide one of them (e.g. bar) is redundant and should be deprecated for some reason (support will be dropped, another parameter is a better hook for the desired behaviour, etc.). Currently I have to write something along the lines of:

```python
import warnings

def foo(bar: int | None = None):
    if bar is not None:
        warnings.warn(
            "The use of bar is discouraged for some good reason",
            DeprecationWarning,
        )
    ...
```

However, the caller of this function might well be calling foo with either foo(bar=None) or foo(bar=42), and in both cases we would really want the type checker to warn us that bar is deprecated.
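To spell the gap out, here is how the runtime-only workaround above behaves for the calls in question (a sketch; the comments describe what happens today, not a proposed mypy behaviour):

```python
foo()          # Fine: nothing to warn about, statically or at runtime.
foo(bar=None)  # No runtime warning (bar is None) and mypy says nothing, yet the call still spells out bar.
foo(bar=42)    # DeprecationWarning at runtime only; mypy currently says nothing.
```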

I cannot see any support for this in the documentation.

I think there is a way we could do this using overload and dispatch (similar to https://stackoverflow.com/a/73478371/5134817), but I don’t think this solution is clean.
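For concreteness, a minimal sketch of that overload-based workaround might look like the following (assuming typing_extensions.deprecated, which is also available as warnings.deprecated on Python 3.13+):

```python
from typing import overload

from typing_extensions import deprecated  # warnings.deprecated on Python 3.13+


@overload
def foo() -> int: ...
@overload
@deprecated("Don't use bar for some good reason")
def foo(bar: int | None) -> int: ...
def foo(bar: int | None = None) -> int:
    # Runtime implementation; the overloads above only guide the type checker.
    return 0 if bar is None else bar
```

Here foo() matches the first overload, while foo(bar=None) and foo(bar=42) both match the deprecated one, so a PEP 702-aware type checker reports the deprecation - but this already needs two overloads for a single optional parameter.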

Instead I am imagining something along the lines of the behaviour of auto when using an Enum class, where we could write:

```python
def foo(bar: int | None = deprecated_arg(None, "Don't use bar for some good reason")):
    ...

x = foo()          # No mypy warning.
x = foo(bar=None)  # mypy warning.
x = foo(bar=42)    # mypy warning.

from functools import partial
sneaky = partial(foo, bar=42)  # mypy warning.
```

Some alternative syntaxes could be:

```python
def foo(bar: Deprecated[int | None, "Don't use bar for some good reason"]):
    ...  # NB - no default value.

@deprecate_arg(bar="Don't use bar for some good reason")  # Looks similar to functools.partial syntax.
def foo(bar: int | None):
    ...
```

Of these two alternatives, I don’t especially like the decorator approach because it requires writing the argument out twice, so it is easy for this to go out of sync or become a small extra maintenance burden.

Note - I think deprecating support for certain types could be achieved using the type annotation, but I imagine this would be a much more complex problem. E.g.

```python
def foo(bar: int | Deprecated[None, "Don't use bar=None for some good reason"] = None):
    ...

foo(bar=42)    # No mypy warning.
foo(bar=None)  # mypy warning.
foo()          # mypy warning.
```

Static or runtime warnings?

In these examples, I mention mypy warnings, meaning static warnings issued by the type checker. I don’t wish to conflate these with runtime warnings. Both I think are important, and perhaps there is room for a solution that supports both use cases. But for now I am mostly concerned with achieving the goal with static type checkers.

Difficulties with using overload or dispatch?

I think there are some difficulties with this:

  • There is not much in the way of documentation. (I think I would be the first to combine these mechanisms in the desired way, which I find surprising and unlikely.)

  • I am not sure this would scale to the case where I had multiple possible types and multiple deprecated arguments.

To elaborate more on the second point, it seems to me that it should be doable for very simple functions, but for non-trivial functions, this might not scale easily. Consider for example:

```python
def big_function(
    arg1: int | str,
    arg2: int | str | None = None,
    arg3: float | int | None = None,  # <- Let's deprecate this
    arg4: float | int | None = None,  # <- Let's deprecate this also
    *args,
    kwarg1: int | str,
    kwarg2: int | str | None = None,
    kwarg3: float | int | None = None,  # <- Let's deprecate this too
    kwarg4: float | int | None = None,  # <- Let's deprecate this as well
    **kwargs,
):
    ...
```

I am trying to imagine a clean way to do this with overload, but it seems to me I would have to construct a huge cross product of all possible combinations of types; a trimmed-down sketch of what that looks like follows.
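As a sketch of that blow-up, here is what the overload route would need for big_function if only arg3 and arg4 were deprecated (typing_extensions.deprecated assumed; *args/**kwargs and the keyword-only parameters are omitted, and the deprecated parameters are made keyword-only to keep the overloads disjoint):

```python
from typing import overload

from typing_extensions import deprecated


@overload
def big_function(arg1: int | str, arg2: int | str | None = None) -> None: ...
@overload
@deprecated("arg3 is deprecated")
def big_function(arg1: int | str, arg2: int | str | None = None, *, arg3: float | int) -> None: ...
@overload
@deprecated("arg4 is deprecated")
def big_function(arg1: int | str, arg2: int | str | None = None, *, arg4: float | int) -> None: ...
@overload
@deprecated("arg3 and arg4 are deprecated")
def big_function(
    arg1: int | str, arg2: int | str | None = None, *, arg3: float | int, arg4: float | int
) -> None: ...
def big_function(
    arg1: int | str,
    arg2: int | str | None = None,
    *,
    arg3: float | int | None = None,
    arg4: float | int | None = None,
) -> None: ...
```

Even in this trimmed-down form, every non-empty subset of the deprecated parameters needs its own overload.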

2 Likes

AFAICT this use case is fully covered by overload and deprecated, and it is even mentioned in the documentation for the latter. Yes, this doesn’t scale well to big functions, but that’s a general problem with overload - I don’t think creating a specific solution for this pair of decorators is worth it; effort should instead go toward a general solution.

1 Like

Coincidentally, I was just talking about how strange it is that @deprecated is the only decorator that applies to individual overloads. The natural solution to this inconsistency that came to mind was indeed something like this.

I don’t think that it should be on the “value side” of things, but rather on the typing side, and I think that Deprecated is the way to go here. An important advantage of this is the ability to deprecate constants, attributes, type aliases, etc. It would also allow you to deprecate an entire function (instead of specific parameters) by wrapping the return type, as in def foo() -> Deprecated[T, "use spam() instead"]: ....

Perhaps I am missing the relevant part of the documentation here, but deprecated only supports functions, classes, and methods; fine-grained argument deprecation is not supported.

> Yes, this doesn’t scale well to big functions, but that’s a general problem with overload - I don’t think creating a specific solution for this pair of decorators is worth it; effort should instead go toward a general solution.

I agree that a solution can be built this way for very small cases, but it is fundamentally limited in how it scales, and trying to pursue a solution through the combination of these mechanisms doesn’t seem like the way forward.

> Deprecated is the way to go here

Syntax-wise I think this has a few advantages, although I have zero idea how this would be implemented, or how “sticky” it would be when used in conjunction with Union and other constructs.

> It would also allow you to deprecate an entire function (instead of specific parameters) by wrapping the return type

That seems like slightly re-inventing the wheel, but perhaps there is appetite for it. It does though certainly highlight a situation I had not considered, which is deprecating the return of a single object or part of an object.

Consider compound objects being returned:

```python
def foo() -> tuple[str, int, Deprecated[float, "Floats will not be returned soon"]]:
    # Which of these should it warn on?
    return '1', 1, 1.0
    return '2', 2

x, y = foo()     # Should this warn?
x, y, z = foo()  # Should this warn?
```

Consider a variety of return types:

```python
def foo() -> int | str | Deprecated[float, "Support removed"]:
    return 1    # fine
    return 1.0  # warns

i: int = foo()
f: float = foo()  # warns
```

Consider a generic type (where for convenience we deprecate the input argument, but the same could be done for the output):

```python
def foo[T](x: T | Deprecated[float]) -> T: ...
# or
def foo[T | Deprecated[float]](x: T) -> T: ...
```

In summary, I think the Deprecated syntax is probably the most general, but also by merit of that it encounters a large number of corner cases. Contrast this with a very specialised, clearly defined deprecate_arg, which, while less generic, perhaps does not open up such a large can of worms.

Deprecating specific parameters requires additional overloads. With a Deprecated qualifier those would no longer be needed. The number of additional overloads required scales exponentially in the number of deprecated optional parameters.

There are many examples of this in scipy-stubs, for instance scipy.optimize.fmin_l_bfgs_b (source):

```python
@overload  # no args, no fprime, no approx_grad
def fmin_l_bfgs_b(
    func: _Fn[_ToFloatAnd1D], x0: _ToFloatOr1D, fprime: None = None, args: tuple[()] = (),
    approx_grad: onp.ToFalse = 0, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # args, no fprime, no approx_grad
def fmin_l_bfgs_b(
    func: _Fn[_ToFloatAnd1D, *_Ts], x0: _ToFloatOr1D, fprime: None = None, args: tuple[*_Ts] = ...,
    approx_grad: onp.ToFalse = 0, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # fprime, no approx_grad
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts], x0: _ToFloatOr1D, fprime: _Fn[onp.ToFloat1D, *_Ts],
    args: tuple[*_Ts] = ..., approx_grad: onp.ToFalse = 0, bounds: _Bounds | None = None,
    m: onp.ToJustInt = 10, factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5,
    epsilon: onp.ToFloat = 1e-8, iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000,
    maxiter: onp.ToJustInt = 15_000, disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None,
    maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # no fprime, approx_grad (keyword)
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts], x0: _ToFloatOr1D, fprime: None = None, args: tuple[*_Ts] = ...,
    *, approx_grad: onp.ToTrue, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # no fprime, unknown approx_grad (keyword)
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts] | _Fn[_ToFloatAnd1D, *_Ts], x0: _ToFloatOr1D, fprime: None = None,
    args: tuple[*_Ts] = ..., *, approx_grad: onp.ToBool, bounds: _Bounds | None = None,
    m: onp.ToJustInt = 10, factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5,
    epsilon: onp.ToFloat = 1e-8, iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000,
    maxiter: onp.ToJustInt = 15_000, disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None,
    maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # iprint
@deprecated("The `iprint` keyword is deprecated and will be removed from SciPy 1.18.0.")
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts] | _Fn[_ToFloatAnd1D, *_Ts], x0: _ToFloatOr1D,
    fprime: _Fn[onp.ToFloat1D, *_Ts] | None = None, args: tuple[*_Ts] = ...,
    approx_grad: onp.ToBool = 0, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    *, iprint: int, maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    disp: _NoValueType = ..., callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # disp
@deprecated("The `disp` keyword is deprecated and will be removed from SciPy 1.18.0.")
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts] | _Fn[_ToFloatAnd1D, *_Ts], x0: _ToFloatOr1D,
    fprime: _Fn[onp.ToFloat1D, *_Ts] | None = None, args: tuple[*_Ts] = ...,
    approx_grad: onp.ToBool = 0, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    iprint: _NoValueType = ..., maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    *, disp: int, callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
@overload  # iprint and disp
@deprecated("The `iprint` and `disp` keywords are deprecated and will be removed from SciPy 1.18.0.")
def fmin_l_bfgs_b(
    func: _Fn[onp.ToFloat, *_Ts] | _Fn[_ToFloatAnd1D, *_Ts], x0: _ToFloatOr1D,
    fprime: _Fn[onp.ToFloat1D, *_Ts] | None = None, args: tuple[*_Ts] = ...,
    approx_grad: onp.ToBool = 0, bounds: _Bounds | None = None, m: onp.ToJustInt = 10,
    factr: onp.ToFloat = 1e7, pgtol: onp.ToFloat = 1e-5, epsilon: onp.ToFloat = 1e-8,
    *, iprint: int, maxfun: onp.ToJustInt = 15_000, maxiter: onp.ToJustInt = 15_000,
    disp: int, callback: _Fn[_Ignored] | None = None, maxls: onp.ToJustInt = 20,
) -> _FMinResult: ...
```

Here there are two deprecated optional parameters, iprint and disp, so additional overloads are needed for each of their combinations, i.e. {(iprint,), (disp,), (iprint, disp)}, three in this case. With three deprecated parameters that’d be 3 + 3 + 1 == 7 additional overloads.
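A quick way to sanity-check that counting (a throwaway snippet, not part of scipy-stubs): each non-empty subset of deprecated parameters needs its own @deprecated overload, i.e. 2**n - 1 extras.

```python
from itertools import combinations

deprecated_params = ("iprint", "disp")
extra_overloads = [
    subset
    for size in range(1, len(deprecated_params) + 1)
    for subset in combinations(deprecated_params, size)
]
print(extra_overloads)       # [('iprint',), ('disp',), ('iprint', 'disp')]
print(len(extra_overloads))  # 3 here; 2**n - 1 in general, so 7 for three parameters
```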

1 Like

I didn’t include this in PEP 702 for simplicity, but it’s a natural extension. There is a related discussion in the PEP at PEP 702 – Marking deprecations using the type system | peps.python.org, which suggests a Deprecated[type, message] type qualifier. That section covers deprecation of attributes, which isn’t possible at all with PEP 702. Deprecation of arguments is possible with overloads, but it can be unwieldy.

With Deprecated, your example would look something like def foo(bar: Deprecated[int | None, "bar is deprecated"] = None): .... That means a new place where strings in type expressions are actual strings, not types, which may cause difficulties for some type checker implementers.

What’s needed to move this forward is for someone to come up with a solid, concrete proposal, get a reference implementation in some type checker, and push a PEP forward on it.

5 Likes