In Introductory Lectures on Convex Optimization by Yurii Nesterov, Section 1.2.3 shows that gradient descent is guaranteed to converge when the step size is either held fixed or chosen by the Goldstein-Armijo rule.
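For concreteness, the scheme I have in mind is the plain gradient iteration (my notation; hedging in case I am misstating Section 1.2.3):

$$x_{k+1} = x_k - h_k \nabla f(x_k), \qquad h_k > 0,$$

where, as far as I can tell, the section treats (a) a constant step $h_k \equiv h$ tied to the Lipschitz constant $L$ of $\nabla f$, and (b) the Goldstein-Armijo rule, i.e. choosing $h_k$ so that for some fixed $0 < \alpha < \beta < 1$

$$\alpha \,\langle \nabla f(x_k),\, x_k - x_{k+1} \rangle \;\le\; f(x_k) - f(x_{k+1}) \;\le\; \beta \,\langle \nabla f(x_k),\, x_k - x_{k+1} \rangle.$$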
Do you know whether there exist conditions on a variable step size $h_k$ (upper/lower bounds?) that guarantee the convergence of gradient descent? (The conditions need not guarantee convergence to a local minimum, just to a stationary point.) Thank you!