
Conversation

@rwightman
Collaborator

No description provided.

* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA (torch.where sketch below)
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name); may break compat (usage sketch below)
* Use newer fn signatures for all add, addcdiv, addcmul calls in optimizers (before/after below)
* Use upcoming PyTorch native Nadam if it's available (fallback sketch below)
* Clean up lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout (reshape sketch below)
* Add rectified AdaBelief mode (radabelief)
* Support a few more PyTorch optims: adamax, adagrad
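For the Lamb XLA fix, a minimal sketch of the kind of pattern involved, not a quote of the actual diff: Python branches on tensor values (e.g. `if g_norm.item() == 0:`) force a host-device sync and break XLA graph tracing, while computing the trust ratio entirely on-device with `torch.where` keeps the step traceable.

```python
import torch

# Example norms; in the real optimizer these come from the parameter and
# its computed update.
w_norm = torch.tensor(0.5)
g_norm = torch.tensor(0.0)
one = torch.ones_like(w_norm)

# Both zero-norm edge cases are handled without leaving the device:
trust_ratio = torch.where(
    w_norm > 0,
    torch.where(g_norm > 0, w_norm / g_norm, one),
    one,
)
```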
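A usage sketch of the tweaked factory, assuming the entry point is `timm.optim.create_optimizer_v2`; per the compat note above, the exact keyword spellings (opt vs optimizer_name, lr vs learning_rate) may vary by version.

```python
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(16, 4)

# Optimizer is selected via the opt string; 'radabelief' picks the new
# rectified AdaBelief mode the same way.
optimizer = create_optimizer_v2(model, opt='lamb', lr=1e-3, weight_decay=0.01)
radabelief = create_optimizer_v2(model, opt='radabelief', lr=1e-3)
```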
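The signature change refers to PyTorch deprecating the positional-scalar overloads of `add_`, `addcmul_`, and `addcdiv_` in favor of keyword arguments. A minimal before/after on Adam-style state updates:

```python
import torch

grad = torch.randn(8)
exp_avg = torch.zeros(8)
exp_avg_sq = torch.zeros(8)
beta1, beta2 = 0.9, 0.999

# Deprecated positional-scalar forms:
#   exp_avg.mul_(beta1).add_(1 - beta1, grad)
#   exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
# Newer keyword forms used throughout the refactored optimizers:
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
```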
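A sketch of the "prefer native Nadam when available" fallback; the local import path is an assumption, and `torch.optim.NAdam` only exists in newer PyTorch releases (it was upcoming at the time of this PR).

```python
import torch
import torch.nn as nn

if hasattr(torch.optim, 'NAdam'):
    NadamImpl = torch.optim.NAdam  # native implementation (PyTorch >= 1.10)
else:
    from timm.optim.nadam import Nadam as NadamImpl  # repo's own fallback

optimizer = NadamImpl(nn.Linear(16, 4).parameters(), lr=2e-3)
```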
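For the channels_last item, a plausible shape of the fix, assuming the AdamP/SGDP projection code flattens per-channel views of the weights; `.view()` requires contiguous memory, while `.reshape()` copies when needed and works in either layout.

```python
import torch

weight = torch.randn(8, 3, 3, 3).to(memory_format=torch.channels_last)

# weight.view(weight.size(0), -1) raises a RuntimeError here, because a
# channels_last weight is not contiguous in the default memory format.
# reshape() handles both layouts, so the per-channel norms used by the
# projection can be computed regardless of layout:
flat = weight.reshape(weight.size(0), -1)
channel_norms = flat.norm(dim=1, keepdim=True)
```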
@rwightman rwightman merged commit 4d28401 into master Aug 18, 2021
@rwightman rwightman deleted the opt_cleanup branch November 19, 2021 16:47
guoriyue pushed a commit to guoriyue/pytorch-image-models that referenced this pull request May 24, 2024
