
Conversation

rwightman
Collaborator

The branch associated with this PR is for development of timm bits -- a significant update and reorganization of timm's training scripts and associated modules. It works w/ TPUs (PyTorch XLA) and GPUs (PyTorch), and possibly DeepSpeed.

This PR is still a long way from being merged. Having the branch as a PR helps w/ visibility of the ongoing work.
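To make "works w/ TPUs (PyTorch XLA) and GPUs (PyTorch)" concrete, here is a minimal sketch of a device-agnostic training step using the public `torch_xla` API. This is not the actual timm bits code; `model`, `batch`, `optimizer`, and `loss_fn` are assumed placeholders.

```python
# Minimal sketch (not timm bits' actual API): one training step that runs on
# either an XLA device (TPU) or CUDA/CPU, dispatching on torch_xla availability.
import torch

try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()  # TPU (or other XLA) device
except ImportError:
    xm = None
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def train_step(model, batch, optimizer, loss_fn):
    inputs, targets = (t.to(device) for t in batch)  # assumed (inputs, targets) batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    if xm is not None:
        # On XLA, all-reduce gradients across replicas, then step the optimizer.
        xm.optimizer_step(optimizer)
    else:
        optimizer.step()
    return loss
```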

rwightman and others added 30 commits April 20, 2021 17:15
…step closure used, metrics base impl w/ distributed reduce, many tweaks/fixes.
…h XLA usage on TPU-VM. Add some FIXMEs and fold train_cfg into train_state by default.
… XLA (pushing into transforms), revamp of transform/preproc config, etc ongoing...
rwightman and others added 29 commits February 28, 2022 16:33
…ers more similar. Fix workers=0 compatibility. Add ImageNet22k/12k synset defs.
Add support for different TFDS `BuilderConfig`s
Fix issue with `torchvision`'s `ImageNet`
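One of the commits above adds support for different TFDS `BuilderConfig`s. As a rough illustration (plain TFDS usage, not timm's dataset wrapper), a named config is selected with the `dataset/config` syntax:

```python
# Minimal sketch of selecting a named TFDS BuilderConfig; the dataset/config
# name here is illustrative of the public TFDS API, not timm's parser.
import tensorflow_datasets as tfds

# 'imagenet_v2' exposes several BuilderConfigs; pick one via the name suffix.
builder = tfds.builder('imagenet_v2/matched-frequency')
builder.download_and_prepare()
ds = builder.as_dataset(split='test', shuffle_files=False)
```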
@Smartappli

Is this still in progress?
