tensor2tensor vs Deep-Learning-Papers-Reading-Roadmap

| | tensor2tensor | Deep-Learning-Papers-Reading-Roadmap |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 13,873 | 39,371 |
| Growth | - | 0.3% |
| Activity | 6.2 | 0.0 |
| Latest commit | over 2 years ago | about 3 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tensor2tensor
- Understand how transformers work by demystifying all the math behind them
PE(1, 3) = cos(1 / 10000^(2*1 / 4)) = cos(1 / 10000^.5) ≈ 1
I also wondered if these formulae were devised with 1-based indexing in mind (though I guess for larger dimensions it doesn't make much difference), as the paper states
> The wavelengths form a geometric progression from 2π to 10000 · 2π
That led me to this chain of PRs - https://github.com/tensorflow/tensor2tensor/pull/177 - it turns out the original code was actually quite different from what is stated in the paper. I guess slight variations in how you calculate this embedding don't affect things too much?
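For anyone who wants to check numbers like the PE(1, 3) value above, here is a minimal NumPy sketch (my own, not taken from tensor2tensor) of the sinusoidal encoding as the paper writes it, with sin on even dimensions and cos on odd ones:

```python
import numpy as np

def sinusoidal_positional_encoding(max_pos: int, d_model: int) -> np.ndarray:
    """PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))."""
    positions = np.arange(max_pos)[:, None]        # shape (max_pos, 1)
    i = np.arange(d_model // 2)[None, :]           # shape (1, d_model // 2)
    angles = positions / np.power(10000.0, 2.0 * i / d_model)
    pe = np.empty((max_pos, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions get sin
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions get cos
    return pe

pe = sinusoidal_positional_encoding(max_pos=2, d_model=4)
print(pe[1, 3])  # cos(1 / 10000**0.5) = cos(0.01) ≈ 0.99995, i.e. ≈ 1, matching the example above
```

As far as I can tell from that PR thread, the tensor2tensor code computes the same set of frequencies but lays the channels out differently (roughly, sines in one half of the channels and cosines in the other, rather than interleaved), which supports the intuition that the exact layout doesn't matter much.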
- [P] Why the Original Transformer Figure Is Wrong, And Some Other Interesting Tidbits
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
- Why the Original Transformer LLM Figure Is Wrong, and Other Interesting Tidbits
- What Are Transformer Models and How Do They Work?
The visualisation here may be helpful.
https://github.com/tensorflow/tensor2tensor/issues/1591
- [P] Why I quit my lucrative job at Google to start Vectara? (neural search as a service for developers everywhere).
Found relevant code at https://github.com/tensorflow/tensor2tensor + all code implementations here
- [D] Resources for Understanding The Original Transformer Paper
Code for https://arxiv.org/abs/1706.03762 found: https://github.com/tensorflow/tensor2tensor
- Alias-Free GAN
Deep-Learning-Papers-Reading-Roadmap
- [D] Resources for Understanding The Original Transformer Paper
https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap - This one is a bit dated, so it doesn't contain all of the papers you need to read to get up to date, but I think you should definitely read all of the papers in this list and implement as much as you can.
- 4 ML Roadmaps to Help You Find Useful Resources to Learn From
Deep Learning Papers Reading Roadmap
- Should I implement every famous DL paper? [D]
I found a really great list of introductory and popular DL papers (github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap) and I would absolutely implement every paper on this list if I had the time (at least a mini version, e.g. CIFAR-10 instead of ImageNet). Is it essential for me to implement every single paper on that list to become a good DL researcher and to start reading/implementing more recent ones? All the papers on the list are from before 2017 and I can't wait to start exploring the latest research! Would I be able to get away with just implementing a handful of papers from that list?
- [D] How did you implement papers with models that required a lot of GPUs to train?
I'm self-learning ML and trying to implement the papers listed here, but I don't have access to hundreds of free GPUs like those corpos do.
- Looking for Beginner CV Resources
Definitely check out this list: https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap. It's all papers; you should get used to reading scientific material.
What are some alternatives?
pytorch-seq2seq - Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Seq2seq-PyTorch
alias-free-gan - Alias-Free GAN project website and code
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
faceswap - Deepfakes Software For All