1 parent fefe3cc commit f00d26a
README.md
@@ -0,0 +1,17 @@
+# Intro
+
+PyTorch implementation of [Learning to learn by gradient descent by gradient descent](https://arxiv.org/abs/1606.04474).
+
+## Run
+
+```bash
+python main.py
+```
+
+### TODO
+- [x] Initial implementation
+- [x] Toy data
+- [x] LSTM updates
+- [ ] Compare with standard optimizers
+- [ ] Real data
+- [ ] More difficult models
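
The paper the README cites replaces a hand-designed optimizer with a learned one: a small LSTM receives the optimizee's gradients coordinatewise and emits additive parameter updates. A minimal sketch of that idea, with class and method names that are illustrative rather than the repo's actual API:

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinatewise LSTM optimizer (illustrative): maps gradients to parameter updates."""
    def __init__(self, hidden_size=20):
        super().__init__()
        # One shared LSTM cell is applied to every parameter coordinate independently,
        # so the number of meta-parameters does not grow with the optimizee's size.
        self.cell = nn.LSTMCell(input_size=1, hidden_size=hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grads, state=None):
        # grads: (num_coords, 1) flattened gradients of the optimizee's parameters
        h, c = self.cell(grads, state)
        update = self.out(h)  # proposed additive update for each coordinate
        return update, (h, c)

# Tiny smoke test: one update step for a 5-parameter optimizee.
opt = LSTMOptimizer()
grads = torch.randn(5, 1)
update, state = opt(grads)
print(update.shape)  # torch.Size([5, 1])
```

Sharing one LSTM across all coordinates keeps the meta-optimizer small and lets it generalize across parameters of different sizes; the paper additionally preprocesses gradients (e.g. log-scaling) before feeding them to the LSTM, which this sketch omits.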
main.py
@@ -41,8 +41,7 @@
     # Reset lstm values of the meta optimizer
     meta_optimizer.reset_lstm()
 
-    x, y = get_batch(args.batch_size
-)
+    x, y = get_batch(args.batch_size)
     x, y = Variable(x), Variable(y)
 
     # Compute initial loss of the model
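
For reference, the call being repaired here samples a toy batch before wrapping it in `Variable` (the pre-0.4 PyTorch autograd wrapper). A hypothetical stand-in for `get_batch`, matching the "Toy data" item in the TODO list; the repo's real implementation may differ:

```python
import torch

def get_batch(batch_size=32):
    """Hypothetical toy-data batch: y = 2x - 1 plus noise. Illustrative only."""
    x = torch.randn(batch_size, 1)
    y = 2.0 * x - 1.0 + 0.1 * torch.randn(batch_size, 1)
    return x, y

x, y = get_batch(64)
print(x.shape, y.shape)  # torch.Size([64, 1]) torch.Size([64, 1])
```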