
Commit 751d76d

Author: Moshe Looks

point paper links to arxiv
1 parent 8d8703f commit 751d76d

File tree: 4 files changed (+4, -4 lines)


README.md
Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ the input data. For example, [this model](tensorflow_fold/g3doc/sentiment.ipynb)
 implements [TreeLSTMs](https://arxiv.org/abs/1503.00075) for sentiment analysis
 on parse trees of arbitrary shape/size/depth.

-Fold implements [*dynamic batching*](https://openreview.net/pdf?id=ryrGawqex).
+Fold implements [*dynamic batching*](https://arxiv.org/abs/1702.02181).
 Batches of arbitrarily shaped computation graphs are transformed to produce a
 static computation graph. This graph has the same structure regardless of what
 input it receives, and can be executed efficiently by TensorFlow.
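To make the dynamic batching idea in this README passage concrete, here is a minimal sketch, assuming TensorFlow 1.x with the tensorflow_fold package installed; the `sum_block` and the example inputs are illustrative, not taken from the README itself:

```python
import tensorflow as tf
import tensorflow_fold as td

# A block that sums a sequence of scalars; input sequences may vary in length.
sum_block = td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.add))

# Compiling produces one static TF graph whose structure is fixed regardless
# of input shape; dynamic batching schedules each example through that graph.
compiler = td.Compiler.create(sum_block)
(output,) = compiler.output_tensors

with tf.Session() as sess:
  # Two sequences of different lengths batched together in one feed.
  feed = compiler.build_feed_dict([[1.0, 2.0], [3.0, 4.0, 5.0]])
  print(sess.run(output, feed))  # expected: [ 3. 12.]
```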

tensorflow_fold/g3doc/index.md
Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ typically used to run models like tree-RNNs. When the input consists of trees
 size and shape. A standard TensorFlow model consists of a fixed graph of
 operations, which cannot accommodate variable-shaped data. Fold overcomes this
 limitation by using
-the [dynamic batching algorithm](https://openreview.net/pdf?id=ryrGawqex).
+the [dynamic batching algorithm](https://arxiv.org/abs/1702.02181).

 Fold consists of a high-level API called Blocks, and a low-level API called
 Loom. Blocks are pure Python, whereas Loom is a mixture of Python and
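As a quick illustration of the Blocks API accepting variable-shaped data, here is a hedged sketch, assuming tensorflow_fold under a TF 1.x interactive session; the `max_block` shown is illustrative, not from index.md:

```python
import tensorflow as tf
import tensorflow_fold as td

sess = tf.InteractiveSession()

# One block, sequences of any length: a running max over scalar inputs.
max_block = td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.maximum))

print(max_block.eval([1.0, 5.0, 2.0]))  # expected: 5.0
print(max_block.eval([7.0, 3.0]))       # expected: 7.0
```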

tensorflow_fold/g3doc/quick.ipynb
Lines changed: 1 addition & 1 deletion

@@ -499,7 +499,7 @@
 "source": [
 "The `reduce_net_block` function creates a block (`net_block`) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor.\n",
 "\n",
-"One thing to notice here is that we are calling [`tf.squeeze`](https://www.tensorflow.org/versions/r1.0/api_docs/python/array_ops/shapes_and_shaping#squeeze) with `axis=1`, even though the Fold output type of `td.FC(1, activation=None)` (and hence the input type of the enclosing `Function` block) is a `TensorType` with shape `(1)`. This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via [*dynamic batching*](https://openreview.net/pdf?id=ryrGawqex). It is important to bear this in mind when creating `Function` blocks that wrap functions that are not applied elementwise."
+"One thing to notice here is that we are calling [`tf.squeeze`](https://www.tensorflow.org/versions/r1.0/api_docs/python/array_ops/shapes_and_shaping#squeeze) with `axis=1`, even though the Fold output type of `td.FC(1, activation=None)` (and hence the input type of the enclosing `Function` block) is a `TensorType` with shape `(1)`. This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via [*dynamic batching*](https://arxiv.org/abs/1702.02181). It is important to bear this in mind when creating `Function` blocks that wrap functions that are not applied elementwise."
 ]
 },
 {
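The squeeze-versus-batch-dimension point in that notebook cell is easiest to see in code. Below is a sketch of a `reduce_net_block` along the lines the cell describes, assuming TF 1.x and tensorflow_fold; treat it as illustrative rather than as the notebook's exact source:

```python
import tensorflow as tf
import tensorflow_fold as td

def reduce_net_block():
  # td.FC(1, ...) has Fold output type TensorType with shape (1), but at
  # runtime every Fold block sees an implicit leading batch dimension, so
  # the actual tensor is (batch_size, 1): hence squeeze axis=1, not axis=0.
  net_block = (td.Concat()
               >> td.FC(20)
               >> td.FC(1, activation=None)
               >> td.Function(lambda xs: tf.squeeze(xs, axis=1)))
  # Apply the pairwise network in a binary tree to reduce a sequence of
  # scalars down to a single scalar.
  return td.Map(td.Scalar()) >> td.Reduce(net_block)
```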

tensorflow_fold/g3doc/sentiment.ipynb
Lines changed: 1 addition & 1 deletion

@@ -1160,7 +1160,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Not bad! See section 3.5.1 of [our paper](https://openreview.net/pdf?id=ryrGawqex) for discussion and a comparison of these results to the state of the art."
+"Not bad! See section 3.5.1 of [our paper](https://arxiv.org/abs/1702.02181) for discussion and a comparison of these results to the state of the art."
 ]
 }
],

0 commit comments
