Commit d952729

arjung authored and tensorflow-copybara committed

Fix typo in IMDB tutorial

PiperOrigin-RevId: 268329711
1 parent f1745f2 commit d952729

File tree

1 file changed (+23, -23 lines)

g3doc/tutorials/graph_keras_lstm_imdb.ipynb

Lines changed: 23 additions & 23 deletions
@@ -43,17 +43,17 @@
 "source": [
 "# Graph regularization for sentiment classification using synthesized graphs\n",
 "\n",
-"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
-" <td>\n",
-" <a target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
-" </td>\n",
-" <td>\n",
-" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
-" </td>\n",
-" <td>\n",
-" <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
-" </td>\n",
-"</table>"
+"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
+" \u003ctd\u003e\n",
+" \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
+" \u003c/td\u003e\n",
+" \u003ctd\u003e\n",
+" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
+" \u003c/td\u003e\n",
+" \u003ctd\u003e\n",
+" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
+" \u003c/td\u003e\n",
+"\u003c/table\u003e"
 ]
 },
 {
@@ -344,10 +344,10 @@
 "\n",
 " # The first indices are reserved\n",
 " word_index = {k: (v + 3) for k, v in word_index.items()}\n",
-" word_index['<PAD>'] = 0\n",
-" word_index['<START>'] = 1\n",
-" word_index['<UNK>'] = 2 # unknown\n",
-" word_index['<UNUSED>'] = 3\n",
+" word_index['\u003cPAD\u003e'] = 0\n",
+" word_index['\u003cSTART\u003e'] = 1\n",
+" word_index['\u003cUNK\u003e'] = 2 # unknown\n",
+" word_index['\u003cUNUSED\u003e'] = 3\n",
 " return dict((value, key) for (key, value) in word_index.items())\n",
 "\n",
 "reverse_word_index = build_reverse_word_index()\n",
@@ -569,9 +569,9 @@
569569
"source": [
570570
"## Sample features\n",
571571
"\n",
572-
"We create sample features for our problem in the `tf.train.Example`s format and\n",
573-
"persist them in the `TFRecord` format. Each sample will include the following\n",
574-
"three features:\n",
572+
"We create sample features for our problem using the `tf.train.Example` format\n",
573+
"and persist them in the `TFRecord` format. Each sample will include the\n",
574+
"following three features:\n",
575575
"\n",
576576
"1. **id**: The node ID of the sample.\n",
577577
"2. **words**: An int64 list containing word IDs.\n",
@@ -774,11 +774,11 @@
 "source": [
 "### Prepare the data\n",
 "\n",
-"The reviews\u2014the arrays of integers\u2014must be converted to tensors before being fed\n",
+"The reviews—the arrays of integers—must be converted to tensors before being fed\n",
 "into the neural network. This conversion can be done a couple of ways:\n",
 "\n",
 "* Convert the arrays into vectors of `0`s and `1`s indicating word occurrence,\n",
-" similar to a one-hot encoding. For example, the sequence `[3, 5]` would become a `10000`-dimensional vector that is all zeros except for indices `3` and `5`, which are ones. Then, make this the first layer in our network\u2014a `Dense` layer\u2014that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.\n",
+" similar to a one-hot encoding. For example, the sequence `[3, 5]` would become a `10000`-dimensional vector that is all zeros except for indices `3` and `5`, which are ones. Then, make this the first layer in our network—a `Dense` layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.\n",
 "\n",
 "* Alternatively, we can pad the arrays so they all have the same length, then\n",
 " create an integer tensor of shape `max_length * num_reviews`. We can use an\n",
@@ -885,7 +885,7 @@
 "source": [
 "### Build the model\n",
 "\n",
-"A neural network is created by stacking layers\u2014this requires two main architectural decisions:\n",
+"A neural network is created by stacking layers—this requires two main architectural decisions:\n",
 "\n",
 "* How many layers to use in the model?\n",
 "* How many *hidden units* to use for each layer?\n",
@@ -982,7 +982,7 @@
 "If a model has more hidden units (a higher-dimensional representation space),\n",
 "and/or more layers, then the network can learn more complex representations.\n",
 "However, it makes the network more computationally expensive and may lead to\n",
-"learning unwanted patterns\u2014patterns that improve performance on training data\n",
+"learning unwanted patterns—patterns that improve performance on training data\n",
 "but not on the test data. This is called *overfitting*."
 ]
 },
@@ -1204,7 +1204,7 @@
 "source": [
 "Notice the training loss *decreases* with each epoch and the training accuracy\n",
 "*increases* with each epoch. This is expected when using a gradient descent\n",
-"optimization\u2014it should minimize the desired quantity on every iteration."
+"optimization—it should minimize the desired quantity on every iteration."
 ]
 },
 {
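A short sketch of how those per-epoch numbers are typically inspected, assuming the `model` and `padded_train` names from the sketches above and a hypothetical `train_labels` array:

history = model.fit(padded_train, train_labels, epochs=10,
                    validation_split=0.2)

# Keras records one entry per epoch; loss should fall, accuracy should rise.
for epoch, (loss, acc) in enumerate(
    zip(history.history['loss'], history.history['accuracy'])):
  print(f'epoch {epoch}: loss={loss:.3f} accuracy={acc:.3f}')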
