
Commit 55db877

aselle authored and tensorflower-gardener committed
Documentation changes to adhere to new doc generator
Change: 147382677
1 parent 86a83cf commit 55db877

7 files changed: 42 additions, 280 deletions

tensorflow/contrib/linalg/__init__.py

Lines changed: 1 addition & 19 deletions
@@ -12,34 +12,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""Linear algebra libraries for TensorFlow.
-
-## `LinearOperator`
-
-Subclasses of `LinearOperator` provide a access to common methods on a
-(batch) matrix, without the need to materialize the matrix. This allows:
-
-* Matrix free computations
-* Different operators to take advantage of special strcture, while providing a
-  consistent API to users.
-
-### Base class
+"""Linear algebra libraries. See the @{$python/contrib.linalg} guide.
 
 @@LinearOperator
-
-### Individual operators
-
 @@LinearOperatorDiag
 @@LinearOperatorIdentity
 @@LinearOperatorScaledIdentity
 @@LinearOperatorMatrix
 @@LinearOperatorTriL
 @@LinearOperatorUDVHUpdate
-
-### Transformations and Combinations of operators
-
 @@LinearOperatorComposition
-
 """
 from __future__ import absolute_import
 from __future__ import division
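
The `LinearOperator` subclasses listed in this hunk expose matrix methods without materializing the underlying (batch) matrix, which is the point the deleted docstring was making. A minimal sketch of that idea using `LinearOperatorDiag` from this module; the method names (`apply`, `to_dense`) follow the TF 1.x-era contrib.linalg API as a best-effort assumption, not something stated in this commit:

```python
import tensorflow as tf

# A diagonal operator stores only the diagonal, never the dense 3x3 matrix.
operator = tf.contrib.linalg.LinearOperatorDiag(tf.constant([1., 2., 3.]))

x = tf.ones([3, 2])
y = operator.apply(x)        # matrix product computed from the diagonal alone
dense = operator.to_dense()  # the dense matrix is built only if requested

with tf.Session() as sess:
  print(sess.run(y))
```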

tensorflow/contrib/losses/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 # limitations under the License.
 # ==============================================================================
 
-"""Ops for building neural network losses."""
+"""Ops for building neural network losses. See @{$python/contrib.losses}."""
 
 from __future__ import absolute_import
 from __future__ import division

tensorflow/contrib/metrics/__init__.py

Lines changed: 2 additions & 90 deletions
@@ -12,90 +12,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""##Ops for evaluation metrics and summary statistics.
+"""Ops for evaluation metrics and summary statistics.
 
-### API
-
-This module provides functions for computing streaming metrics: metrics computed
-on dynamically valued `Tensors`. Each metric declaration returns a
-"value_tensor", an idempotent operation that returns the current value of the
-metric, and an "update_op", an operation that accumulates the information
-from the current value of the `Tensors` being measured as well as returns the
-value of the "value_tensor".
-
-To use any of these metrics, one need only declare the metric, call `update_op`
-repeatedly to accumulate data over the desired number of `Tensor` values (often
-each one is a single batch) and finally evaluate the value_tensor. For example,
-to use the `streaming_mean`:
-
-```python
-value = ...
-mean_value, update_op = tf.contrib.metrics.streaming_mean(values)
-sess.run(tf.local_variables_initializer())
-
-for i in range(number_of_batches):
-  print('Mean after batch %d: %f' % (i, update_op.eval())
-print('Final Mean: %f' % mean_value.eval())
-```
-
-Each metric function adds nodes to the graph that hold the state necessary to
-compute the value of the metric as well as a set of operations that actually
-perform the computation. Every metric evaluation is composed of three steps
-
-* Initialization: initializing the metric state.
-* Aggregation: updating the values of the metric state.
-* Finalization: computing the final metric value.
-
-In the above example, calling streaming_mean creates a pair of state variables
-that will contain (1) the running sum and (2) the count of the number of samples
-in the sum. Because the streaming metrics use local variables,
-the Initialization stage is performed by running the op returned
-by `tf.local_variables_initializer()`. It sets the sum and count variables to
-zero.
-
-Next, Aggregation is performed by examining the current state of `values`
-and incrementing the state variables appropriately. This step is executed by
-running the `update_op` returned by the metric.
-
-Finally, finalization is performed by evaluating the "value_tensor"
-
-In practice, we commonly want to evaluate across many batches and multiple
-metrics. To do so, we need only run the metric computation operations multiple
-times:
-
-```python
-labels = ...
-predictions = ...
-accuracy, update_op_acc = tf.contrib.metrics.streaming_accuracy(
-    labels, predictions)
-error, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(
-    labels, predictions)
-
-sess.run(tf.local_variables_initializer())
-for batch in range(num_batches):
-  sess.run([update_op_acc, update_op_error])
-
-accuracy, mean_absolute_error = sess.run([accuracy, mean_absolute_error])
-```
-
-Note that when evaluating the same metric multiple times on different inputs,
-one must specify the scope of each metric to avoid accumulating the results
-together:
-
-```python
-labels = ...
-predictions0 = ...
-predictions1 = ...
-
-accuracy0 = tf.contrib.metrics.accuracy(labels, predictions0, name='preds0')
-accuracy1 = tf.contrib.metrics.accuracy(labels, predictions1, name='preds1')
-```
-
-Certain metrics, such as streaming_mean or streaming_accuracy, can be weighted
-via a `weights` argument. The `weights` tensor must be the same size as the
-labels and predictions tensors and results in a weighted average of the metric.
-
-## Metric `Ops`
+See the @{$python/contrib.metrics} guide.
 
 @@streaming_accuracy
 @@streaming_mean
@@ -130,18 +49,11 @@
 @@streaming_true_negatives_at_thresholds
 @@streaming_true_positives
 @@streaming_true_positives_at_thresholds
-
 @@auc_using_histogram
-
 @@accuracy
-
 @@aggregate_metrics
 @@aggregate_metric_map
-
 @@confusion_matrix
-
-## Set `Ops`
-
 @@set_difference
 @@set_intersection
 @@set_size
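
The deleted docstring above walked through the streaming-metric lifecycle (declare, initialize local variables, run `update_op` per batch, read the value tensor), but its `streaming_mean` example has an unbalanced parenthesis and elided placeholders. A runnable version of that same example, with the parenthesis fixed and a stand-in tensor substituted for the elided `value = ...` (the batch count is arbitrary):

```python
import tensorflow as tf

values = tf.random_normal([8])  # stand-in for the docstring's "value = ..."
mean_value, update_op = tf.contrib.metrics.streaming_mean(values)

with tf.Session() as sess:
  # Initialization: streaming metrics keep their state in local variables.
  sess.run(tf.local_variables_initializer())
  # Aggregation: each run of update_op folds one more batch into the state.
  for i in range(3):
    print('Mean after batch %d: %f' % (i, sess.run(update_op)))
  # Finalization: the value tensor reads out the accumulated result.
  print('Final Mean: %f' % sess.run(mean_value))
```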

tensorflow/contrib/opt/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""opt: A module containing optimization routines."""
+"""A module containing optimization routines."""
 
 from __future__ import absolute_import
 from __future__ import division

tensorflow/contrib/opt/python/training/moving_average_optimizer.py

Lines changed: 35 additions & 34 deletions
@@ -12,39 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""Optimizer that computes a moving average of the variables.
-
-Empirically it has been found that using the moving average of the trained
-parameters of a deep network is better than using its trained parameters
-directly. This optimizer allows you to compute this moving average and swap the
-variables at save time so that any code outside of the training loop will use by
-default the averaged values instead of the original ones.
-
-Example of usage:
-
-```python
-
-// Encapsulate your favorite optimizer (here the momentum one)
-// inside the MovingAverageOptimizer.
-opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
-opt = tf.contrib.opt.MovingAverageOptimizer(opt)
-// Then create your model and all its variables.
-model = build_model()
-// Add the training op that optimizes using opt.
-// This needs to be called before swapping_saver().
-opt.minimize(cost, var_list)
-// Then create your saver like this:
-saver = opt.swapping_saver()
-// Pass it to your training loop.
-slim.learning.train(
-    model,
-    ...
-    saver=saver)
-```
-
-Note that for evaluation, the normal saver should be used instead of
-swapping_saver().
-"""
+"""Moving average optimizer."""
+
 from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function
@@ -60,7 +29,39 @@
 
 
 class MovingAverageOptimizer(optimizer.Optimizer):
-  """Optimizer wrapper that maintains a moving average of parameters."""
+  """Optimizer that computes a moving average of the variables.
+
+  Empirically it has been found that using the moving average of the trained
+  parameters of a deep network is better than using its trained parameters
+  directly. This optimizer allows you to compute this moving average and swap
+  the variables at save time so that any code outside of the training loop will
+  use by default the averaged values instead of the original ones.
+
+  Example of usage:
+
+  ```python
+
+  // Encapsulate your favorite optimizer (here the momentum one)
+  // inside the MovingAverageOptimizer.
+  opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
+  opt = tf.contrib.opt.MovingAverageOptimizer(opt)
+  // Then create your model and all its variables.
+  model = build_model()
+  // Add the training op that optimizes using opt.
+  // This needs to be called before swapping_saver().
+  opt.minimize(cost, var_list)
+  // Then create your saver like this:
+  saver = opt.swapping_saver()
+  // Pass it to your training loop.
+  slim.learning.train(
+      model,
+      ...
+      saver=saver)
+  ```
+
+  Note that for evaluation, the normal saver should be used instead of
+  swapping_saver().
+  """
 
   def __init__(self, opt, average_decay=0.9999, num_updates=None,
                sequential_update=True):
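
The docstring's closing note, that evaluation should use a normal saver rather than `swapping_saver()`, is easy to miss. A minimal self-contained sketch of that train/eval split; the toy variable, decay value, step count, and checkpoint path are illustrative assumptions, not from this commit, and checkpoint-layout details are glossed over:

```python
import tensorflow as tf

x = tf.Variable(0.0)
cost = tf.square(x - 3.0)
opt = tf.contrib.opt.MovingAverageOptimizer(
    tf.train.GradientDescentOptimizer(0.1), average_decay=0.99)
train_op = opt.minimize(cost)       # must be called before swapping_saver()

train_saver = opt.swapping_saver()  # training checkpoints hold the averages
eval_saver = tf.train.Saver()       # evaluation restores with a plain Saver

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for _ in range(100):
    sess.run(train_op)
  path = train_saver.save(sess, '/tmp/moving_avg_example')  # illustrative path
  eval_saver.restore(sess, path)    # x now carries the averaged value
```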

tensorflow/contrib/rnn/__init__.py

Lines changed: 1 addition & 27 deletions
@@ -12,26 +12,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""Module for constructing RNN Cells and additional RNN operations.
-
-## Base interface for all RNN Cells
+"""RNN Cells and additional RNN operations. See @{$python/contrib.rnn} guide.
 
 @@RNNCell
-
-## Core RNN Cells for use with TensorFlow's core RNN methods
-
 @@BasicRNNCell
 @@BasicLSTMCell
 @@GRUCell
 @@LSTMCell
 @@LayerNormBasicLSTMCell
-
-## Classes storing split `RNNCell` state
-
 @@LSTMStateTuple
-
-## Core RNN Cell wrappers (RNNCells that wrap other RNNCells)
-
 @@MultiRNNCell
 @@LSTMBlockWrapper
 @@DropoutWrapper
@@ -40,32 +29,17 @@
 @@OutputProjectionWrapper
 @@DeviceWrapper
 @@ResidualWrapper
-
-### Block RNNCells
 @@LSTMBlockCell
 @@GRUBlockCell
-
-### Fused RNNCells
 @@FusedRNNCell
 @@FusedRNNCellAdaptor
 @@TimeReversedFusedRNN
 @@LSTMBlockFusedCell
-
-### LSTM-like cells
 @@CoupledInputForgetGateLSTMCell
 @@TimeFreqLSTMCell
 @@GridLSTMCell
-
-### RNNCell wrappers
 @@AttentionCellWrapper
 @@CompiledWrapper
-
-
-## Recurrent Neural Networks
-
-TensorFlow provides a number of methods for constructing Recurrent Neural
-Networks.
-
 @@static_rnn
 @@static_state_saving_rnn
 @@static_bidirectional_rnn
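
The symbols that survive the trim above (`BasicLSTMCell`, `MultiRNNCell`, `static_rnn`, and the rest) are the module's working API. A minimal TF 1.x sketch wiring a few of them together; the layer count and tensor sizes are arbitrary choices for illustration:

```python
import tensorflow as tf

batch_size, num_steps, input_size, num_units = 4, 5, 8, 16

# Stack two BasicLSTMCells; MultiRNNCell makes them behave as one cell.
cells = [tf.contrib.rnn.BasicLSTMCell(num_units) for _ in range(2)]
stacked = tf.contrib.rnn.MultiRNNCell(cells)

# static_rnn consumes a Python list with one tensor per time step.
inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(num_steps)]
outputs, final_state = tf.contrib.rnn.static_rnn(
    stacked, inputs, dtype=tf.float32)
```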
