Commit b3f6b5a

design of RNNOp (#3727)

* add rnn design

1 parent 47b211d commit b3f6b5a

8 files changed: +371 -0 lines changed
doc/design/ops/images/2_level_rnn.dot

Lines changed: 56 additions & 0 deletions
digraph G {

    rnn [label="1st-level RNN" shape=box]

    subgraph cluster0 {
        label = "time step 0"

        sent0 [label="sentence"]
        sent1 [label="sentence"]

        rnn1 [label="2nd-level RNN" shape=box]

        sent0 -> rnn1
        sent1 -> rnn1
    }

    subgraph cluster1 {
        label = "time step 1"

        sent2 [label="sentence"]
        sent3 [label="sentence"]

        rnn2 [label="2nd-level RNN" shape=box]

        sent2 -> rnn2
        sent3 -> rnn2
    }

    subgraph cluster2 {
        label = "time step 2"

        sent4 [label="sentence"]
        sent5 [label="sentence"]

        rnn3 [label="2nd-level RNN" shape=box]

        sent4 -> rnn3
        sent5 -> rnn3
    }

    para0 [label="paragraph info 0"]
    para1 [label="paragraph info 1"]
    para2 [label="paragraph info 2"]

    rnn1 -> para0
    rnn2 -> para1
    rnn3 -> para2

    para0 -> rnn
    para1 -> rnn
    para2 -> rnn

    chapter [label="chapter info"]
    rnn -> chapter
}
doc/design/ops/images/2_level_rnn.png (51.4 KB)

doc/design/ops/images/rnn.dot

Lines changed: 87 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,87 @@
digraph G {
    label = "simple RNN implementation"

    ranksep=2;

    //graph [nodesep=1, ranksep=1];

    node[nodesep=1]

    subgraph cluster0 {
        label = "global scope"
        rankdir = TB
        W
        boot_memory
        input
        output
    }

    subgraph cluster1 {
        label = "step-scope 0"
        rankdir = TB
        memory0[label="memory"]
        prememory0[label="pre-memory"]
        step_input0[label="step input"]
        step_output0[label="step output"]
    }

    subgraph cluster2 {
        label = "step-scope 1"
        rankdir = TB
        memory1[label="memory"]
        prememory1[label="pre-memory"]
        step_input1[label="step input"]
        step_output1[label="step output"]
    }

    subgraph cluster3 {
        label = "step-scope 2"
        rankdir = TB
        memory2[label="memory"]
        prememory2[label="pre-memory"]
        step_input2[label="step input"]
        step_output2[label="step output"]
    }

    stepnet [shape=box]
    stepnet0 [shape=box, style=dashed]
    stepnet1 [shape=box, style=dashed]
    stepnet2 [shape=box, style=dashed]

    edge[color=blue]
    boot_memory -> prememory0 [label="init" color="blue"]
    memory0 -> prememory1 [label="copy/reference" color="blue"]
    memory1 -> prememory2 [label="copy/reference" color="blue"]

    edge[color=black]
    W -> stepnet0 [constraint=false, style=dashed]
    W -> stepnet1 [constraint=false, style=dashed]
    W -> stepnet2 [constraint=false, style=dashed]

    memory0 -> stepnet0 [style=dashed]
    prememory0 -> stepnet0 -> step_output0 [style=dashed]

    memory1 -> stepnet1 [style=dashed]
    prememory1 -> stepnet1 -> step_output1 [style=dashed]

    memory2 -> stepnet2 [style=dashed]
    prememory2 -> stepnet2 -> step_output2 [style=dashed]

    input -> step_input0
    input -> step_input1
    input -> step_input2

    step_input0 -> stepnet0 [style=dashed]
    step_input1 -> stepnet1 [style=dashed]
    step_input2 -> stepnet2 [style=dashed]

    step_output0 -> output
    step_output1 -> output
    step_output2 -> output

    stepnet0 -> stepnet [style=dashed]
    stepnet1 -> stepnet [style=dashed]
    stepnet2 -> stepnet [style=dashed]
}

doc/design/ops/images/rnn.jpg (43.3 KB)

doc/design/ops/images/rnn.png (181 KB)
doc/design/ops/images/rnn_2level_data.dot

Lines changed: 75 additions & 0 deletions
digraph G {
    chapter [label="chapter"]

    subgraph cluster0 {
        label = "paragraph 0"

        top_rnn0[label="top rnn step 0" shape=box]

        p0 [label="paragraph 0"]
        p1 [label="paragraph 1"]
    }

    subgraph cluster1{
        label = "paragraph 1"

        top_rnn1[label="top rnn step 1" shape=box]

        p2 [label="paragraph 0"]
        p3 [label="paragraph 1"]
    }

    subgraph cluster_p0 {
        label = "sentence 0"

        low_rnn0 [label="low rnn step 0" shape=box]
        s00 [label="sentence 0"]
        s01 [label="sentence 1"]

        low_rnn0 -> s00
        low_rnn0 -> s01
    }

    subgraph cluster_p1 {
        label = "sentence 1"
        low_rnn1 [label="low rnn step 1" shape=box]
        s10 [label="sentence 0"]
        s11 [label="sentence 1"]
        low_rnn1 -> s10
        low_rnn1 -> s11
    }

    subgraph cluster_p2 {
        label = "sentence 0"
        low_rnn2 [label="low rnn step 0" shape=box]
        s20 [label="sentence 0"]
        s21 [label="sentence 1"]
        low_rnn2 -> s20
        low_rnn2 -> s21
    }

    subgraph cluster_p3 {
        label = "sentence 1"
        low_rnn3 [label="low rnn step 1" shape=box]
        s30 [label="sentence 0"]
        s31 [label="sentence 1"]
        low_rnn3 -> s30
        low_rnn3 -> s31
    }

    chapter -> top_rnn0
    chapter -> top_rnn1

    top_rnn0 -> p0
    top_rnn0 -> p1
    top_rnn1 -> p2
    top_rnn1 -> p3

    p0 -> low_rnn0
    p1 -> low_rnn1
    p2 -> low_rnn2
    p3 -> low_rnn3
}
doc/design/ops/images/rnn_2level_data.png (67.3 KB)

doc/design/ops/rnn.md

Lines changed: 153 additions & 0 deletions
# RNNOp design

This document describes an RNN operator that requires all instances in a mini-batch to have the same length. A more flexible RNN operator will be introduced later.

## RNN Algorithm Implementation

<p align="center">
<img src="./images/rnn.jpg"/>
</p>

The above diagram shows an RNN unrolled into a full network.

There are several important concepts:

- *step-net*: the sub-graph to run at each step,
- *memory*, $h_t$, the state of the current step,
- *ex-memory*, $h_{t-1}$, the state of the previous step,
- *initial memory value*, the ex-memory of the first step.

### Step-scope

There could be local variables defined in step-nets. The PaddlePaddle runtime realizes these variables in *step-scopes* -- scopes created for each step.

<p align="center">
<img src="./images/rnn.png"/><br/>
Figure 2. The RNN's data flow
</p>

Please be aware that all steps run the same step-net. Each step:

1. creates the step-scope,
2. realizes local variables, including step-outputs, in the step-scope, and
3. runs the step-net, which could use these variables.

The RNN operator composes its output from the step outputs in the step-scopes, as sketched below.
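
The loop below is a minimal Python sketch of this per-step execution. The `Scope` class and the `step_net` methods (`create_vars`, `run`) are hypothetical stand-ins used only to illustrate the design, not the actual PaddlePaddle interfaces:

```python
# A minimal sketch of the RNN step loop; all names here are hypothetical.

class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.vars = {}

    def var(self, name):
        # Resolve a variable locally first, then in the enclosing scope.
        if name in self.vars:
            return self.vars[name]
        if self.parent is not None:
            return self.parent.var(name)
        raise KeyError(name)


def run_rnn(step_net, global_scope, seq_len):
    step_scopes = []
    for t in range(seq_len):
        scope = Scope(parent=global_scope)  # 1. create the step-scope
        step_net.create_vars(scope, t)      # 2. realize local variables
        step_net.run(scope)                 # 3. run the step-net
        step_scopes.append(scope)
    # Compose the RNN output from the per-step outputs.
    return [s.var("step_output") for s in step_scopes]
```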

### Memory and Ex-memory

Let's give more details about memory and ex-memory via a simple example:

$$
h_t = U h_{t-1} + W x_t,
$$

where $h_t$ and $h_{t-1}$ are the memory and ex-memory of step $t$, respectively.
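
As a concrete example, this recurrence can be unrolled directly in NumPy; the shapes are chosen only for illustration:

```python
import numpy as np

np.random.seed(0)
U = np.random.randn(20, 20)  # recurrent weight
W = np.random.randn(20, 30)  # input weight
h = np.zeros(20)             # initial memory value (ex-memory of step 0)

xs = [np.random.randn(30) for _ in range(3)]  # three step inputs
for x in xs:
    h_prev = h              # ex-memory h_{t-1}
    h = U @ h_prev + W @ x  # memory h_t = U h_{t-1} + W x_t
```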

In the implementation, we can make an ex-memory variable either *refer to* the memory variable of the previous step, or copy the previous step's memory value into the current ex-memory variable.
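
The two strategies can be sketched as follows; the `Variable` holder here is hypothetical and only illustrates the difference:

```python
import copy

class Variable:
    """A hypothetical variable holder, for illustration only."""
    def __init__(self, value=None):
        self.value = value

def link_by_reference(prev_memory, cur_ex_memory):
    # The ex-memory shares the previous step's data; nothing is copied.
    cur_ex_memory.value = prev_memory.value

def link_by_copy(prev_memory, cur_ex_memory):
    # The ex-memory owns an independent copy, so later in-place writes
    # to the previous memory cannot affect this step.
    cur_ex_memory.value = copy.deepcopy(prev_memory.value)
```

Referencing avoids a copy at each step, while copying isolates the step-scopes from one another.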

### Usage in Python

For more information on Block, please refer to the [design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).

We can define an RNN's step-net using Block:

```python
import paddle as pd

X = some_op()  # X is some operator's output and is a LoDTensor
a = some_op()

# declare parameters
W = pd.Variable(shape=[20, 30])
U = pd.Variable(shape=[20, 20])

rnn = pd.create_rnn_op(output_num=1)
with rnn.stepnet():
    x = rnn.add_input(X)
    # declare a memory (the RNN's state)
    h = rnn.add_memory(init=a)
    # h.pre_state() is the previous memory of the RNN
    new_state = pd.add_two(pd.matmul(W, x), pd.matmul(U, h.pre_state()))
    # update the current memory
    h.update(new_state)
    # indicate that the h variables in all step-scopes should be merged
    rnn.add_outputs(h)

out = rnn()
```

Python API functions in the above example:

- `rnn.add_input` indicates that the parameter is a variable that will be segmented into step-inputs.
- `rnn.add_memory` creates a variable used as the memory.
- `rnn.add_outputs` marks the variables that will be concatenated across steps into the RNN output.

### Nested RNN and LoDTensor

An RNN whose step-net includes other RNN operators is known as a *nested RNN*.

For example, we could have a 2-level RNN, where the top level corresponds to paragraphs and the lower level corresponds to sentences.

The following figure illustrates feeding text into the lower level, one sentence per step, and feeding the step outputs to the top level. The final top-level output describes the whole text.

<p align="center">
<img src="./images/2_level_rnn.png"/>
</p>

```python
import paddle as pd

W = pd.Variable(shape=[20, 30])
U = pd.Variable(shape=[20, 20])

W0 = pd.Variable(shape=[20, 30])
U0 = pd.Variable(shape=[20, 20])

# a is the output of some op
a = some_op()

# chapter_data is a set of 128-dim word vectors;
# the first level of LoD is sentence,
# the second level of LoD is chapter
chapter_data = pd.Variable(shape=[None, 128], type=pd.lod_tensor, level=2)

def lower_level_rnn(paragraph):
    '''
    paragraph: the step input, a sequence of sentences
    '''
    rnn = pd.create_rnn_op(output_num=1)
    with rnn.stepnet():
        sentence = rnn.add_input(paragraph, level=0)
        h = rnn.add_memory(shape=[20, 30])
        h.update(
            pd.matmul(W, sentence) + pd.matmul(U, h.pre_state()))
        # the last state serves as the sentence's info
        rnn.add_outputs(h)
    return rnn

top_level_rnn = pd.create_rnn_op(output_num=1)
with top_level_rnn.stepnet():
    paragraph_data = top_level_rnn.add_input(chapter_data, level=1)
    low_rnn = lower_level_rnn(paragraph_data)
    paragraph_out = low_rnn()

    h = top_level_rnn.add_memory(init=a)
    h.update(
        pd.matmul(W0, paragraph_out) + pd.matmul(U0, h.pre_state()))
    top_level_rnn.add_outputs(h)

# just output the last step
chapter_out = top_level_rnn(output_all_steps=False)
```

In the above example, the construction of the `top_level_rnn` calls `lower_level_rnn`. The input is a LoDTensor. The top-level RNN segments the input text data into paragraphs, and the lower-level RNN segments each paragraph into sentences.
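
To make the segmentation concrete, the nesting can be pictured with plain Python lists; this is only an assumed illustration, not the actual LoD encoding:

```python
# chapter -> paragraphs -> sentences -> words; in the real LoDTensor each
# word is a 128-dim vector, strings are used here to keep the sketch short.
chapter = [
    [                                    # paragraph 0
        ["word00", "word01"],            # sentence 0
        ["word10", "word11", "word12"],  # sentence 1
    ],
    [                                    # paragraph 1
        ["word20"],                      # sentence 0
        ["word30", "word31"],            # sentence 1
    ],
]

# The top-level RNN steps over paragraphs (level=1 input); within each of
# its steps, the lower-level RNN steps over that paragraph's sentences.
for paragraph in chapter:        # one top-level step per paragraph
    for sentence in paragraph:   # one lower-level step per sentence
        pass                     # the lower step-net consumes one sentence
```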

By default, `RNNOp` concatenates the outputs from all time steps; if `output_all_steps` is set to `False`, it outputs only the final time step.
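
With the API sketched above, the two output modes would look like this:

```python
# Hypothetical usage, following the examples above.
all_steps = top_level_rnn()                        # outputs of every step, concatenated
last_step = top_level_rnn(output_all_steps=False)  # only the final step's output
```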

<p align="center">
<img src="images/rnn_2level_data.png"/>
</p>
