
Conversation

Contributor

@sidgoyal78 sidgoyal78 commented Nov 29, 2017

Resolves #5612 by adding an implementation of the row-convolution operator.

A few notes:

@sidgoyal78 sidgoyal78 changed the title Add row conv operator Add row conv operator (in progress) Nov 29, 2017
@qingqing01 qingqing01 self-requested a review December 1, 2017 02:17
@sidgoyal78 sidgoyal78 changed the title Add row conv operator (in progress) Add row conv operator Dec 3, 2017
Contributor

@qingqing01 qingqing01 left a comment

@sidgoyal78 I like your code, including the code of the previous PRs. In my opinion, your code is very beautiful and high quality.


$$
out_{i, :} = \sum_{j=i}^{i + context} in_{j, :} \cdot W_{j-i, :}
$$

Contributor Author

Thanks for the pointer, I have included it now.

AddInput("Filter",
"(Tensor), the input(Filter) is a learnable parameter. It "
"is a 2-D tensor with shape (future_context x N), where, "
"future_context is the batch size and N is the data dimension.");
Contributor

I like your code; I think the name future_context is good :)

> future_context is the batch size

This should read: future_context is the future context length.

Contributor Author

Fixed, thanks.

auto *Out = context.Output<LoDTensor>("Out");

Out->mutable_data<T>(context.GetPlace());
context.ShareLoD("X", "Out");
Contributor

Since InferShape already calls ctx->ShareLoD("X", "Out"), and the previous ShareLoD bug in InferShape has been fixed, line 123 can be removed.

Contributor Author

Fixed.

if (w == 0) {
  cot_seq(k, d) = weights(w, d) * cip_seq(k + w, d);
} else {
  cot_seq(k, d) += weights(w, d) * cip_seq(k + w, d);
}
Contributor

Maybe we can use an elementwise mul and a column-wise sum to remove the for loops at lines 145 and 147. But that optimization can be done in the future, so I think it is OK here for this PR.

Contributor Author

Sure.

void Compute(const framework::ExecutionContext &context) const override {
auto *X = context.Input<LoDTensor>("X");
auto *Filter = context.Input<Tensor>("Filter");
auto *Out = context.Output<LoDTensor>("Out");
Contributor

Per the naming style:
X -> x
Filter -> filter
Out -> out

https://google.github.io/styleguide/cppguide.html#Variable_Names

Contributor Author

Fixed.

Contributor Author

(I still need to fix the .cu code.)

Contributor

@qingqing01 qingqing01 left a comment

LGTM.

@sidgoyal78 sidgoyal78 merged commit 4ff6bc1 into PaddlePaddle:develop Dec 11, 2017