Conversation

@putcn (Contributor) commented Mar 15, 2018

fix: #8911

@putcn putcn requested review from abhinavarora and luotao1 March 15, 2018 18:58
@abhinavarora previously approved these changes Mar 15, 2018

@abhinavarora (Contributor) left a comment


LGTM! Just a minor typo.

- Parameter server: each parameter server stores one shard of the whole neural network model's parameters. When trainers upload gradients, the parameter servers run the optimization computation and then send the updated parameters back to the trainers.

PaddlePaddle supports both synchronous stochastic gradient descent (SGD) and asynchronous SGD.
The training of synchronous random gradient descent for neural network can be archieved by cooperation of trainers and parameter servers.
achieved
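As context for the quoted passage: in synchronous SGD with parameter servers, every server waits for gradients from all trainers, averages them, applies one optimizer step to its shard, and returns the updated parameters. A minimal NumPy sketch of that flow (all names here are illustrative, not PaddlePaddle's actual API):

```python
# Illustrative sketch of synchronous SGD with sharded parameter servers.
# Class and function names are hypothetical, not from PaddlePaddle.
import numpy as np

class ParameterServer:
    """Holds one shard of the model parameters and applies SGD updates."""
    def __init__(self, shard, lr=0.1):
        self.shard = shard.astype(float)
        self.lr = lr

    def apply_gradients(self, grads):
        # Synchronous step: average the gradients from all trainers,
        # then update this shard once.
        self.shard -= self.lr * np.mean(grads, axis=0)
        return self.shard.copy()

def synchronous_step(servers, trainer_grads):
    """One synchronous SGD iteration.

    trainer_grads[t][s] is trainer t's gradient for server s's shard.
    Every server waits for all trainers before updating (sync SGD).
    """
    updated = []
    for s, server in enumerate(servers):
        grads = [trainer_grads[t][s] for t in range(len(trainer_grads))]
        updated.append(server.apply_gradients(np.stack(grads)))
    return updated  # new parameters are sent back to the trainers

# Two shards spread over two servers, gradients from two trainers.
servers = [ParameterServer(np.zeros(2)), ParameterServer(np.zeros(3))]
grads = [
    [np.ones(2), np.ones(3)],         # trainer 0
    [3 * np.ones(2), 3 * np.ones(3)], # trainer 1
]
new_params = synchronous_step(servers, grads)
print(new_params[0])  # mean gradient 2, lr 0.1 -> [-0.2, -0.2]
```

Asynchronous SGD differs only in the waiting: each server applies a trainer's gradient as soon as it arrives rather than averaging over all trainers first.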

@putcn (Contributor, Author) commented Mar 15, 2018

thanks @abhinavarora, typo fixed.

@abhinavarora (Contributor) left a comment


LGTM!

@abhinavarora abhinavarora merged commit e382e42 into PaddlePaddle:develop Mar 15, 2018
@putcn putcn deleted the translate-distributed-training branch April 25, 2018 00:21