Should Fluid distributed training do cost merging in the parameter server? #7671

@typhoonzero

Description

In the Paddle v1 implementation, https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/pserver/ParameterServer2.cpp#L1228 contains operations that merge the cost sent from every trainer. I am not sure why this is needed, and whether we also have to implement it in the current Fluid distributed training framework.
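For context, a minimal sketch of what such cost merging could look like: each trainer reports its summed batch cost and sample count, and the parameter server combines the reports into one global average cost. This is an illustration only; `TrainerCost` and `merge_costs` are hypothetical names, not Paddle APIs, and the v1 code may merge differently.

```python
from dataclasses import dataclass

@dataclass
class TrainerCost:
    total_cost: float   # sum of per-sample losses on this trainer
    num_samples: int    # number of samples that produced that sum

def merge_costs(reports):
    """Merge per-trainer cost reports into one global average.

    Weighting by sample count means trainers with larger batches
    contribute proportionally more, rather than averaging the
    per-trainer means directly.
    """
    total = sum(r.total_cost for r in reports)
    samples = sum(r.num_samples for r in reports)
    return total / samples if samples else 0.0

# Two trainers with different batch sizes:
reports = [TrainerCost(12.0, 32), TrainerCost(20.0, 64)]
global_cost = merge_costs(reports)  # (12 + 20) / (32 + 64)
```

The merged value is useful for reporting a single training-wide cost, but it does not affect gradient aggregation itself, which may be why it is unclear whether Fluid needs it.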
