
Conversation

@tonyyang-svail commented Jan 8, 2018

A simple implementation of parallel_do supporting multi-GPU.

... → ParallelDo:
      Split input
      Copy parameters to multiple GPUs → Wait
      Forward on multiple GPUs → Wait
      Merge output → Wait
... → ParallelGradDo:
      Split output@grad → Wait
      Backward on multiple GPUs → Wait
      AllReduce parameters → Wait
...
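The forward path of the pipeline above can be sketched in plain Python, with threads standing in for GPUs. This is only an illustrative mock of the split/copy/forward/merge steps, not the actual PaddlePaddle implementation; the function name `parallel_do`, the round-robin split, and the `forward` callback signature are all assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_do(inputs, params, n_gpus, forward):
    # Split input: one shard per "GPU" (round-robin split, assumed for simplicity)
    shards = [inputs[i::n_gpus] for i in range(n_gpus)]
    # Copy parameters to every "GPU" (here: plain Python dict copies)
    local_params = [dict(params) for _ in range(n_gpus)]
    # Forward on all "GPUs" in parallel; joining the pool is the "Wait"
    with ThreadPoolExecutor(max_workers=n_gpus) as pool:
        outputs = list(pool.map(lambda a: forward(*a), zip(shards, local_params)))
    # Merge output: flatten the per-GPU results into one list
    return [y for out in outputs for y in out]
```

The backward path (ParallelGradDo) would mirror this shape, splitting output@grad, running backward per GPU, and summing gradients across replicas with an AllReduce before the "Wait".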

TODO:

@tonyyang-svail tonyyang-svail requested a review from reyoung January 8, 2018 07:52
@chengduoZH (Contributor) commented Jan 9, 2018

> ParallelDo: Split input → Copy parameters to multiple GPUs → Wait → Forward on multiple GPUs → Wait

It seems that the first Wait is unnecessary. Whether the current GPU can begin its forward computation does not depend on whether another GPU has finished receiving its parameters.

@tonyyang-svail (Author)

@chengduoZH thanks for the suggestion. In a future implementation, we will have a separate IO stream and computation stream; the waiting then becomes necessary, because the compute stream must not start the forward pass before the IO stream has finished copying the parameters.

Let's target the correctness of this op first. :)
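The IO-stream/compute-stream interaction described above can be simulated with threads and events. This is a hedged sketch, not PaddlePaddle code: `run_with_streams`, `copy_param`, and `forward` are hypothetical names, and a `threading.Event` stands in for a CUDA-style event that the compute stream waits on.

```python
import threading

def run_with_streams(copy_param, forward, n_gpus):
    # One "IO stream" copies parameters; each "compute stream" must wait
    # for its own copy to finish before starting the forward pass.
    copied = [threading.Event() for _ in range(n_gpus)]
    results = [None] * n_gpus

    def io_stream():
        for i in range(n_gpus):
            copy_param(i)
            copied[i].set()      # signal: parameters ready on GPU i

    def compute_stream(i):
        copied[i].wait()         # the "Wait" discussed above
        results[i] = forward(i)

    threads = [threading.Thread(target=io_stream)]
    threads += [threading.Thread(target=compute_stream, args=(i,))
                for i in range(n_gpus)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Note that each compute stream waits only on its own GPU's copy, which is consistent with @chengduoZH's observation: no GPU needs to wait for another GPU's parameter transfer.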

@reyoung (Collaborator) left a comment


Cool

@tonyyang-svail tonyyang-svail changed the title [WIP] feature/parallel_gpu feature/parallel_gpu Jan 10, 2018
@tonyyang-svail tonyyang-svail merged commit 4bcc0b6 into PaddlePaddle:develop Jan 10, 2018

3 participants