
Conversation

@dzhwinter (Contributor)

  • Refine the op_test framework; move some standalone functions into the test suite and refine them.
  • Add benchmarkSuite, for checking numerical stability and CPU/GPU speed (a rough sketch of the consistency check follows below).
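
For illustration, a rough sketch of the kind of consistency check such a benchmark suite performs; run_once, iters, and atol are hypothetical names used for this sketch, not the API added in this PR:

import numpy as np

def check_output_stable(run_once, iters=5, atol=1e-5):
    # Run the same op several times and verify the outputs agree,
    # i.e. the output is consistent across different runs.
    baseline = run_once()
    for _ in range(iters - 1):
        np.testing.assert_allclose(run_once(), baseline, atol=atol)
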
@dzhwinter changed the title from "Feature/python benchmark" to "Big data op_test benchmark, for checking output consistent in different runs" on May 17, 2018
@dzhwinter (Contributor, Author)

Changed the op_test input data shape, accelerating the mul_op test from 38.579s to 0.694s.

@jacquesqiao (Member)

Maybe the change to the mul_op test can be separated into another PR?

@dzhwinter (Contributor, Author) commented May 22, 2018

This PR is mainly focused on the big-input-data test case; that's why I noticed how bad the mul_op time cost is. But that's OK if you think we should separate the mul_op change into a new PR.

start = time.time()
for i in range(iters):
    callback(*args, **kwargs)
elapse = time.time() - start
Contributor
+=? Should elapse be initialized before?

@dzhwinter (Contributor, Author)

done.
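
For reference, a minimal sketch of the accumulating variant the reviewer asks about; callback, iters, args, and kwargs are taken from the snippet above, while initializing elapse before the loop and adding each iteration's cost with += is an assumption about the intended fix, not the exact code merged:

import time

def timed_run(callback, iters, *args, **kwargs):
    # Initialize the accumulator before the loop, then add each
    # iteration's cost with +=, so only the callback time is counted.
    elapse = 0.0
    for _ in range(iters):
        start = time.time()
        callback(*args, **kwargs)
        elapse += time.time() - start
    return elapse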

elif dtype == np.bool:
    return core.VarDesc.VarType.BOOL
elif dtype == np.uint16:
    return core.VarDesc.VarType.INT16
@panyx0718 (Contributor) May 22, 2018

uint16 and int16 are different?

@dzhwinter (Contributor, Author)

You are obviously right, but we have to use INT16 here, because

  1. we do not have unsigned data types in OpProto, neither uint16, uint32, nor uint64.
    https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L97

  2. the uint16 in this PR is only used for float16 in op_test; it looks a little confusing, but that is the fact (see the sketch below).
    https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/unittests/op_test.py#L473
    The person who did the fp16 work explained the reason: pybind has no built-in float16 support, so INT16 was chosen to allocate memory of the same size.
    Given how messy our datatypes in Python are, this definitely needs to be cleaned up.
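
For illustration, a minimal numpy sketch of the trick described above: float16 data is reinterpreted as uint16 so it can pass through a layer without built-in float16 support. This is not the exact op_test code, just the underlying idea:

import numpy as np

# float16 test data
fp16_data = np.random.random((2, 3)).astype(np.float16)

# Reinterpret the same 2-byte elements as uint16 so a binding without
# float16 support can still carry the raw bits.
as_uint16 = fp16_data.view(np.uint16)

# Viewing the bits back as float16 recovers the original values exactly.
recovered = as_uint16.view(np.float16)
assert np.array_equal(fp16_data, recovered)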

Block. Users can use the build in instructions to describe their neural
network.
"""
OP_WITHOUT_KERNEL_SET = {
Contributor

This set is sad... Can you file an issue and assign it to me to clean it up, so that I don't forget?

@dzhwinter (Contributor, Author)

done.

def customize_fetch_list(self):
    """
    customize fetch list, configure the wanted variables.
    >>> self.fetch_list = ["Out"]
Contributor

Is this a tab?

@dzhwinter (Contributor, Author)

The output of the operator is automatically inserted into the fetch list; it is the same here.
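
A minimal sketch of how a test might use this hook. The test class name is hypothetical and it is an assumption that the hook lives on the OpTest-style base class from op_test.py; only customize_fetch_list and self.fetch_list come from the diff above:

from op_test import OpTest

class TestMulOpBigData(OpTest):  # hypothetical test name
    def customize_fetch_list(self):
        # Fetch only the "Out" variable; operator outputs are otherwise
        # inserted into the fetch list automatically.
        self.fetch_list = ["Out"]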

@reyoung (Collaborator) left a comment

Excellent!

@dzhwinter merged commit f7c96f0 into PaddlePaddle:develop on Jun 7, 2018
