
Conversation

@lattner (Contributor) commented Jul 30, 2018

Enhance constant expressions to handle copy_addr in the top-level context.
Enhance deabstraction to constant-fold scalarToTensor into a Const node.

There is still more to do; all of the warnings in the testcase are bogus.

@lattner added the tensorflow label (This is for "tensorflow" branch PRs.) on Jul 30, 2018
@lattner requested review from bgogul and mhong on July 30, 2018 23:00
@lattner (Contributor, Author) commented Jul 30, 2018

@swift-ci please test tensorflow

@mhong left a comment

Nice!

// Dig the element type out of the TensorHandle result type.
auto eltType =
    getTensorHandleElementType(inst->getType().getASTType());
auto int32Ty =

add a comment that this is for shape?

or, given SymbolicValue::getArray({}, int32Ty, allocator) is used in a few places, consider making it a util method. the method addScalarShapeArrayAttr() in TFPartition.cpp uses such code as well.
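A rough sketch of the kind of util method being suggested (the name and parameter types are assumptions; only the SymbolicValue::getArray({}, int32Ty, allocator) call is taken from the code under review):

/// Hypothetical helper: build the empty Int32 array that encodes a scalar
/// (rank-0) tensor shape, so callers don't repeat the construction.
static SymbolicValue getScalarShapeArray(CanType int32Ty,
                                         llvm::BumpPtrAllocator &allocator) {
  return SymbolicValue::getArray({}, int32Ty, allocator);
}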

@lattner (Contributor, Author) replied:

Done.

auto shapeId = SILTensorOpInfo::OperandClass::Shape;
auto shapeSuffix = SILTensorOpInfo::getOperandClassSuffix(shapeId);
attributes.push_back({
    context.getIdentifier(std::string("value") + shapeSuffix), shape

Given that the previous attr is called value$tensor, do we want to call this something like __shape$shape (I'm suggesting __ because this attr is a "pseudo attr" -- it won't be lowered to an op attr in the GraphDef)?

Also, do we still need to add things like $tensor, $shape at this point? IIUC, we'd like to delete TF graph lowering code that relies on such specifiers.

@lattner (Contributor, Author) replied:

Yeah, this is a direction I'd love to see! We'll need some of the specifiers, but not nearly all the ones we have today.

}
}

if (auto *cai = dyn_cast<CopyAddrInst>(user)) {

Sorry, I lost track of whether this chunk of code helps with the test case at hand.

I noticed 4 CopyAddr insts in the output of the deabstraction pass, and they seem to still be present in the input to the partition pass (when -O is on). Also, they are not on tensor values. One example is:

copy_addr %147 to [initialization] %152 : $*Int64 // id: %153 

(If this chunk helps with other test cases, including it in the patch is fine.)

@lattner (Contributor, Author) replied:

This is necessary to see through to the constants that get folded.
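A minimal sketch of what seeing through a copy_addr can look like in a constant evaluator (getConstantValue and setValue are assumed helper names, and this is not necessarily how the patch implements it): when the walk reaches a copy_addr, propagate the symbolic value known for the source address to the destination, so later loads from the destination still resolve to the constant.

if (auto *cai = dyn_cast<CopyAddrInst>(user)) {
  // Look up the symbolic value known for the source address (assumed helper).
  auto srcValue = getConstantValue(cai->getSrc());
  // Record the same value for the destination address (assumed helper), so
  // the copy is transparent to constant folding.
  if (srcValue.isConstant())
    setValue(cai->getDest(), srcValue);
}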

allocator),
inst->getType(), inst->getLoc(),
DeviceType::ALL, B);
inst->replaceAllUsesWith(constant->getResult(0));
Contributor comment:

This comment is unrelated to this PR, but is about our constant folding implementation. Replacing the use of inst with a constant can trigger more constant folding opportunities. Will that be taken care of?

@lattner (Contributor, Author) replied:

RAUW won't do that; it just replaces the instruction itself. Remember that in this case we're producing a Const node, not something the generic SIL optimizer knows about. It doesn't handle TensorFlow-level constant folding.

@lattner (Contributor, Author) commented Jul 31, 2018

@swift-ci please test tensorflow

1 similar comment
@lattner (Contributor, Author) commented Jul 31, 2018

@swift-ci please test tensorflow

@lattner merged commit 53ab5c1 into tensorflow on Jul 31, 2018
@lattner deleted the deabstraction-o0 branch on July 31, 2018 04:59
