feat(pypi): generate filegroup with all extracted wheel files #3011
Adds a filegroup with all the files that came from the extracted wheel.
This has two benefits over using `whl_filegroup`: it avoids copying the wheel and makes the set of files directly visible to the analysis phase.
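As a hypothetical illustration of consuming the exposed files (the `@pip//torch:whl_files` label is an assumption for this sketch, not necessarily the name this PR generates):

```starlark
load("@rules_python//python:defs.bzl", "py_binary")

py_binary(
    name = "app",
    srcs = ["app.py"],
    # Assumed label for the generated filegroup; each extracted file is
    # visible to Bazel individually rather than as one opaque archive,
    # and no copy of the (possibly multi-GB) wheel is made.
    data = ["@pip//torch:whl_files"],
)
```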
Some wheels are multiple gigabytes in size (e.g. torch, cuda, tensorflow), so
avoiding the copy and archive processing saves a decent amount of time.
Knowing the specific files at analysis time is generally beneficial. The
particular case I ran into was that the CC rules were unhappy with a TreeArtifact
of header files because they couldn't enforce their check that included headers
are properly provided by a declared dependency (the layering check, I believe).
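For instance, with individual header files visible at analysis time, they can be wired into a `cc_library` where each `#include` can be attributed to a specific file (the label and include path below are assumptions for illustration, not part of this PR):

```starlark
cc_library(
    name = "torch_c_api",
    # Assumed label for a headers-only subset of the extracted files; with
    # plain files (rather than a TreeArtifact) the CC rules can verify that
    # every included header is provided by a declared dependency.
    hdrs = ["@pip//torch:header_files"],
    includes = ["include"],  # illustrative include path
)
```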
Another example is the `unused_inputs_list` optimization, which lets an action
tell Bazel which of its inputs it didn't actually use. For example, an action
could take all the wheel's files as inputs, only read the headers, and report
every non-header file as unused; changes to those other files then no longer
re-run the action that only cares about headers.
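A minimal sketch of such a rule, using the real `unused_inputs_list` parameter of `ctx.actions.run` (the rule name, attributes, and `//tools:process_headers` tool are hypothetical, not from this PR):

```starlark
def _headers_only_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".processed")
    unused = ctx.actions.declare_file(ctx.label.name + ".unused_inputs")
    ctx.actions.run(
        executable = ctx.executable._tool,
        arguments = [out.path, unused.path] + [f.path for f in ctx.files.srcs],
        inputs = ctx.files.srcs,
        outputs = [out, unused],
        # The tool writes the paths of inputs it ignored (everything that
        # isn't a header) to `unused`; Bazel then skips re-running this
        # action when only those files change.
        unused_inputs_list = unused,
    )
    return [DefaultInfo(files = depset([out]))]

headers_only = rule(
    implementation = _headers_only_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_tool": attr.label(
            default = "//tools:process_headers",  # hypothetical tool
            executable = True,
            cfg = "exec",
        ),
    },
)
```

With a filegroup of all extracted wheel files passed to `srcs`, this pattern keeps the action's input set broad while limiting what actually triggers rebuilds.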