Conversation

JukkaL commented Dec 18, 2024

In some cases gc was consuming a significant fraction of CPU, so run gc less often.

This made incremental checking of torch 27% faster for me (based on 100 measurements), and also sped up incremental self check by about 20% and non-incremental self check by about 10%. All measurements were on Python 3.13.
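
For context, this kind of tuning goes through `gc.set_threshold`. Below is a minimal sketch of the idea; the specific values `(200_000, 30, 30)` and the helper name `tune_gc` are illustrative assumptions, not taken from this PR's diff.

```python
import gc

# CPython's default thresholds are (700, 10, 10): a generation-0 collection
# runs after roughly 700 net container allocations. A type checker allocates
# huge numbers of mostly long-lived objects (AST nodes, types, symbol tables),
# so the collector runs constantly and finds little garbage. Raising the
# thresholds makes collections much rarer, trading some peak memory for less
# time spent in the collector.

def tune_gc() -> None:
    # Much larger gen-0 threshold; larger second/third thresholds so that
    # older-generation collections are rare (their exact semantics vary a bit
    # between CPython versions).
    gc.set_threshold(200_000, 30, 30)


if __name__ == "__main__":
    print("before:", gc.get_threshold())  # typically (700, 10, 10)
    tune_gc()
    print("after:", gc.get_threshold())
```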

JukkaL commented Dec 18, 2024

Don't merge yet -- I'm still performing some additional measurements. I'm not sure if tuning very early is actually useful.

JukkaL commented Dec 19, 2024

I simplified the PR, since tuning GC early before most imports seemed to make this a little bit slower instead of faster.
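
For reference, "tuning GC early before most imports" would look roughly like the following at the top of an entry-point module, with the thresholds raised before the heavy imports run. The layout below is a hypothetical illustration (assuming mypy is installed), not the code that was in the PR.

```python
import gc

# Same illustrative thresholds as in the sketch above, applied at the very top
# of the entry point so the collector also runs less often while the rest of
# the program is being imported. This is the variant that was tried and then
# dropped, since it measured slightly slower overall.
gc.set_threshold(200_000, 30, 30)

# The heavy imports only happen after tuning (the late import is intentional).
from mypy import build  # noqa: E402
```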

JukkaL changed the title from "Use more aggressive gc thresholds, and tune earlier" to "Use more aggressive gc thresholds for a big speedup" on Dec 19, 2024
github-actions commented

According to mypy_primer, this change doesn't affect type check results on a corpus of open source code. ✅

JukkaL merged commit 823c0e5 into master on Dec 19, 2024
19 checks passed
JukkaL deleted the faster-tune-gc branch on December 19, 2024 at 18:33