Maybe the speed at which large files are processed could be improved by:
- tokenizing the file in the main process
- invoking the sniffs on the token list in parallel in forked processes
- reporting the results back to the main process
I'm not sure how much change is required to do this or how much performance could be gained from it.
Of course, fixing a file should still happen sequentially, though maybe several files could be fixed in parallel.
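A rough sketch of that flow in PHP, just to illustrate the idea (this is not PHP_CodeSniffer's actual internals): `tokenizeFile()` and `sniffTokens()` are hypothetical placeholders for the tokenizer and sniff-running steps, and the fork/merge plumbing assumes the `pcntl` extension is available (CLI only). Since forked children inherit the parent's memory, the token lists built in the main process are visible to the workers without extra copying.

```php
<?php
// Hypothetical sketch only; not PHP_CodeSniffer's real API. Requires ext-pcntl.

function tokenizeFile(string $path): array
{
    // Placeholder: build the token list for one file in the main process.
    return token_get_all((string) file_get_contents($path));
}

function sniffTokens(string $path, array $tokens): array
{
    // Placeholder: run all sniffs over one file's token list and return a report.
    return ['file' => $path, 'errors' => [], 'warnings' => []];
}

function sniffInParallel(array $files, int $workers = 4): array
{
    // Step 1: tokenize everything in the main process.
    $tokensByFile = [];
    foreach ($files as $path) {
        $tokensByFile[$path] = tokenizeFile($path);
    }

    // Step 2: fork workers; each child inherits the token lists and
    // sniffs its own slice of the files.
    $chunks   = array_chunk(array_keys($tokensByFile), max(1, (int) ceil(count($files) / $workers)));
    $children = [];

    foreach ($chunks as $chunk) {
        $resultFile = tempnam(sys_get_temp_dir(), 'sniff');
        $pid        = pcntl_fork();

        if ($pid === -1) {
            throw new RuntimeException('Failed to fork a worker process');
        }

        if ($pid === 0) {
            // Child: sniff its files, serialise the reports to a temp file, exit.
            $reports = [];
            foreach ($chunk as $path) {
                $reports[] = sniffTokens($path, $tokensByFile[$path]);
            }
            file_put_contents($resultFile, serialize($reports));
            exit(0);
        }

        $children[$pid] = $resultFile; // Parent: track each child's result file.
    }

    // Step 3: wait for every child and merge their reports back in the parent.
    $merged = [];
    foreach ($children as $pid => $resultFile) {
        pcntl_waitpid($pid, $status);
        $merged = array_merge($merged, unserialize(file_get_contents($resultFile)));
        unlink($resultFile);
    }

    return $merged;
}

// Usage example: $report = sniffInParallel(glob('src/*.php'), 4);
```

Having each worker write its serialized report to a temp file and letting the parent merge them after `pcntl_waitpid()` keeps all reporting (and any fixing) in the main process, which matches the last point above.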