
Detect issues in parallel #421

@aik099

Description

Perhaps the speed at which large files are processed can be improved by:

  1. tokenizing the file in the main process
  2. invoking sniffs on the token list in parallel, in forked processes
  3. reporting the results back to the main process

I'm not sure how much change would be required to do this, or how much performance could be gained.

Of course, fixing should still happen sequentially — although perhaps several files could be fixed in parallel.
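The three steps above can be sketched with a process pool: tokenize once in the parent, fan the (read-only) token list out to workers that each run one sniff, and gather the findings in the parent. This is only a toy illustration, not PHP_CodeSniffer code — the tokenizer and both sniffs below are invented for the example.

```python
import multiprocessing as mp

# Hypothetical sniffs: each scans a shared token list and returns offending tokens.
def sniff_long_words(tokens):
    return [t for t in tokens if len(t["content"]) > 10]

def sniff_tabs(tokens):
    return [t for t in tokens if "\t" in t["content"]]

def run_sniff(args):
    # Worker entry point: run one sniff over the token list.
    sniff, tokens = args
    return sniff.__name__, sniff(tokens)

def check_file(source):
    # 1. Tokenize once, in the main process (toy tokenizer: split on spaces).
    tokens = [{"type": "T_STRING", "content": w} for w in source.split(" ")]
    sniffs = [sniff_long_words, sniff_tabs]
    # 2. Run each sniff in a forked worker process.
    with mp.Pool(processes=2) as pool:
        results = pool.map(run_sniff, [(s, tokens) for s in sniffs])
    # 3. Collect results back in the main process, keyed by sniff name.
    return dict(results)
```

The main cost in this model is shipping the token list to each worker (pickling overhead), which is why forking *after* tokenization — so workers inherit the tokens via copy-on-write memory — is attractive compared to re-tokenizing per worker.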
