Note
GitHub Code Quality is currently in public preview and subject to change. During public preview, Code Quality will not be billed, although Code Quality scans will consume GitHub Actions minutes.
About GitHub Code Quality
GitHub Code Quality helps you improve code reliability, maintainability, and overall project health by surfacing actionable feedback and offering automatic fixes for findings in pull requests and on the default branch.
When you enable Code Quality, two types of analysis run:
- CodeQL quality queries run using code scanning analysis and identify problems with the maintainability, reliability, or style of code. This analysis runs on changed code in all pull requests against the default branch, and periodically on the full default branch.
- Large Language Model (LLM)-powered analysis provides additional insights into potential quality concerns beyond what is covered by deterministic engines like CodeQL. This analysis runs automatically on files changed in recent pushes to the default branch. These findings are displayed in Code Quality's AI findings dashboard, under the Security tab of the repository.
When a quality issue is detected by either type of analysis, Copilot Autofix suggests a relevant fix that developers can review and apply.
On pull requests, Code Quality results are displayed as comments from the github-code-quality bot. Each comment includes a suggested autofix wherever possible.
LLM-powered analysis for recent pushes
After each push to the default branch, the LLM analyzes recently changed files for maintainability, reliability, and other quality issues. Code Quality inspects your code and provides feedback using a combination of natural language processing and machine learning.
Input processing
The code changes are combined with other relevant, contextual information to form a prompt, and that prompt is sent to a large language model.
Language model analysis
The prompt is then passed through the Copilot language model, which is a neural network that has been trained on a large body of text data. The language model analyzes the input prompt.
Response generation
The language model generates a response based on its analysis of the input prompt. This response can take the form of natural language suggestions and code suggestions.
Output formatting
The response generated by Code Quality is presented to the user directly, providing code feedback linked to specific lines of specific files. Where Code Quality has provided a code suggestion, the suggestion is presented as a suggested change, which can be applied with a couple of clicks.
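The four stages above can be sketched as a simplified pipeline. This is an illustrative assumption only: every function, field, and value below is hypothetical, and GitHub's actual implementation is not public. The model call is stubbed with a canned finding so the flow from changed files to formatted, line-linked feedback can be demonstrated end to end.

```python
# Illustrative sketch of the four analysis stages described above.
# All names and data here are hypothetical, not GitHub's real pipeline.

def build_prompt(changed_files, context):
    """Input processing: combine code changes with contextual information."""
    sections = [f"File: {path}\n{diff}" for path, diff in changed_files.items()]
    return context + "\n\n" + "\n\n".join(sections)

def analyze(prompt):
    """Language model analysis and response generation (stubbed).

    A real system would send the prompt to a large language model;
    this stub returns a canned finding so the flow can be shown."""
    return [{
        "file": "app.py",
        "line": 12,
        "message": "Function is too long; consider extracting a helper.",
        "suggestion": "def helper(): ...",
    }]

def format_findings(findings):
    """Output formatting: link each finding to a specific file and line,
    and flag findings that come with a suggested change."""
    return [
        f"{f['file']}:{f['line']} - {f['message']}"
        + (" (suggested change available)" if f.get("suggestion") else "")
        for f in findings
    ]

changed = {"app.py": "@@ -1,20 +1,40 @@ ..."}
prompt = build_prompt(changed, "Review these changes for maintainability issues.")
comments = format_findings(analyze(prompt))
print(comments[0])
```

In this sketch, the formatted output ties each piece of feedback to a file and line, mirroring how Code Quality attaches its comments to specific locations in a pull request.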
GitHub Copilot Autofix suggestions
On pull requests, Code Quality results found by code scanning analysis are sent as input to the LLM. If the LLM can generate a potential fix, the github-code-quality bot posts a comment with a suggested change directly in the pull request.
In addition, users can request autofix generation for results in the default branch.
For more information on the suggestion generation process for GitHub Copilot Autofix, see Responsible use of Copilot Autofix for code scanning.
Use case for GitHub Code Quality
The goals of GitHub Code Quality are to:
- Surface code quality issues across your repository, so developers and repository administrators can quickly identify, prioritize, and report on areas of risk.
- Accelerate remediation work by offering Copilot Autofix suggestions for results found by scans of the default branch, as well as for findings in recent pushes to the default branch.
- Quickly provide actionable feedback on a developer's code. On pull requests, Code Quality combines information on best practices with details of the codebase and findings to suggest a potential fix to the developer.
Improving the performance of GitHub Code Quality
If you encounter any issues or limitations with suggested fixes on pull requests, we recommend that you provide feedback by using the thumbs up and thumbs down buttons on the github-code-quality bot's comments. This can help GitHub to improve the tool and address any concerns or limitations.
Limitations of GitHub Code Quality
Limitations of Code Quality's LLM-powered analysis
Code Quality's LLM-powered analysis uses the same underlying language model and analysis engine as GitHub Copilot code review. Therefore, it shares similar limitations when analyzing code quality. Key considerations include:
- Incomplete detection
- False positives
- Code suggestion accuracy
- Potential biases
For detailed information about these limitations, see Responsible use of GitHub Copilot code review.
You should always review the findings surfaced by GitHub Code Quality's LLM-powered analysis to verify their accuracy and applicability to your codebase.
Limitations of Copilot Autofix
Copilot Autofix for Code Quality findings cannot generate a fix for every finding in every situation. The feature operates on a best-effort basis and is not guaranteed to succeed every time.
When you review a suggestion from Copilot Autofix, always consider the limitations of AI: carefully review and verify the suggested changes, and edit them as needed before you accept them.
For more information on the limitations of Copilot Autofix, the quality of Copilot Autofix suggestions, and the best way to mitigate its limitations, see Responsible use of Copilot Autofix for code scanning.
Provide feedback
You can provide feedback on GitHub Code Quality in the community discussion.
Next steps
To see how GitHub Code Quality works on your default branch, surfacing code quality issues and helping you understand your repository's code health at a glance, see Quickstart for GitHub Code Quality.