[Beta] Automated Test Selection
Speed up your CI by running only the tests you need, not your entire test suite.
Automated Test Selection is in Open Beta for Python
Automated Test Selection currently works only for Python, but more languages are on the way!
How can I use it?
Codecov's Automated Test Selection works within your CI pipeline to identify the tests that are sufficient to exercise the diff of your change. Use the following guides to get set up with Automated Test Selection in your pipeline of choice.
- Using GitHub Actions as your CI? Get set up in minutes using our purpose-built GitHub Action. Check out Automated Test Selection with GitHub Actions
- Don't use GitHub Actions? Check out Getting Started with Automated Test Selection on your CI
- Not sure if this is right for you? Why not try this out locally first? Check out Getting Started with Automated Test Selection - General Guide
What is Automated Test Selection?
Automated Test Selection is a Codecov feature that ensures your CI pipeline only runs the tests needed to completely exercise the diff of a commit. It is intended to greatly reduce the time needed to run test suites without compromising on test coverage, and without the investment in the complex CI pipelines and tooling (e.g., Bazel) typically required to facilitate test selection.
Automated Test Selection is composed of 3 parts that work together to ensure your commit diff is properly exercised:
- Static Analysis - this feature is responsible for giving Codecov the information it needs to correctly select the tests that will be run for a given commit. Read more about Static Analysis.
- Label Analysis - this feature analyzes a given list of test names (labels), leveraging the git diff and the static analysis information about the code to generate the list of tests that need to be run to ensure the diff of the commit is exercised. Read more about Label Analysis.
- Codecov CLI - this is the interface through which users interact with the static analysis and label analysis features, typically from the CI. Read more about The Codecov CLI.
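Both analyses are exposed as subcommands of the Codecov CLI. The sketch below assumes a `CODECOV_STATIC_TOKEN` environment variable and `HEAD^` as the base commit; exact flags can vary between CLI versions, so treat it as illustrative rather than canonical.

```bash
# Run static analysis on the current commit and upload the results to Codecov.
codecovcli static-analysis --token="$CODECOV_STATIC_TOKEN"

# Ask Codecov which tests (labels) need to run for the diff against the base
# commit. --dry-run prints the selected labels instead of invoking the runner.
codecovcli label-analysis \
  --token="$CODECOV_STATIC_TOKEN" \
  --base-sha="$(git rev-parse HEAD^)" \
  --dry-run
```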
How does it work?
Testing tools typically have a stage of collecting tests followed by a stage of executing tests. Automated Test Selection sits between those two stages, acting as a filter that decides which tests actually need to be executed given the diff of the commit being tested.
The general flow when using Automated Test Selection is as follows:
- The CI runs static analysis on the new commit and uploads that information to Codecov.
- The test names (labels) are collected and that list is uploaded to Codecov, along with the commit the current HEAD is being compared to (usually HEAD^). The commit being compared against is referred to as BASE.
- Codecov analyzes the submitted labels, leveraging the git diff between BASE and HEAD, the static analysis information, and previous test runs to generate a minimal subset of those labels that is enough to fully exercise the diff.
- Codecov sends this subset of labels back to the CI, which then executes those tests.
- Test coverage is uploaded to Codecov with label context.
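Put together, a CI job might look roughly like the sketch below. It assumes a pytest project with the pytest-cov plugin, a `CODECOV_STATIC_TOKEN` environment variable, and `HEAD^` as BASE; the flag name `smart-tests` is illustrative, and the exact output of `--dry-run` can vary between CLI versions, so consult the guides above for the canonical setup.

```bash
# 1. Upload static analysis information for the new commit.
codecovcli static-analysis --token="$CODECOV_STATIC_TOKEN"

# 2. Ask Codecov for the minimal subset of tests (labels) for this diff.
#    (The exact --dry-run output format depends on the CLI version.)
TESTS=$(codecovcli label-analysis \
  --token="$CODECOV_STATIC_TOKEN" \
  --base-sha="$(git rev-parse HEAD^)" \
  --dry-run)

# 3. Execute only the selected tests, recording per-test coverage contexts
#    so the report carries label information. $TESTS is intentionally
#    unquoted so each label becomes its own pytest argument.
python -m pytest --cov=. --cov-context=test $TESTS

# 4. Upload the resulting coverage report to Codecov under a flag.
codecovcli do-upload --token="$CODECOV_TOKEN" --flag smart-tests
```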
Some things to note about the process outlined above:
- There's some overhead compared to simply running tests and uploading results to Codecov.
- The static analysis information needs to be uploaded to Codecov for every commit.
- The static analysis information needs to be uploaded before the label analysis can be executed.
- For a given set of labels, providing different commits as the BASE will generate different results, because the diff will be different.
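BASE is simply a commit SHA you compute yourself. Two common choices are sketched below; `origin/main` is an assumption about your default branch name.

```bash
# Compare against the immediate parent commit (the common case).
BASE_SHA=$(git rev-parse HEAD^)

# Or compare against the merge base with the default branch, so the diff
# covers everything the branch changes relative to main.
BASE_SHA=$(git merge-base HEAD origin/main)

codecovcli label-analysis --token="$CODECOV_STATIC_TOKEN" --base-sha="$BASE_SHA" --dry-run
```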
Prerequisites
- A repository that is actively uploading coverage reports to Codecov, preferably via our robust new CLI
- A Python project that uses the pytest library to run tests
- Flags - in the guides, we also show you how to set up a Flag. You can learn more about flags here
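Since labels are the test names your runner collects, you can preview them locally with pytest's collection mode; this is a quick sanity check, not a required step.

```bash
# List the test identifiers (labels) pytest would run, without executing them.
python -m pytest --collect-only -q
```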
Caveats
We err on the safe side
Label analysis is not perfect, so in some cases it might not be possible for us to be certain whether a given test needs to be run. We only tell you not to run a given test if we are 100% certain it doesn't need to be run.
This means that in some scenarios we may tell you to run more tests than you actually need to run.
FAQ
Automated Test Selection is still in active development, so things are changing fast and many questions arise. We're still building this portion of the documentation; if you have questions or comments, you can suggest changes on this document or reach out to [email protected] to send feedback directly to Codecov's product team.
Tried integrating Automated Test Selection and running into issues? Check out our 🔥Troubleshooting page.