Self-Hosted Deployment Strategies
This page outlines three types of Codecov Self-Hosted deployment that can be used to meet a wide variety of user needs.
Introduction
Because Codecov Self-Hosted is a containerized service, there are numerous strategies that can be used to deploy it successfully. How Codecov Self-Hosted is deployed ultimately depends on a combination of available resources, uptime requirements, and planned usage.
This documentation is not intended to be an all-encompassing guide for deploying Codecov Self-Hosted.
Deployment Types
- High Configuration, High Availability, High Scale -- This is well suited to deployments that must scale to meet the needs of many users within extremely large organizations. This is the strategy employed by Codecov's SaaS offering: codecov.io.
Deployment Strategy: High Configuration, High Availability, High Scale
Recommended for High Scale
The following deployment is a near replica of what is used for Codecov's SaaS offering: codecov.io. It is recommended when high availability and high scale are required, and maintainers don't mind taking on the additional burden of Kubernetes and Terraform.
This deployment strategy is recommended for the most demanding Codecov Self-Hosted deployments. It is managed using Terraform and requires Kubernetes. Codecov provides Terraform configuration files supporting this deployment for all three major cloud providers: Azure, AWS, and Google Cloud Platform. Those resources and supporting documentation can be found on GitHub.
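As an illustration of the general shape such a Terraform-managed deployment takes, the minimal sketch below provisions a small GKE cluster that Codecov's containers could then be deployed into. The project ID, region, and resource names are hypothetical placeholders, not values from Codecov's published configuration; refer to the Terraform resources on GitHub for the supported definitions.

```hcl
# Minimal sketch of a Terraform-managed GKE cluster for Codecov Self-Hosted.
# The project ID, region, and names below are hypothetical placeholders.

provider "google" {
  project = "my-codecov-project"   # hypothetical project ID
  region  = "us-central1"          # hypothetical region
}

resource "google_container_cluster" "codecov" {
  name     = "codecov-self-hosted"
  location = "us-central1"

  # Node pools are managed separately so that web and worker machine
  # types can be sized independently (see the sizing notes below).
  remove_default_node_pool = true
  initial_node_count       = 1
}
```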
A Note on System Requirements and Resource Needs
There are two main components to consider when determining the performance requirements of the underlying server(s) that will support Codecov Self-Hosted: web traffic to the Codecov Self-Hosted frontend, and report processing demands.
Since every organization is different, it's impossible to make hard and fast recommendations that will meet every deployment's needs.
As a general building block, Codecov uses Google's n1-standard-4 machine type for deploying web containers and n1-standard-8 for deploying worker containers (see: Google Machine Types).
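Continuing the hypothetical sketch above, separate node pools let the web and worker containers run on the machine types just mentioned. The pool names and node counts are illustrative starting points only, not a sizing recommendation.

```hcl
# Hypothetical node pools using the machine types noted above:
# n1-standard-4 for web containers, n1-standard-8 for worker containers.

resource "google_container_node_pool" "web" {
  name       = "codecov-web"
  cluster    = google_container_cluster.codecov.name
  location   = "us-central1"   # hypothetical region
  node_count = 1               # illustrative starting point

  node_config {
    machine_type = "n1-standard-4"
  }
}

resource "google_container_node_pool" "worker" {
  name       = "codecov-worker"
  cluster    = google_container_cluster.codecov.name
  location   = "us-central1"
  node_count = 1

  node_config {
    machine_type = "n1-standard-8"
  }
}
```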
By far the largest performance bottleneck for Codecov is coverage report processing time, so it's helpful to consider resource needs in terms of the number of reports processed per minute. If your install will initially process 25 or fewer reports per minute (i.e., 25 or fewer commits running through a Codecov-enabled CI/CD pipeline per minute), it is recommended to deploy the equivalent of at least one n1-standard-8 for all infrastructure. Once deployed, the performance of each instance can be monitored and scaled up or down as needed.
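Since report processing is the bottleneck, the worker tier is the component most likely to need scaling. One way to express "scale up or down as needed", sketched below with Terraform's Kubernetes provider, is a CPU-based HorizontalPodAutoscaler targeting the worker Deployment. The Deployment name, namespace, replica bounds, and CPU threshold are assumptions for illustration, not values taken from Codecov's published configuration.

```hcl
# Hypothetical CPU-based autoscaling for the worker tier. The Deployment
# name ("codecov-worker"), namespace, replica bounds, and 70% CPU target
# are illustrative assumptions.

resource "kubernetes_horizontal_pod_autoscaler" "worker" {
  metadata {
    name      = "codecov-worker"
    namespace = "codecov"
  }

  spec {
    min_replicas                      = 2
    max_replicas                      = 10
    target_cpu_utilization_percentage = 70

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "codecov-worker"
    }
  }
}
```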