Deployment using Terraform or Helm for AWS, GCP, and Azure
Full deployment scripts using Terraform or Helm can be found for AWS, GCP, and Azure here: https://github.com/codecov/enterprise-resources
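As a rough sketch of the Terraform workflow against that repository (directory names and steps here are illustrative; the repository's README documents the actual layout and required variables):

```shell
# Hypothetical walkthrough -- paths are placeholders; see the
# enterprise-resources README for the real layout and variables.
git clone https://github.com/codecov/enterprise-resources.git
cd enterprise-resources
# change into the stack matching your cloud (AWS, GCP, or Azure), then:
terraform init      # download providers and modules
terraform plan      # review the resources to be created
terraform apply     # provision the Codecov stack
```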
Codecov does not endorse or support custom Self-hosted deployments beyond AWS, GCP, and Azure.
| Requirement | Details |
|---|---|
| Basic Ingredients | 1. Git-based code host (GitHub.com, GitHub Enterprise, GitLab Community Edition, GitLab Enterprise Edition, Bitbucket, Bitbucket Server)<br>2. Coverage report generation<br>3. A CI provider |
| Installation Lead | Installing Codecov requires deep knowledge of your organization's infrastructure and how it is implemented, including your CI and security configurations. These stakeholders often reside on the Operations or SRE team. |
| Access Controls | To complete the Codecov install, your team will need an account key (provided by Codecov) and access to object storage, your Kubernetes cluster (if applicable) / compute, your source control provider, your database, and Redis. |
| Deployment Management | Terraform, or, for testing an install, Docker Compose |
| Software Distribution | Access to Docker Hub from within your network |
| Hardware | Managed virtual private cloud (AWS, GCP, or Azure) with at least one machine for installation testing |
| Database | PostgreSQL 10.x via a managed cloud service (e.g., RDS, Cloud SQL, or Azure Database for PostgreSQL) |
| Cache | Redis via a managed service (e.g., ElastiCache) |
| Storage | S3-compatible storage (S3, GCS, Azure Blob Storage, Ceph, MinIO) |
| Codecov License Key | Provided by the Codecov team, which you can request here |
S3 Compatibility Requirements
To use any S3- or MinIO-compatible storage, you must be able to grant at least the following permissions. The application will not work if any of them cannot be granted:

`s3:GetObject`, `s3:PutObject`, `s3:AbortMultipartUpload`, `s3:ListMultipartUploadParts`, `s3:GetBucketLocation`, `s3:HeadBucket`, `s3:ListBucket`, `s3:ListBucketVersions`
| Approximate peak uploads per hour | < 100 | < 1,000 | < 2,500 | 5,000+ |
|---|---|---|---|---|
| Approximate peak report size | < 50 MB | < 100 MB | < 300 MB | < 1 GB |
| Web (minimum) | 2 instances, RAM optimized (2 vCPU / 8 GB RAM) | 3 instances, RAM optimized (2 vCPU / 8 GB RAM) | 6 instances, RAM optimized (2 vCPU / 8 GB RAM) | 9 instances, RAM optimized (2 vCPU / 8 GB RAM) |
| API (minimum)* | 2 instances, RAM optimized (2 vCPU / 8 GB RAM) | 2 instances, RAM optimized (2 vCPU / 8 GB RAM) | 4 instances, RAM optimized (2 vCPU / 8 GB RAM) | 6 instances, RAM optimized (2 vCPU / 8 GB RAM) |
| Worker (minimum) | 3 instances, compute optimized (4 vCPU / 8 GB RAM) | 9 instances, compute optimized (4 vCPU / 8 GB RAM) | 15 instances, compute optimized (4 vCPU / 8 GB RAM) | 23 instances, compute optimized (4 vCPU / 8 GB RAM) |
| Redis (non-clustered only) | 1 CPU / 1.5 GB | 2 CPU / 3 GB | 2 CPU / 6 GB | 4 CPU / 14 GB |
| Database (PostgreSQL) | 2 CPU / 5 GB RAM | 6 CPU / 16 GB RAM | 8 CPU / 26 GB RAM | 16 CPU / 40 GB RAM |

\* The API currently uses about 30-40% of Web capacity. It will handle more responsibilities in the future, so resource usage may increase.
It is highly recommended to set up infrastructure monitoring in Grafana or a similar reporting tool from day one of running Codecov Self-hosted.
Key metrics to monitor:
- Celery Queue Size should always be trending towards zero.
- CPU usage on the worker nodes. We recommend no more than 2 workers per compute node due to high CPU requirements.
- Storage growth. Depending on upload frequency and report size, be sure to allocate sufficient database storage.
- Because web and worker instances consume resources differently (workers prioritize CPU, web prioritizes RAM), it is strongly recommended to run them on separate instances.
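As an illustration of the first point, the sketch below (not part of Codecov; the sample numbers are made up) checks whether a series of Celery queue-depth samples is trending toward zero. In practice the samples would come from your StatsD/Grafana pipeline or from the queue's backing store:

```python
# Illustrative sketch: decide whether Celery queue-depth samples are
# trending toward zero, which is what a healthy worker pool looks like.

def queue_is_draining(samples, tolerance=0):
    """Return True if successive queue-depth samples drain toward zero,
    allowing up to `tolerance` upward blips between samples."""
    if not samples or samples[-1] == 0:
        return True  # empty queue (or no data yet) is the healthy state
    blips = sum(1 for prev, cur in zip(samples, samples[1:]) if cur > prev)
    return blips <= tolerance and samples[-1] < samples[0]

# A healthy worker pool drains its backlog:
assert queue_is_draining([120, 80, 40, 10, 0])
# Sustained growth means workers cannot keep up -- scale them out:
assert not queue_is_draining([10, 40, 80, 120])
```

In a real alerting rule you would feed this a sliding window of the queue-size gauge and page when it returns False for several consecutive windows.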
Read more about available StatsD metrics here
- Receive an Enterprise License key from Codecov staff and place it into your codecov.yml.
- Create an OAuth level integration in your repo service provider and have the Client ID and Client Secret ready to use.
- If using GitHub or GitHub Enterprise, you will need to have created a GitHub App integration as well.
- Create an external, managed Postgres database (e.g., AWS RDS, Google Cloud SQL, etc.) and have the URL with username and password in your codecov.yml.
- Create an object storage mechanism (e.g., an AWS S3 bucket, a Google Cloud Storage bucket, etc.) and have the bucket name on hand, plus credentials to supply to the codecov.yml.
- Caveat: if you're using S3, you can instead run Codecov on a VM with a StorageAdmin S3 role, or use a role granted the S3 permissions listed above.
- Create a separate, managed Redis database (e.g., AWS ElastiCache, etc.) and have the credentials to supply to the codecov.yml.
- Supply the needed configuration derived from the above steps into the codecov.yml
- Navigate to our configuration repo: https://github.com/codecov/enterprise-resources
- Choose the configuration preset that makes the most sense for your infrastructure and needs.
- Follow the instructions there, then verify that your Codecov installation is working by navigating to it in your browser.
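Pulling the checklist above together, a minimal codecov.yml might look something like the sketch below. This is illustrative only: key names can differ between Codecov Self-hosted versions and every value is a placeholder, so consult the configuration reference for your release.

```yaml
# Illustrative sketch only -- all values are placeholders and key names
# may differ between Codecov Self-hosted versions.
setup:
  enterprise_license: "<license key provided by Codecov>"
  codecov_url: "https://codecov.example.internal"

services:
  database_url: "postgres://codecov:<password>@<managed-db-host>:5432/codecov"
  redis_url: "redis://<managed-redis-host>:6379"
  minio:                       # S3-compatible object storage settings
    bucket: "<bucket name>"
    access_key_id: "<access key>"
    secret_access_key: "<secret key>"

github:                        # or github_enterprise / gitlab / bitbucket
  client_id: "<OAuth Client ID>"
  client_secret: "<OAuth Client Secret>"
```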
Read more about Enterprise Deployment Strategies for production use.