Managed Instance v2.0

This documentation details significant changes in Managed Instance v2.0 compared to the previous version.

Unless we explicitly call it out, you may assume things are unchanged.

Learn more from:

Architecture

The largest architectural change is moving from a standalone VM to GKE. Learn more from our Cloud v2 diagrams, or update them.

Postgres

SOC2/CI-79

The Postgres database now uses a single [Cloud SQL] instance, a fully managed service from GCP. It provides fully automated daily backups with point-in-time recovery and a 7-day retention period. We also take an on-demand backup prior to each upgrade to provide a fallback plan for unanticipated events.
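As a minimal sketch of how such a pre-upgrade backup can be taken and verified with the Cloud SQL CLI (the instance name below is a placeholder):

# Take an on-demand backup of the Cloud SQL instance prior to an upgrade
gcloud sql backups create --instance=<cloud-sql-instance> --description="pre-upgrade backup"

# List backups (automated daily and on-demand) to confirm it exists
gcloud sql backups list --instance=<cloud-sql-instance>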

GKE

SOC2/CI-79

All services of a Cloud instance run on a dedicated GKE cluster. We use Backup for GKE to provide fully automated daily backups with retention set to 90 days. The backup includes all production disks and application state. Additionally, a backup is always taken prior to an upgrade or other major operation.
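For reference, backups managed by Backup for GKE can be inspected from the gcloud beta surface (a sketch only; project, location, and plan names are placeholders):

# List backup plans in the cluster's region
gcloud beta container backup-restore backup-plans list --location=<region> --project=<project>

# List backups taken under a plan (daily and pre-upgrade)
gcloud beta container backup-restore backups list --backup-plan=<plan> --location=<region> --project=<project>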

Deployment Environments

SOC2/CI-100

Deployment artifacts are stored in a centralized GitHub repository, sourcegraph/cloud. Each environment is namespaced under environments/$env. A centralized repo makes sharing global configuration much easier compared to having multiple repos.
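A hypothetical sketch of the layout implied by that convention (actual directories in sourcegraph/cloud will differ):

environments/
  dev/    # short-lived development instances
  prod/   # long-lived internal and customer instances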

Learn more from the diagram.

Development (dev) environment

All dev projects are created under the Sourcegraph Cloud V2 Dev GCP project folder and the environments/dev directory in the sourcegraph/cloud repo.
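To list the dev projects under that folder, something like the following works (the folder ID is a placeholder):

# List GCP projects under the Sourcegraph Cloud V2 Dev folder
gcloud projects list --filter="parent.type=folder AND parent.id=<dev-folder-id>"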

This is our internal development environment. All dev deployments should be short-lived, and they should always be torn down when they are no longer needed.

All engineering teammates are allowed to create instances and run experiments in the dev environment. Access in general is unrestricted.

Production (prod) environment

All prod projects are created under the Sourcegraph Cloud V2 Prod GCP project folder and the environments/prod directory in the sourcegraph/cloud repo.

Access to the prod environment is restricted and follows our access policy.

This is our production environment and consists of internal and customer instances. All prod deployments are long-lived.

Below is a list of long-lived internal instances:

Internal instances are created for various testing purposes:

  • testing changes prior to the monthly upgrade of customer instances. Once a new release is made available, the Cloud team will follow the managed instances upgrade tracker (created prior to the monthly upgrade) to proceed with the upgrade process.
  • testing significant operational changes prior to applying them to customer instances
  • long-lived instances for product teams to test important product changes, e.g. scaletesting.

All customer instances are considered part of the prod environment and all changes applied to these customers should be well-tested in the dev environment and internal instances.

s2 instance

This is the internal Cloud dogfood instance for the entire company. #discuss-cloud-ops is responsible for rolling out nightly builds on this instance. Additionally, they are responsible for maintaining the infrastructure, including Cloud SQL and the underlying VMs.

Operation playbook: go/s2-ops

Deployment status: go/s2-deploy

Playbook

The following processes only apply to Cloud v2.0:

How to work with Cloud instances?

Please visit go/cloud-ops to locate the instance you would like to access; there you will find instructions for:

  • accessing the database
  • viewing logs
  • working with the k8s deployments and accessing containers to troubleshoot problems (see the sketch below)
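A minimal sketch of the kind of kubectl commands involved, assuming you already have credentials for the instance's GKE cluster (the deployment and namespace names below are placeholders):

# Fetch credentials for the instance's GKE cluster
gcloud container clusters get-credentials <cluster-name> --region=<region> --project=<project>

# View logs for a deployment
kubectl logs deployment/<deployment-name> -n <namespace>

# Open a shell inside a running container to troubleshoot
kubectl exec -it deployment/<deployment-name> -n <namespace> -- /bin/sh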

How to request access to Cloud instances UI?

Learn more from Request access to Cloud instances UI

How to locate a Cloud instance in the deployment repo?

Please visit go/cloud-ops to locate the instance.

How to update & apply terraform modules?

Please visit go/cloud-ops and follow the instructions in the Deploy terraform changes section.
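As a rough sketch only (the authoritative steps are in go/cloud-ops), the standard cdktf workflow looks like this, run from the instance's environment directory:

cdktf synth    # generate Terraform configuration from the CDKTF program
cdktf diff     # review the planned changes
cdktf deploy   # apply the changes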

How to use a fork of cdktf-cli?

Sometimes there are bugs upstream (e.g. hashicorp/terraform-cdk#2397, hashicorp/terraform-cdk#2398) and we have to maintain our own fork of cdktf-cli.

To use the fork in GitHub Actions, modify the setup-mi2 action to reference the fork and pin it to a specific commit, branch, or tag.

https://github.com/sourcegraph/cloud/blob/64d3ddfb2ecbff5c1a200aa8ac981ff1a48abf5e/.github/workflows/mi_create.yml#L97-L106

- name: setup mi2 tooling
  uses: ./.github/actions/setup-mi2
  with:
    # Add a comment explaining why a fork is required
    # cdktf-version: 0.13.3
    cdktf-repository: sourcegraph/terraform-cdk
    cdktf-ref: fix/tfc-planned-status

Use the fork locally:

gh repo clone sourcegraph/terraform-cdk
cd terraform-cdk
yarn install
yarn build
# in your shell config file or within the terminal session
alias cdktfl=/abspath-to-terraform-cdk-repo/packages/cdktf-cli/bundle/bin/cdktf

Then replace all cdktf commands with cdktfl.
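For instance (diff and deploy are standard cdktf subcommands):

# Use the locally built fork instead of the released cdktf CLI
cdktfl diff
cdktfl deploy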