Creating a managed instance

Creating a new managed instance involves the steps below. For basic operations such as accessing an instance during these steps, see managed instances operations.


Follow the steps below to install mi2:

git clone
cd cloud


The process follows the steps below (see flow chart):

  1. Set environment variables
  2. Check out a new branch
  3. Init deployment artifacts - GCP Project
  4. Init deployment artifacts - Infrastructure
  5. Init deployment artifacts - K8S
  6. Deploy application
  7. Commit your changes

Set environment variables

export SLUG=company
export ENVIRONMENT=dev
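
The generate commands later in this flow also read $DOMAIN, which is not exported above. A minimal sketch of setting it; the value here is an assumption and should be replaced with the instance's real domain:

```shell
# DOMAIN is read by `mi2 generate` later in this flow; the value below is a
# placeholder assumption -- substitute the instance's real domain.
SLUG="${SLUG:-company}"
export DOMAIN="$SLUG.example.com"
echo "$DOMAIN"
```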

Check out a new branch

git checkout -b $SLUG/create-instance

Init deployment artifacts - GCP Project

mi2 generate will:

  • generate the terraform module and prompt you to apply it
  • generate the kustomization manifests and helm overrides based on output from the terraform module

mi2 generate -e $ENVIRONMENT --domain $DOMAIN --slug $SLUG

The above command will fail on the first run; follow the prompt to manually apply the terraform module, or run the commands below.

Before applying the terraform module, gather the computed values and configure them as environment variables:

export INSTANCE_ID=$(mi2 instance get -e $ENVIRONMENT --slug $SLUG | jq -r '')
export PROJECT_ID=$(mi2 instance get -e $ENVIRONMENT --slug $SLUG | jq -r '.status.gcpProjectId')
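
Since the first jq filter above is left blank, it is worth sanity-checking both values before applying terraform; jq prints the literal null when a path is missing. A minimal sketch, with placeholder values standing in for the real mi2 output:

```shell
# Placeholder values stand in for the real `mi2 instance get` output.
INSTANCE_ID="${INSTANCE_ID:-dev-abc123}"
PROJECT_ID="${PROJECT_ID:-my-gcp-project}"

# Fail fast if either variable is empty or the literal "null",
# which is what jq emits for a missing path.
for pair in "INSTANCE_ID=$INSTANCE_ID" "PROJECT_ID=$PROJECT_ID"; do
  name="${pair%%=*}"
  value="${pair#*=}"
  if [ -z "$value" ] || [ "$value" = "null" ]; then
    echo "error: $name is empty or null" >&2
    exit 1
  fi
done
echo "ok"
```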

Apply the project terraform module

cd environments/$ENVIRONMENT/deployments/$INSTANCE_ID/terraform/project
terraform init
terraform apply
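
The deployment directory is produced by mi2 generate; a small sketch (the instance id below is a placeholder) to confirm the expected layout exists before running terraform:

```shell
# Assumed layout produced by `mi2 generate`; INSTANCE_ID is a placeholder.
ENVIRONMENT="${ENVIRONMENT:-dev}"
INSTANCE_ID="${INSTANCE_ID:-dev-abc123}"
TF_DIR="environments/$ENVIRONMENT/deployments/$INSTANCE_ID/terraform/project"
if [ ! -d "$TF_DIR" ]; then
  echo "warning: $TF_DIR does not exist; rerun 'mi2 generate' first" >&2
fi
echo "$TF_DIR"
```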

Init deployment artifacts - Infrastructure

Rerun the generate command to generate the infra terraform module.

mi2 generate -e $ENVIRONMENT --domain $DOMAIN --slug $SLUG

The above command will fail again; run the commands below to manually apply the infra terraform module.

cd environments/$ENVIRONMENT/deployments/$INSTANCE_ID/terraform/infra
terraform init
terraform apply

Init deployment artifacts - K8S

Rerun the generate command to generate the kustomize manifests and helm overrides (this time it should not error out):

mi2 generate -e $ENVIRONMENT --domain $DOMAIN --slug $SLUG

Deploy application

Connect to the cluster locally by running

cd environments/$ENVIRONMENT/deployments/$INSTANCE_ID/terraform/infra
export CLUSTER_NAME=$(terraform show -json | jq -r '.. | .resources? | select(.!=null) | .[] | select((.type == "google_container_cluster") and (.mode == "managed")) |')
gcloud container clusters get-credentials $CLUSTER_NAME --region us-central1 --project $PROJECT_ID

Deploy the manifests

cd environments/$ENVIRONMENT/deployments/$INSTANCE_ID/kubernetes
kustomize build --load-restrictor LoadRestrictionsNone --enable-helm . | kubectl apply -f -

Commit your changes

git add .
git commit -m "$SLUG: create instance"

Create a new pull request and merge it


PVC is stuck at Pending

You may see the following events:

failed to provision volume with StorageClass "sourcegraph": rpc error: code = InvalidArgument desc = CreateVolume failed to pick zones for disk: failed to pick zones from topology: need 2 zones from topology, only got 1 unique zones

This happens when only one worker node is available in the pool; we can force the worker pool to scale up by deploying the pause pod below.

kubectl apply -f
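
Since the manifest path above is elided, here is a minimal sketch of what such a pause deployment could look like; the name, image tag, and anti-affinity key are all assumptions. The zone anti-affinity is what forces the autoscaler to bring up a node in a second zone:

```yaml
# Hypothetical pause Deployment; all names and values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-scale-up
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pause-scale-up
  template:
    metadata:
      labels:
        app: pause-scale-up
    spec:
      # Require the two replicas to land in different zones, forcing the
      # node pool to scale up until two zones each have a worker node.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: pause-scale-up
              topologyKey: topology.kubernetes.io/zone
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
```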

Don’t forget to clean it up after the PVCs are provisioned.

kubectl delete -f

How do I check my Sourcegraph deployment rollout progress?

We do not use ArgoCD for the V2 MVP.

Visit ArgoCD at

I am unable to apply the terraform module due to permission error.

roles/owner grants excessive permissions, but it makes developing a greenfield project much easier; we will revisit permissions at a later date.

Ensure the Google Group you belong to is present here. Otherwise, consult the GCP access process to obtain access.

Any other questions?

Please reach out to #cloud