NOTE: We do not formally offer this service to our customers. This page is the product of a proof of concept.
This page describes the process of migrating a self-hosted Sourcegraph deployment to a Managed Instance. A team member from the Customer Engineering (CE) team may request a migration on behalf of a customer. The technical steps are carried out by the Delivery Team.
- CE creates an issue using the customer migration template.
- Delivery provides a Level of Effort (LOE) within the migration request.
- The Delivery Engineering Manager (EM) and Product Manager (PM) coordinate and schedule a Delivery resource for the migration.
- The Delivery resource performs the migration according to the Migration Procedure below.
Within the Customer Migration issue provided by CE, add the following information:
- Upgrade path with time estimate. Sourcegraph must be upgraded one minor version at a time. Identify the upgrade path from the customer's self-hosted version to the version currently deployed to managed instances. Generally you can estimate 20-30 minutes per minor version upgrade.
- Resource estimate.
This information should be contained in the managed instance request template. Reach out to the CE for more details if needed.
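The upgrade-path estimate described above amounts to a quick calculation. The version numbers below are illustrative, not an actual customer's versions:

```shell
# Hypothetical example: customer runs 3.30.x, managed instances run 3.33.x.
# Upgrades go one minor version at a time: 3.31 -> 3.32 -> 3.33.
from_minor=30
to_minor=33
hops=$((to_minor - from_minor))
echo "upgrade hops: $hops"
echo "estimated time: $((hops * 20))-$((hops * 30)) minutes"
```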
Create a Managed Instance using the VERSION of the customer's self-hosted installation. This is provided in the migration request.
Load the customer data into the managed instance from the backup.
```shell
export NEW_DEPLOYMENT=<red or black>
export PROJECT_PREFIX=managed
export CUSTOMER=<customer slug>
```
CE will provide the location of the encrypted payload. Using the GPG key in 1Password, decrypt the payload.
```shell
gpg --decrypt name.tar.gz.asc -o name.tar.gz
```
Use `gcloud compute scp` to copy the decrypted tarball. Replace `name.tar.gz` with the name of the tarball.

```shell
gcloud compute scp --project "$PROJECT_PREFIX-$CUSTOMER" --tunnel-through-iap name.tar.gz root@default-$NEW_DEPLOYMENT-instance:~/name.tar.gz
```
Use `gcloud compute ssh` to get a shell into the managed instance.

```shell
gcloud compute ssh --project "$PROJECT_PREFIX-$CUSTOMER" --tunnel-through-iap root@default-$NEW_DEPLOYMENT-instance
```
Stop all containers; we don't want state being modified during the data loading.

```shell
cd /deployment/docker-compose
docker-compose down
```
Untar the tarball and move the database dump into place. Replace `name.tar.gz` with the name of the tarball.

```shell
tar -xzvf name.tar.gz
mv pgsql_dump.sql /tmp/pgsql_dump.sql
```
Edit `docker-compose.yaml` to mount the PostgreSQL dump into the container: add the following line to the `volumes:` stanza for the database service.

```yaml
volumes:
  - /tmp/pgsql_dump.sql:/tmp/pgsql_dump.sql
```

Then start the containers:

```shell
docker-compose up -d
```
Open a shell in the appropriate database container. Choose the appropriate value for `<container_name>`:

- `pgsql` is the frontend database container
- `codeintel-db` is the Code Intel database container

```shell
docker exec -it <container_name> /bin/sh
```
When Sourcegraph first starts, either the `frontend` or the `migrator` will run the initial database migrations. Since we are loading database dumps that contain `CREATE DATABASE` instructions, we need to undo this initial creation.
```shell
# Start a connection to postgres
psql -U sg
```

```sql
-- connect to the postgres database; the currently selected database cannot be dropped
\c postgres
-- delete the existing database
DROP DATABASE sg;
-- create an empty database
CREATE DATABASE sg;
-- quit the psql session
\q
```
```shell
psql -U sg sg < /tmp/pgsql_dump.sql
```
Once the database dump is loaded successfully, exit the database container shell session.
Some customers may only provide the "pgsql" (frontend) database dump. If the customer provides a Code Intel database dump, repeat the procedure from steps 3F to 3K, replacing `pgsql` with `codeintel-db`.
Edit the Site Configuration JSON file provided by the customer.

- Set `externalUrl` to the new value matching the convention.
- Configure the alerts similarly to Step 27 in the Creating a managed instance procedure.
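For illustration, the relevant site configuration fields might look like the sketch below; the URL and notifier settings are placeholders, not the actual convention or real credentials:

```json
{
  "externalUrl": "https://customer.sourcegraph.com",
  "observability.alerts": [
    {
      "level": "critical",
      "notifier": {
        "type": "opsgenie",
        "apiKey": "<redacted>"
      }
    }
  ]
}
```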
```shell
docker-compose up -d
```
- Verify all containers are running.
- Review the `frontend` logs, checking for critical errors.
Take a snapshot at this point to preserve the state of the instance before proceeding any further. In case a mistake is made later in the procedure, a known starting point will be preserved.
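A disk snapshot can be taken with `gcloud compute disks snapshot`. The sketch below only assembles and echoes the command; the disk name and naming scheme are assumptions, so confirm the actual disk with `gcloud compute disks list` before running it:

```shell
# Build the snapshot command for the instance's disk (dry run: echoed, not executed).
snapshot_name="${CUSTOMER:-customer}-pre-upgrade-$(date +%Y%m%d)"
echo gcloud compute disks snapshot "default-${NEW_DEPLOYMENT:-red}-instance" \
  --project "${PROJECT_PREFIX:-managed}-${CUSTOMER:-customer}" \
  --snapshot-names "$snapshot_name"
```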
Sourcegraph must be upgraded one minor version at a time.
- Perform in-place upgrades from the customer’s self-hosted version to the latest version deployed to managed instances.
Repeat the in-place upgrade process for each version in the upgrade path.
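The repetition can be sketched as a loop over the upgrade path; the version list and the per-version commands here are illustrative, not an actual release list:

```shell
# Walk the upgrade path one minor version at a time (versions are examples).
for version in 3.31 3.32 3.33; do
  echo "upgrading deployment to $version"
  # For each version: check out the matching docker-compose release, then
  # `docker-compose pull && docker-compose up -d`, and wait for migrations.
done
```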
This will require interpretation and will be unique for every migration. Use your best judgement; it may be helpful to evaluate site health by examining the Grafana dashboards for existing alerts and reviewing service logs.
Take a snapshot after resolving any critical alerts or site configuration issues. This will provide a known starting point before transferring over to the customer. If the customer makes a mistake between the time of handover and the next automated snapshot, a restore point will be available.
- Notify CE that the managed instance is ready for the customer.
- Notify CS that the migration has been completed for the customer; CS updates their notes.