We consider this page to be a living document for all our internal processes. Our processes evolve and change over time to fit business and team needs.
The purpose of the Sprint Review is to inspect the outcome of the Sprint and determine future adaptations.
During the event, we review what was accomplished in the Sprint and what has changed in our environment. Based on this information, attendees collaborate on what to do next. The Product Backlog may also be adjusted to meet new opportunities.
The meeting notes of Sprint Reviews can be found in our Sprint Review Google Doc.
We plan and track our day-to-day work on our GitHub board. The project is separated into multiple views: Current iteration, Next iteration (both grouped by issue status) and a separate board for each workstream (IDE, Browser Extensions, etc.).
Our current process is as follows:
We work in 2-week iterations. Every iteration has a GitHub milestone, which is created at least one iteration in advance.
While an iteration is ongoing, we plan the next iteration. This is a mostly asynchronous process.
Engineers, the designer, PM, and EM propose issues to be worked on before the Planning Sync by adding them to the next iteration milestone and setting “Status” to “Proposed”.
As much as possible, the proposer involves the necessary stakeholders asynchronously to get agreement on whether the issue should be worked on in the next iteration before the Planning Sync. For example, the PM or EM might ping engineers in GitHub or Slack on whether an issue seems feasible, or engineers might ping their EM and PM to get buy-in on whether the issue fits into our goals.
Teammates can reorder proposed issues on the “Next iteration” board before the Planning Sync to their liking. The order at the time of the Planning Sync is the proposed order.
At the Planning Sync, we take a look at the proposed issues together on the “Next iteration” view to reach agreement on the set of iteration issues, their assignees, and order, taking into account our goals and roadmap, technical aspects, estimates, workloads on individuals, and release dates.
During an iteration, teammates work on their assigned issues in the order they are listed in the “Current iteration” view of the board. When starting work on a task, the teammate updates its status column to “In Progress” to communicate this to the team. This gives a good overview in the “Current iteration” view, which can also be viewed in Kanban layout, of how the iteration is tracking.
If one or more issues planned for an iteration look unlikely to be finished (including testing) within the current iteration while maintaining sustainable work practices, the assignee raises this as soon as possible asynchronously with the team (including the PM and EM), e.g. on the GitHub issue or in Slack. These issues then become proposed issues for the next iteration (nothing carries over automatically, but we also don’t just drop and forget missed issues).
For IDE Extensions and Browser Extensions, we release at the end of every sprint (two weeks).
For Sourcegraph Extensions and other core-dependent work, we follow our monthly release schedule. We intentionally plan to avoid merging significant work less than two days before a release (if a release is on the 20th, our last day to merge is the 18th). Exceptions require explicit approval from both the PM and EM.
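The merge cutoff above is simple date arithmetic; this small sketch (function name is illustrative, not part of our tooling) computes the last safe merge day for a given release date.

```python
from datetime import date, timedelta

def last_merge_day(release_date: date) -> date:
    """Significant work must be merged at least two days before a release."""
    return release_date - timedelta(days=2)

# Example from the rule above: a release on the 20th means the 18th
# is the last day to merge significant work.
cutoff = last_merge_day(date(2022, 1, 20))
```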
The purpose of the Sprint Retrospective is to plan ways to increase quality and effectiveness.
The team inspects how the last Sprint went with regards to individuals, interactions, processes, tools, and their Definition of Done. Inspected elements often vary with the domain of work (e.g., releasing the browser extension is different from updating the extension marketplace). Assumptions that led the team astray are identified and their origins explored. The team discusses what went well during the Sprint, what problems it encountered, and how those problems were (or were not) solved. Keep in mind that acknowledging positive events is as important as capturing negative events.
These discussion items are captured on our retrospective document. Once discussed, we work together on identifying the most helpful changes to improve our effectiveness. The most impactful improvements are addressed as soon as possible. They may even be added to the Sprint Backlog for the next Sprint.
After each sprint, a few manual tasks need to be completed before we start a new sprint.
- Create a new Sprint (milestone) on GitHub. The title should be “Extensibility X”, where X is +2 from the last completed sprint. The description should follow the format “Extensibility Sprint from 2021/11/08 - 2021/11/19”: the sprint starts on a Monday and ends on the Friday of the second week. Save that Friday as the milestone’s “Due Date”.
- Once the milestone is created, visit the Current Iteration board. If there are items in the Proposed, In Progress, or Blocked groups, change their milestone to the upcoming milestone. Once all items are in Done, update the tab view query to filter on the new milestone (e.g., change milestone:“Extensibility 1” to milestone:“Extensibility 2”). Make sure to save the new view.
- Once the Current Iteration board is updated, update the Next Iteration board query as well. Don’t forget to save the view.
- Lastly, go back to the Milestones page and find the milestone that has just finished. Make sure all of its items are closed; move any remaining open items to the current milestone, then close the milestone.
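The milestone bookkeeping in the first step can be sketched as a small helper that derives the next sprint’s title, description, and due date from the checklist’s conventions. The function name and numbering example are illustrative assumptions, not part of our tooling; the arithmetic follows the rules above.

```python
from datetime import date, timedelta

def next_sprint(last_completed_number: int, start_monday: date):
    """Derive the next sprint milestone's title, description, and due date.

    The title is "Extensibility X", where X is +2 from the last completed
    sprint (milestones are created at least one iteration in advance).
    The sprint starts on a Monday and ends on the Friday of the second
    week (11 days later); that Friday is saved as the "Due Date".
    """
    assert start_monday.weekday() == 0, "sprints start on a Monday"
    due_friday = start_monday + timedelta(days=11)
    title = f"Extensibility {last_completed_number + 2}"
    description = (
        f"Extensibility Sprint from {start_monday:%Y/%m/%d}"
        f" - {due_friday:%Y/%m/%d}"
    )
    return title, description, due_friday

# Example matching the checklist: a sprint starting 2021/11/08 runs
# through Friday 2021/11/19.
title, desc, due = next_sprint(4, date(2021, 11, 8))
```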
Specific product feedback about well-defined, small features can be found directly in the corresponding GitHub board tab (e.g., Browser Extension).
More general product feedback that applies to larger features, or that needs more research and planning to be actionable, is kept in Productboard.
The team follows the default code review guidelines with the following addition:
- If the author would like any of the requested reviewers to merge the PR after approval, they add the label.
- If the author would like their PR to be merged once all of the requested reviewers have approved it, they add the label.
- When there are only minor issues, reviewers are encouraged to give “approval with comments” and trust their teammates to address the comments without requiring a follow-up review.
We use pair programming extensively. We’re huge believers in pair programming, especially in remote work contexts, so we aim to pair as much as possible.
Every week, we spend an hour and a half working on experiments outside of our prioritized lists. Examples of the type of work include extensions we feel strongly about, market intelligence tools, automation scripts, etc.