Case Review

The customer support case review practices allow support engineers not only to receive feedback on their casework but also to collaborate actively with their managers in the ongoing calibration and execution of case reviews.

These practices are:

  • Peer reviews (similar to the idea of code reviews)
  • Manager reviews (which include self-review)

Peer reviews (weekly cadence; every other week at first)

This is similar to the idea of code reviews in the sense that support engineers would review cases for their teammates based on our guiding principles and the individual measures of success.

The goal of this practice is to provide an opportunity for support engineers to have conversations that honour our intention to deliver a memorable customer experience to clients on a regular basis. It is not intended to be a performance assessment tool or a report card.

This process (inspired in part by peer review practice at Wistia) would look like this:

  1. Each support engineer is randomly matched with another support engineer every week (draws would be announced on #customer-support-internal every Monday). Because the team currently has an odd number of members (13), one-to-one pairing is not possible, so a support engineer may give a review to one teammate and receive one from a different teammate each week.
  2. Each support engineer selects one case belonging to the team member they drew for that week's review. The selected case could be:
    • Any active (open) case on the support engineer’s queue - The idea behind this is that it would provide an opportunity for support engineers to use the feedback that they have received to deliver a positive outcome on the case being reviewed. That ship would have sailed for a closed case.
    • Any case that was resolved within seven (7) days of the current review cycle or is in ‘solved’ status - While reviewing such cases might only have retrospective value, it ensures that support engineers are not missing out on valuable coaching opportunities because we are focused only on active cases. These cases would also be recent enough that the details are fresh in the case owner’s mind.
  3. Team members review the case that they have selected. The reviewer and the case owner have the freedom to decide whether this happens synchronously or asynchronously. This would generally go as follows:
    • The reviewer makes a copy of the review template (rubric or document) in our shared peer review folder and moves it to the folder belonging to the support engineer whose case they are reviewing.
    • The reviewer renames the copied review template to match the date of the first day (Monday) of the current review cycle.
    • The reviewer fills out the review header with the case ID and the names of the reviewer and the case owner.
    • The reviewer assigns each review criterion a value of ‘yes’ or ‘no’ and leaves comments in the relevant cell (or line/paragraph if the document is preferred to the rubric) to help the case owner understand their decision/thought process.
      • Note: While technical review is currently outside the scope of this practice, reviewers are welcome to leave comments that address this side of things in a separate section if they first confirm with their teammates that they would find this valuable.
    • The reviewer notifies the case owner (via Slack, an assigned comment, or whatever works best) that the review has been completed.
    • The case owner may collaborate with the reviewer within the comment thread to ask questions regarding their decision or share context. It is also fine if they prefer to do this synchronously.
    • Support engineers are also welcome to reach out to their managers if they feel they could benefit from a second opinion on a review that they have received from, or intend to give to, a teammate. It is only natural to need this often in the early days of this practice.
    • Feedback should be delivered in an honest, respectful, and compassionate manner (please see our company code of conduct, guidelines on conflict resolution, and this piece on having difficult conversations for guidance). Folks may also find it helpful to check the team’s readme section to understand how their teammates prefer to receive feedback but it is also perfectly fine to just ask.
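
The weekly draw and the template naming described in the steps above can be sketched in a few lines. This is only an illustration: the function names and the ISO date format are assumptions, not part of the team's actual tooling. With an odd head count (13), a shuffled cycle guarantees that everyone gives and receives exactly one review, though not necessarily to and from the same person.

```python
import random
from datetime import date, timedelta

def draw_review_assignments(engineers, seed=None):
    """Randomly assign each engineer a teammate to review.

    With an odd head count, pairing everyone off is impossible, so this
    uses a shuffled cycle: each engineer reviews the next person in the
    shuffle, which may differ from whoever reviews them.
    """
    rng = random.Random(seed)
    order = list(engineers)
    rng.shuffle(order)
    # Each engineer reviews the next one in the shuffled order (wrapping around),
    # so everyone gives exactly one review and receives exactly one review.
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

def cycle_start(today=None):
    """Date of the Monday that starts the current review cycle."""
    today = today or date.today()
    return today - timedelta(days=today.weekday())
```

A draw like this could back the Monday announcement, and `cycle_start().isoformat()` would produce a cycle-start date such as `2024-03-04` for naming the copied template (the exact naming format is the team's choice).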

The outcome of this practice would be that each support engineer:

  • Gives and receives one (1) peer review each week (every other week at first).
  • Feels supported by their manager both in giving and receiving reviews as the need arises.

Self review + manager reviews (monthly cadence)

This 1:1-type review would involve a self-assessment by the support engineer using a variation of the peer review template (rubric or document) which is then validated by their manager. This practice would be in addition to the regular proactive coaching on open cases that their managers provide on a weekly basis.

The process would look like this:

  1. At the end of each month, each manager randomly selects an open case for each member of their team and invites them to present another one of their choosing.
    • Ideally, these two cases would be different from the ones selected for peer review during the month.
  2. Each support engineer self-assesses on the two (2) selected cases as follows:
    • They assign each review criterion a value of ‘yes’ or ‘no’ and leave comments in the relevant cell (or line/paragraph if the document is preferred to the rubric) to help their manager understand their decision/thought process.
    • They notify their manager (via Slack, an assigned comment, or whatever works best) that the review has been completed.
    • Their manager reviews their assessment and offers feedback as necessary. They may collaborate within the comment thread or synchronously during 1:1s or another meeting dedicated to this practice.

The outcome of this practice would be that each support engineer:

  • Receives two (2) self + manager reviews each month.
  • Gets coached to become more aware of their impact and to level up their case review skills.