The purpose of a calibration session during our impact review process is to ensure that evaluations of teammate performance are fair and consistent across the organization. Calibration sessions typically involve a group of managers or supervisors who meet to review and discuss individual teammate performance ratings, in order to reach a consensus on the accuracy and fairness of those ratings. The goal is to identify any potential biases or discrepancies in the ratings, and to make adjustments as necessary to ensure that all employees are evaluated fairly and objectively.
Calibration sessions can also help identify areas where additional training or development may be needed for teammates, as well as opportunities for recognition and rewards for exceptional performance. Ultimately, the goal of a calibration session is to ensure that the performance review process is transparent, consistent, and fair, and that teammates are evaluated based on their actual job performance, rather than any personal biases or perceptions of their work.
In preparation for your Calibration Session with your People Partner, all Managers are required to complete the following pre-work:
- Complete all downward reviews for your direct reports utilizing the SBI model. Click here to review best practices for Impact Review writing. Ensure you have reviewed the peer feedback that they received to incorporate into your review.
- Review new rating guides along with expected distribution of each new rating.
- Log into Lattice and use the tag/note function in the Lattice calibration module. You can find instructions on how to use the tag/notes section in this training video.
- In Lattice, Managers are expected to place each of their direct reports into one of the following Performance categories: Distinguished Performance, Superior Performance, Meets Performance Expectations, Partially Successful Performance, Unsuccessful Performance. Additionally, Managers will be required to note who they are considering putting up for promotion.
- Review the below resources:
- Read this entire handbook page start to finish and educate yourself on “mitigating unconscious bias”
- Review our new rating definitions and have a clear understanding of the difference between Distinguished Performance, Superior Performance, Meets Performance Expectations, Partially Successful Performance, and Unsuccessful Performance.
- Watch this calibration training video
- Review your team’s career framework to ensure you’re well versed on levels. You can also review our company-wide level framework here.
- Department Managers, Directors, and VPs
- People Partner
- (Optional) VP, People and Director, Recruiting
- (Optional, as invited) Cross-functional Leaders
Meetings are facilitated by the People Team in partnership with each department head.
The following is a typical calibration session agenda. Sessions may slightly differ depending on the size of the group being calibrated, as well as the number of calibration meeting participants.
Performance calibrations provide an opportunity to review and calibrate performance of all Teammates within a specific department or function, as a collective group. Calibration sessions are interactive discussions where:
1 Ratings: Managers will explain the rating assessment of each Teammate relative to their level and, for those Teammates who received a 1 - Distinguished Performance rating, validate the placement with tangible examples. More details.
2 Ratings: Managers will explain the rating assessment of each Teammate relative to their level and, for those Teammates who received a 2 - Superior Performance rating, validate the placement with tangible examples. More details.
4 Ratings: Managers will explain the rating assessment of each Teammate relative to their level and, for those Teammates who received a 4 - Partially Successful Performance rating, validate the placement with tangible examples. More details.
5 Ratings: Managers will explain the rating assessment of each Teammate relative to their level and, for those Teammates who received a 5 - Unsuccessful Performance rating, validate the placement with tangible examples. More details.
Calibrate Potential Outliers: Managers will explain the rating assessment of Teammates who received a 3 - Meets Performance Expectations rating but may be on the fringe of either Superior Performance or Partially Successful Performance.
Calibrate Promotion Nominations: Managers will explain any “Yes” submissions for promotions and reach a final recommendation in partnership with calibration participants. See details on promotion calibration.
Validate Final Performance Rating Submissions and Promotions
Note: The purpose of a performance calibration is not to be adversarial, but to work together to align on a set of standards to be applied to all employees during reviews ensuring the same bar is applied consistently across all teams and individuals. Additionally, the VP within the team will ultimately determine the outcome of promotion nominations in partnership with their People Partner.
- Our calibration discussions are an opportunity to focus on aligning on what the rating definitions mean at Sourcegraph. During calibration, Managers will be asked to highlight specific instances where Teammates demonstrate exceptional performance.
- Of the four required questions on our Manager-to-Direct Report Impact Review, we will focus on two as part of the Manager presentation:
- How has this Teammate’s performance mapped to the expectations of their role and level over the past 6 months? (Reference the career development framework if applicable). To what extent did they meet their commitments?
- Our conversations will focus specifically on the area(s) in which the Teammate had Distinguished Performance, Partially Successful Performance or Unsuccessful Performance.
- Managers will be asked to discuss the individuals who report to them and should come prepared to verbally share the following:
- Teammate level/title
- Performance rating
- 2-3 examples of work that supports the rating (the 2 questions bulleted above are a great source for providing these examples!)
- Summary of feedback from peers
- Explanation for why each example supports the rating
- If putting them up for promotion:
- Prior performance rating
- 2-3 examples that demonstrate their performance with the next level from their career ladder
- If they are not ready right now, how can we set them up for success in the future?
- Pro Tips:
- Utilizing the SBI model when writing reviews will better prepare Managers to present tangible evidence to support their ratings during the calibration session.
- When possible, Managers should support their tangible evidence by tying it back to the expectations outlined in the career framework
- While not required, we highly encourage Managers to write notes in the Calibration Roster in preparation for their calibration presentation (see example below).
- Each calibration presentation should be sufficiently detailed that peer Managers can make an informed judgment on the score without firsthand knowledge of the work beforehand.
- When not presenting, Managers are expected to participate in the calibration of every other Teammate presented, not just their direct reports and ICs they are directly familiar with. Participating means:
- Asking clarifying questions of the presenting Manager until you are able to make an informed determination on whether the work presented supports the score given
- Openly stating whether you agree or disagree with the score given, and why, to facilitate the discussion
- Participating in the discussion until the group is aligned on the score and the reasoning why
Q: “You mentioned that the 6 components were particularly complex - what made these more complex than you would typically expect from an IC1 Wizard?”
A: “The 6 components were lacewing flies, leeches, fluxweed, knotgrass, powdered horn of a Bicorn, and shredded skin of a Boomslang. The lacewing flies had to be brewed for 21 days beforehand, the fluxweed had to be picked at exactly midnight, and for each step she had to calculate the brewing time to the minute based on whether the pot was copper or bronze for all ingredients to work together. One minute of variance in that month and the potion would not work. That level of precision in execution exceeds what I expect from IC1 Wizards.”
Q: “You mentioned that Hermione led the project, but my impression was that Luna did the customer discovery, wrote the scoping document, and was the one driving the project. Which parts of this was Hermione responsible for?”
A: “Luna was involved at the beginning of the project, but was reassigned to another project before Hermione joined. Hermione was the one who came up with using polyjuice potion and the execution from there - Luna did not rejoin the project.”
Q: “In Hermione’s self review, she noted that when she took the potion it did not work as intended and she turned into a cat, so there was a problem in the original execution. Does this still exceed your expectations for an IC1?”
A: “Yes, Hermione made the right call to test the potion ahead of time, quickly root-caused the issue, and fixed the problem before implementation. This still exceeds my expectations for an IC1 Wizard.”
- Managers will align on the score. Managers who disagree are expected to state their position and explain why, and participate in the debate to align on the score. Once aligned, we will move to the next Teammate on the calibration roster
Participating Manager 1:
So far, I have not heard anything that I believe exceeds the high bar for an IC1 Wizard. This potion is well defined in the potion book, and executing on well-defined tasks is the expectation of all IC1s on the team.
Participating Manager 2:
I disagree. The complexity of the execution in making the potion, even if spelled out step by step, exceeds expectations for an IC1 Wizard. In the career development framework, breaking down and scoping complex tasks is an IC2 expectation.
Note: If Managers are unable to align, the VP of the division is responsible for clarifying the bar and making a decision.
- Following calibration discussions, we will review promotion nominations. Managers with promotion nominations will be asked to submit the promotion recommendation in Lattice. Please ensure the Impact Review packet and notes are submitted and provide detailed examples that support the promotion recommendation, as these will be shared with calibration attendees.
- During the calibration session, Managers will be asked to:
- Confirm they would still like to proceed with the promotion conversation, based on earlier performance/values calibration
- For individuals whose promotions we will move forward with, we will ask the group to raise concerns about promoting that individual
- The most senior leader in the meeting is responsible for getting the information they need from the group to make a final promotion decision.
Following calibration sessions, People Partners will work with VP+ level leaders to finalize compensation recommendations. Because we believe in consistently rewarding high performance, our merit increase process will tie increase percentages closely to performance/values ratings.
It’s crucial that we adhere as closely as possible to expected distributions because our merit increase budgets are based on the distribution assumptions listed below. The purpose of calibration sessions is to hold a high bar for “Exceeding High Bar” scores and reward teammates accordingly.
Likewise, final promotion decisions rest with VP+ level leaders. The calibration session is a time for VP+ level leaders to gather information they need to approve, or delay, promotion requests. VP+ level leaders are expected to follow up with Managers by the start of Phase 5 (see the Impact Review Timeline for details on phases) with final decisions, so Managers may communicate to their teams accordingly.
- At Sourcegraph, we strive to consistently reward and develop our team members fairly and equitably. As people leaders, it is our expectation that Managers calibrate teammate performance in a thoughtful manner and with an inclusive lens.
- These slides illustrate common pitfalls that may arise, consciously or unconsciously, during the performance appraisal process.
Based on the SEEDS model from the NeuroLeadership Institute
|Type of Bias|Examples (“How it shows up”)|How to Mitigate|
|---|---|---|
|**Similarity** — we prefer what is like us over what is different|“People like me are better than others”| |
|**Expedience** — we prefer to act quickly rather than take time|“If it feels right, it must be true” (other examples: Halo Effect, Confirmation Bias, Availability Bias)| |
|**Experience** — we take our perception to be the objective truth|“My perceptions are accurate”| |
|**Distance** — we prefer what’s closer over what’s farther away|“Closer is better than distant”| |
|**Safety** — we protect against loss more than we seek out gain|“Bad is stronger than good”| |