In the last RnDAO Collabberry epoch, issues arose from differing interpretations of whether someone had actually worked with a peer. Some team members felt it was unfair to be assessed by individuals who do not work directly with them and may therefore lack the full picture of their contributions.
Context is not a binary concept where one either has it or does not; people have varying levels of context with their teammates. This variability can lead to feelings of unfairness, particularly when someone with very little context gives scores that negatively impact a peer's overall evaluation, despite not fully understanding the peer's work.
Introduce a self-stated level of context that each contributor declares about the peer they are assessing. This context weight will influence the final compensation formula, ensuring that evaluations are more reflective of actual understanding and familiarity with the peer's contributions.
The P2P assessment form will now include one more input for each contributor: Context Range.
Instead of using a simple average of the scores received by each contributor, calculate a Weighted Average Score that accounts for the context weight. This is done by multiplying each performance score by its corresponding context weight, summing the results, and dividing by the sum of the context weights.
$$ \text{WAS} = \frac{\sum (S_i \times W_i)}{\sum W_i} $$
Where $S_i$ is the performance score given by contributor $i$, and $W_i$ is the self-stated context weight that contributor $i$ declared for the peer being assessed.
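As a minimal sketch, the WAS formula can be expressed in a few lines of Python. The function name and the score/weight scales here are illustrative assumptions, not part of the specification:

```python
def weighted_average_score(scores, weights):
    """Compute WAS: multiply each performance score by its context
    weight, sum the products, and divide by the total weight.
    `scores` and `weights` are parallel lists, one entry per assessor."""
    total_weight = sum(weights)
    if total_weight == 0:
        # No assessor declared any context for this peer.
        raise ValueError("at least one assessor must declare nonzero context")
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

# A score from a low-context assessor (weight 0.2) moves the result
# far less than the same-scale score from a high-context one (weight 1.0):
print(weighted_average_score([5, 2], [0.2, 1.0]))  # -> 2.5
```

Note how the low-context assessor's 5 only lifts the result to 2.5, close to the high-context assessor's 2, which is exactly the fairness property the proposal is after.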
The Salary Adjustment Multiplier (SAM) will now be calculated using the Weighted Average Score:
$$ \text{SAM} = (\text{WAS} - 3) \times \text{PAR} $$