Original:
- Map the different stages of community-led decision-making
- Map the key processes at each governance stage and recommended approaches (community process) for each, understanding the diversity of stakeholders involved
- Map how the different types of reputation overlap with those stages and how they can be strategically applied
Status Update
The first workshop was held on DD/MM and produced both a refined process map and a list of key pain points. The map looks like this (if the images don’t render well, follow this link):
The key pain points are:
- Too many proposals to review
  - Some were only deemed eligible on a technicality
  - Having to review things I am not an expert in
- The reviews are ignored
  - The community and expert reviews are supposed to help voters, but the voting isn't constrained by the ratings
- Too hard to evaluate
  - It takes a lot of knowledge and work to evaluate
  - Proposal milestones are too vague, creating pressure to "let them through"
- Visible farming
  - Vote farming: marketing spam to "vote for me", sometimes giving new wallets AGIX
  - Grant farming: some projects are there just to get a quick first milestone payment, then they disappear
The second step was to select and prioritize the different types of reputation needed to solve these pain points. Our second workshop was held on 17 July but was lightly attended. The work of collecting reputation types and their applications should continue asynchronously, but the concepts we collected certainly cover the highest-priority needs. We will begin crafting our design document with these 3-4 top requirements, and we can learn from their use and invent others.
The map can be found here and looks like this:
The highest-priority types of reputation are listed below (a small scoring sketch follows the list):
- Reliability
  - In work (did the proposals they led or were team members on have good outcomes?)
  - In reviewing (were proposals they rated highly successful, and were proposals they rated low but funded anyway unsuccessful?). This is high risk because it depends on a high-quality approval process to produce outcome data that is not biased by proposal popularity.
  - In evaluating (do they evaluate well, so that their ratings on deliverable clarity can be given high weight?)
- Expertise
  - Does the community believe this person is knowledgeable about a domain or skill?
- Context
  - Is this person actively collaborating with multiple parts of the community?
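To make these types concrete, here is a minimal scoring sketch, assuming a simple per-contributor record of past outcomes, endorsements, and collaborations. All class names, fields, caps, and weights below are illustrative assumptions, not the workshop's agreed design; the actual weighting is a decision for the upcoming design document.

```python
from dataclasses import dataclass, field


@dataclass
class Contributor:
    # Reliability in work: outcomes of proposals this person led or worked on.
    proposals_completed: int = 0
    proposals_failed: int = 0
    # Reliability in reviewing: how often their high/low ratings matched outcomes.
    reviews_correct: int = 0
    reviews_total: int = 0
    # Expertise: community endorsements per domain (domain -> endorsement count).
    endorsements: dict = field(default_factory=dict)
    # Context: number of distinct community groups this person is active in.
    groups_active_in: int = 0


def reliability_in_work(c: Contributor) -> float:
    total = c.proposals_completed + c.proposals_failed
    return c.proposals_completed / total if total else 0.0


def reliability_in_reviewing(c: Contributor) -> float:
    # High risk, per the workshop notes: only meaningful if the outcome data
    # is not biased by proposal popularity.
    return c.reviews_correct / c.reviews_total if c.reviews_total else 0.0


def expertise(c: Contributor, domain: str, cap: int = 10) -> float:
    # Normalise endorsement counts to [0, 1]; the cap of 10 is an assumption.
    return min(c.endorsements.get(domain, 0), cap) / cap


def context(c: Contributor, cap: int = 5) -> float:
    return min(c.groups_active_in, cap) / cap


def combined_score(c: Contributor, domain: str) -> float:
    # Illustrative weights only.
    return (0.4 * reliability_in_work(c)
            + 0.2 * reliability_in_reviewing(c)
            + 0.3 * expertise(c, domain)
            + 0.1 * context(c))


# Example: weight a reviewer's influence on AI proposals by their combined score.
alice = Contributor(proposals_completed=3, proposals_failed=1,
                    reviews_correct=8, reviews_total=10,
                    endorsements={"AI": 6}, groups_active_in=2)
print(round(combined_score(alice, "AI"), 2))  # 0.68
```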
Two new topics were interesting and could be used as a simple filter early in the process (a minimal filter sketch follows the list):
- In good standing
  - Has this person led a team that failed to complete committed work?
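Below is a minimal sketch of that early filter, assuming a record of past team commitments is available. The Commitment type, its fields, and the helper names are illustrative assumptions, not an agreed schema.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Commitment:
    person_id: str
    was_team_lead: bool
    completed: bool  # was the committed work delivered?


def in_good_standing(person_id: str, history: Iterable[Commitment]) -> bool:
    """False if this person led a team that failed to complete committed work."""
    return not any(
        c.person_id == person_id and c.was_team_lead and not c.completed
        for c in history
    )


# Example: screen proposers before the heavier review and rating stages.
def eligible_proposers(proposer_ids, history):
    history = list(history)  # allow reuse across proposers
    return [p for p in proposer_ids if in_good_standing(p, history)]
```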