Profile of joanbp in Optimism
Posts by joanbp
-
Retro Funding 6: Self-dealing Policy
by joanbp - No Role
Posted on: Nov. 1, 2024, 7:14 a.m.
Content: I don’t know. Maybe someone else knows. :slightly_smiling_face:
Likes: 1
Replies: 0
No replies yet.
-
Retro Funding 6: Badgeholder Manual
by joanbp - No Role
Posted on: Nov. 1, 2024, 3:45 a.m.
Content:
pfedprog:
Regarding ranking, I am not sure if we can call it that way. By default application flow I am given only 1 project at a time to rate from very low to very high. Effectively from 1 to 5.
It may help you do a rough first sorting. You can always adjust later in the process. Or you can rate all projects the same initially and do a completely customized allocation directly in the ballot.
The rating is only supposed to be a tool to help you; if you don’t think it’s helpful, you don’t have to use it. (I agree that this is not entirely intuitive, but it is my understanding from last round).
pfedprog:
Also, to double check: it is not clear from the conflict of interest option whether a person who cheats faces any consequences.
You can read more about it here:
Retro Funding 6: Self-dealing Policy (Policies and Templates)
It is expected that Optimists refrain from self-dealing. While opportunities for self-dealing in the Citizens’ House will be reduced via incentive design and voting mechanisms, the Foundation may implement additional measures to discourage self-dealing, to be defined at the beginning of each Round, while these mechanisms are further developed. The Foundation has defined the below process to address self-dealing in Round 6, if needed.
Conflict of Interest Disclosure
Voters should not vote for org…
Don’t cheat. :slightly_smiling_face:
pfedprog:
Finally, I am also perplexed by how I am supposed to rationally allocate the funding between three categories, when I am encouraged to only dive into one.
I agree. I think diving at least somewhat into all categories is necessary. You need to have at least some idea of the overall size and quality of all contributions to do a proper job at budgeting.
You may discuss with other voters and use their experience too - are you a guest voter? You guys have a telegram group of your own, right? Maybe try to come up with ideas for allocation methods, and then discuss with the others. (We are encouraged to not share voting strategies between the two groups of voters - citizens and guest voters - which I find sad, but that is the way the experiment is designed, so…)
pfedprog:
Next, I would definitely need somebody to help me understand the work accomplished by one of the projects, because the impact statement is not clearly highlighted. I have not found a guideline for rating such a project.
Very curious if there is a point of contact in the application, I can reach out to ask additional questions.
I would start by posting my question in my voter telegram group. Someone else may know something. Or have ideas as to how to handle applications with unclear impact statements.
If you think there is a technical issue with the application, maybe tag Jonas. But if it’s just that the applicant is unclear, try instead to come up with what you think is a fair approach to that. (Some might decide that if there is no clear impact, then they allocate 0; others might do extra research and find that there is an impact that should be rewarded - different voters, different voting strategies).
Whatever you end up doing, I hope you share your voting strategy afterwards, here in the forum. There will probably be a dedicated thread for it. You ask good questions - I’m sure it will be interesting to read about your answers and/or remaining open questions by the end of the experiment!
Likes: 2
Replies: 0
No replies yet.
-
Retro Funding 6: Badgeholder Manual
by joanbp - No Role
Posted on: Oct. 31, 2024, 3:18 p.m.
Content:
system:
min per project: 1,000 OP
Minimum for the project to receive anything, right?
Likes: 3
Replies: 1
Replies:
- Jonas: Yes, that’s correct @joanbp
-
Retro Funding 6: Bribery Policy
by joanbp - No Role
Posted on: Oct. 29, 2024, 4:08 p.m.
Content:
system:
The Citizens’ House will then have one week to veto any enforcement decisions proposed by the Foundation.
Same question as for the self-dealing rule:
Simple majority?
Likes: 0
Replies: 0
No likes yet.
No replies yet.
-
Retro Funding 6: Self-dealing Policy
by joanbp - No Role
Posted on: Oct. 29, 2024, 4:02 p.m.
Content:
system:
Before any enforcement decision is subject to Citizens’ House veto
For clarity:
By simple majority, or…?
Likes: 1
Replies: 0
No replies yet.
-
Joan’s RPGF5 Reflections
by joanbp - No Role
Posted on: Oct. 16, 2024, 3:04 a.m.
Content:
ccerv1:
Of course, the hard part is that there has to be some public proof of contribution.
Yeah. That’s a tough one.
On the one hand - not all kinds of contributions inherently come with any kind of public proof. Some contributions don’t even take place in public. That’s a fact of life.
On the other hand - we do need to be able to justify our decisions, and basing them on objective data is the least contentious way to do that.
I will give it some thought.
It feels to me like we should strive to make systems that can handle the complex reality; not reduce reality to what our systems can handle.
Maybe we can come up with ways of making objective data that speak to the things we can’t directly count and measure. Some kind of phenomenological approach, maybe.
In a way that’s central to what Impact Garden and Devouch do. We can’t always directly measure the usefulness of a project, but we can collect subjective statements from a lot of people and this then becomes our objective data: “These people said this or that under these conditions, and here is what we know about each of them…”
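To make that a bit more concrete, here is a toy sketch in Python - entirely my own illustration, not how Impact Garden or Devouch actually compute anything, and all names and numbers are hypothetical - of how a pile of subjective statements, weighted by what we know about each attester, can be boiled down into a single aggregate:

```python
# Toy illustration: subjective attestations become "objective" data once we
# record who said what and weight it by what we know about each attester.
# (Hypothetical model - not Impact Garden's or Devouch's actual method.)
from dataclasses import dataclass

@dataclass
class Attestation:
    attester: str      # who made the statement
    project: str       # which project it is about
    useful: bool       # their subjective judgment
    reputation: float  # what we know about them, say 0.0 - 1.0

def usefulness(attestations: list[Attestation], project: str) -> float:
    """Reputation-weighted share of attesters who found the project useful."""
    relevant = [a for a in attestations if a.project == project]
    total = sum(a.reputation for a in relevant)
    if total == 0:
        return 0.0
    return sum(a.reputation for a in relevant if a.useful) / total

statements = [
    Attestation("alice", "project-x", True, 0.9),
    Attestation("bob", "project-x", False, 0.4),
]
print(usefulness(statements, "project-x"))  # ~0.69
```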
The GitHub contributor count and similar objective data points taken from the public internet are a great place to start! But there must be ways to
- create objective data from the subjective (or just not inherently public) parts of reality, and
- make systems that can acknowledge the fact that important parts of life take place in ways that are not visible to the algorithm - but relevant to it.
Possibly this is where human citizens will truly get a chance to make themselves indispensable. Because we see things that the algorithm doesn’t, and they matter to us. And if we are smart, we make the algorithm care about what matters to us.
I know. That was more philosophy than data science. But it’s just a starting point. Like the github contributor count. Your words made me think. I’ll think some more. :slightly_smiling_face:
Likes: 2
Replies: 0
No replies yet.
-
Retro Funding 5: Voting Rationale Thread
by joanbp - No Role
Posted on: Oct. 15, 2024, 1:49 a.m.
Content: Hi Catjam / Meg
Thanks for sharing your reflections!
catjam:
40% Eth core contributions
30% OP stack R&D
20% OP stack tooling
I just have to ask…
Did you really vote like this?
Because then we just uncovered a bug in the UI… Surely it should enforce that the percentages add up to 100%. :wink:
Likes: 0
Replies: 1
No likes yet.
Replies:
- catjam: Oh gosh… thank you for catching! That was a typo, I promise the UI enforced 100%. Just edited with accurate allocations.
-
Retro Funding 5: Voting Rationale Thread
by joanbp - No Role
Posted on: Oct. 13, 2024, 6:47 a.m.
Content: My voting rationale can be found here.
(Reflections on the review process are in the previous post, here.)
Likes: 3
Replies: 0
No replies yet.
-
Joan’s RPGF5 Reflections
by joanbp - No Role
Posted on: Oct. 13, 2024, 6:10 a.m.
Content: Voting Process
Voting format and my role as a voter
In RPGF 5, the overall goal was to reward OP Stack contributions, i.e. the core technical infrastructure of Optimism and the Superchain, including its research and development.
Non-expert citizens were separated from expert citizens and guest voters. Experts were instructed not to interact with non-experts, so as not to mess with the experiment design.
I was in the group of non-experts.
Within each main group, three sub-groups were formed to address each of the three round categories: Ethereum Core Contributions, OP Stack R&D and OP Stack Tooling.
I was in the Ethereum Core Contributions sub-group.
As it happens, I had been assigned to the other two categories as a reviewer, so I got a nice overview of all applications, which helped me during the voting process.
Voters were asked to vote on a) the total round budget, b) the splitting of that budget between the three categories, and c) the allocation of funds to projects within the category to which the voter was assigned.
Voting rationale
Total round budget
I voted for the maximum allowed round budget of 8M OP.
While there were fewer eligible applications than expected, my impression is that the quality was high. Even in a larger pool, these applications would probably have attracted most of the funding (in RPGF 3, the top 40 OP Stack projects received 6.5M+ OP in funding).
[Image: Budget]
Especially for Ethereum Core Contributions and OP Stack R&D, we are looking at massive open source software projects with hundreds of GitHub contributors each.
There are other contributors in the Optimism ecosystem who deserve retro funding, but I can think of no one who deserves it more than these developer teams. Without them there would quite literally be no Superchain.
Thus, whereas I do have some doubts about the 10M OP awarded for any kind of onchain activity in RPGF 4, and the 10M+ OP recently distributed in Airdrop 5, I believe that RPGF 5 aims to reward precisely the kind of public goods that retroactive public goods funding was originally invented to support.
What eventually made me settle on the maximum budget of 8M OP was this comparison with RPGF 4 and Airdrop 5. Let’s keep our big-picture glasses on:
RPGF was not designed to directly incentivize onchain activity or demand for blockspace or sequencer revenue, but rather to secure the public goods that are needed to create value for developers and users alike and thus, over time, support more and better (values aligned) onchain activity.
That’s the flywheel we should be aiming for.
So. The Foundation had hoped or expected to see more RPGF 5 applications; let’s incentivize more such projects (and applications) in the future.
Category budget split
I voted to allocate 40% of the total budget to Ethereum Core Contributions (30 projects), 45% to OP Stack R&D (29 projects) and 15% to OP Stack Tooling (20 projects).
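(Assuming the full 8M OP round budget, that split works out to roughly 3.2M OP, 3.6M OP and 1.2M OP per category, respectively.)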
The first two categories had more applications, and those projects were generally bigger, more substantial and had more GitHub contributors than those in the OP Stack Tooling category. They were also more consistently free to use. The budget should reflect all of that.
I gave some extra weight to OP Stack R&D, based on the rationale that Ethereum contributors can apply for funding across the entire Ethereum ecosystem, whereas OP Stack R&D must be sustained by the Superchain.
Project allocations (Ethereum Core Contributions)
My process for allocating funds within the category assigned to me was:
1. Read all applications.
2. Group similar applications (programming languages, consensus and execution clients, major guilds/organizations/research groups, library implementations, etc.).
3. Consider the relative impact of these groups and the projects within them.
After this, I used the voting UI to sort the projects into impact groups and manually adjusted the suggested allocations.
I used the metrics provided by Open Source Observer (especially the GitHub contributor count, star counts and the age of the projects), as well as some basic research of my own around market penetration and such for context.
I also made a principled decision to support diversity (of languages, implementations, etc.) by rewarding certain ‘smaller’ implementations (smaller market share, fewer contributors) equally with some of their larger ‘competition’. Diversity and alternatives will keep us alive in the long run.
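For illustration, here is a minimal sketch in Python of the kind of allocation logic I approximated by hand: distribute the category budget in proportion to impact scores, with a per-project minimum below which a project receives nothing. Project names, scores and the floor value are hypothetical here - this is not the voting UI’s actual algorithm:

```python
# Illustrative sketch only - hypothetical scores, not the voting UI's algorithm.
# Distribute a category budget in proportion to impact scores
# (1 = very low .. 5 = very high), enforcing a per-project floor below
# which a project receives nothing at all.

CATEGORY_BUDGET = 3_200_000   # e.g. 40% of an 8M OP round budget
MIN_PER_PROJECT = 1_000       # hypothetical floor (cf. the "min per project" rule in Retro Funding 6)

impact_scores = {
    "project-a": 5,
    "project-b": 3,
    "project-c": 1,
}

def allocate(scores: dict[str, int], budget: float) -> dict[str, float]:
    total = sum(scores.values())
    alloc = {name: budget * s / total for name, s in scores.items()}
    # Projects under the floor get nothing; their share is redistributed.
    # (Dropping a project only raises the remaining shares, so one pass suffices.)
    kept = {n: s for n, s in scores.items() if alloc[n] >= MIN_PER_PROJECT}
    total_kept = sum(kept.values())
    return {n: budget * s / total_kept for n, s in kept.items()}

for name, op in allocate(impact_scores, CATEGORY_BUDGET).items():
    print(f"{name}: {op:,.0f} OP")
```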
I considered previous funding but decided to use it only to support my general understanding of the ‘scale’ of the projects. RPGF 5 is only meant to reward recent impact, so there should be no need to subtract funding given in RPGF 3. The rules offered no guidance on how to handle applications that had also been rewarded in RPGF 4 using the same impact time scope as RPGF 5. Besides, only a few projects are careful to specifically point out their recent impact in the application, so it was hard to use this as a basis for nuanced allocation.
I would like to see future versions of the application questionnaire require applicants to describe a) their overall impact AND b) their impact within the round’s specified time frame. And as mentioned in my previous post, the round eligibility rules should make it clear how reviewers and voters are expected to evaluate projects that have already received retro funding in other retro rounds with overlapping time scope. These improvements would help everyone better understand what impact voters should be rewarding.
Voting UX
Cohesion
The voting UI clearly improves from round to round.
In this round, I liked the more cohesive voting experience where the UI offered to take us step by step through the process of choosing a budget, scoring impact and finally allocating funds.
Flexibility
I enjoyed the flexibility of being able to go back and re-evaluate the budget after having studied the categories and projects more carefully. Similarly, there was nice flexibility in being able to pick a basic allocation method and then customize to your heart’s content. And it was even possible to re-submit your ballot as a whole if you had a change of heart after having submitted it the first time.
I missed having that same flexibility in the impact scoring step; there was no link to take you back to the previous project, and no way to reset and go through the impact scoring process as a whole again. In theory you would only perform this step once, but when you work with a new UI, it is always preferable to be able to explore a bit and then go back and “do it right”. Conversely, it is never nice to be led forward by an interface that will not allow you to go back and see what just happened, or change a decision.
(As a side-note, being able to go back and possibly reset things also makes it easier to test things before the round as it allows you to reproduce and explore errors before reporting them).
Speaking of flexibility, I would also have liked to be able to skip the impact scoring step entirely and go directly to allocation using some custom method of my own.
Furthermore, I personally find it very difficult to think about impact in an absolute sense, as is necessary when scoring projects one by one without first going through all of them. I understand and appreciate that this design was a deliberate choice, but maybe in a similar round in the future there could be an alternative impact scoring option that presents an overview of projects in one list view, with a second column for setting the impact scores (potential conflict-of-interest declarations could be in a third column, or an option in the impact score column). The project names in the list should link to the full project descriptions, to be opened in a separate tab.
I imagine it would be amazing for voters to be able to choose to assess projects one by one, or by looking at the category as a whole and comparing the relative impact. You might even allow people to go back and forth between the two processes/views and get the best of both worlds.
(Pairwise already offers a third option of comparing two projects at a time and leaving it to the algorithm to keep track of things for you. Offering a choice between multiple methodologies is awesome. Being able to try out all of them, go back and forth and mix and match would be incredible!)
Metrics
I loved that this round experimented with providing the human voter with both objective/quantitative data (from Open Source Observer) and qualitative data (from Impact Garden). Another provider of qualitative testimonials is Devouch.
The qualitative data available is still too sparse to be really useful, but I’m sure that will change over time.
For me, this combination of relying on responsible, human, gracefully subjective and hopefully diverse and multidimensional decisions made on the basis of objective data, presented in a clear and helpful way, is the way to go.
In that sense, I think RPGF 5 was the best round we have had so far, and I hope to see lots and lots of incremental improvement in the future, continuing down that road.
One specific thing that I would love to see is applicants declaring an estimate of the number of people who have contributed to their project - or maybe the number of people who stand to receive a share of any retro rewards they might get? Obviously, rewards are free profit, and projects can do with them as they please (I like that), but it would be good context for voters. In this round, OSO kindly provided GitHub stats, which definitely work as useful heuristics, but a project could have many more contributors than just the people who commit things to GitHub. Some types of projects are not code projects at all. It would be very cool to know more about the human scale of the operations that funding goes towards.
Other notes
Discussion among badgeholders
I felt that there was a remarkable lack of debate among voters this time. The Telegram channels were almost entirely silent. There was a kickoff call and one Zoom call to discuss the budget - only a handful of voters participated in these. (Thanks to Nemo for taking the initiative!)
I don’t know whether the silence was partly due to the official request not to discuss voting in public, since that could ruin the experiment with guest voters.
In any case, I find it a shame. Surely, better decisions are made when people exchange ideas and learn from one another. And working together tends to be a pleasant way to spend time on a subject.
As for the guest voter experiment, I look forward to learning more when it has been evaluated! In future, I would love to see some experimentation with mixing experts and ‘regular’ citizens and encouraging discussions and learning on both sides.
Transparency
I like the balance struck by having public voting for the total budget and the category split, but private voting for the individual project allocations.
Time spent
The UI was nice and efficient. As mentioned, I did some reading and pre-processing of my own before using the UI, and there were the two Zoom calls. And some time is needed afterwards for reflection and evaluation of the process (resulting, among other things, in this post).
In total, I may have spent about 10 hours on the voting process of RPGF 5 .
It is relevant to note that I had the benefit of already knowing the projects of the two other categories and having spent time on the eligibility criteria of the round during the review process. Without this, I would have needed more time for reading and researching prior to voting.
In future rounds, I would be happy to spend a bit more time on (sync/async) deliberations with other badgeholders.
Likes: 6
Replies: 2
Replies:
- catjam: as a badgeholder – thanks for these thoughtful reflections! i was surprised by the low volume of discussion as well, maybe a contributing factor is that the telegram groups were split in two?
I also contribute at gitcoin, which developed the voting UI, and I felt similarly about the workflow for going back and re-evaluating projects! good notes for future rounds.
- ccerv1: joanbp:
In this round, OSO kindly provided GitHub stats, which definitely work as useful heuristics, but a project could have many more contributors than just the people who commit things to GitHub.
Hey Joan, we have been experimenting with different variants of “contributor”. Of course, the hard part is that there has to be some public proof of contribution. When it comes to GitHub contributions, we have versions that look at code contributions as well as non-code contributions (eg, opening issues, commenting on issues, etc). We are also experimenting with different versions based on patterns of activity (eg, someone who comments on an issue once is not the same as someone who is responding to issues regularly). Personally, I loved the collaboration with OpenRank (see Create your own Developer Reputation Scores - OpenRank Web Tool Intro) as a way of creating a pool of the most trusted developers. Curious for your thoughts on this and ideas for future iterations!