Project name: Prime Rating
Author name and contact info (Discord): salomé#0632
I understand that I will be required to provide additional KYC information to the Optimism Foundation to receive this grant: Yes
L2 recipient address: TBD
Grant category: Tooling (Public Goods)
Is this proposal applicable to a specific committee? No
Project description (please explain how your project works):
Prime Rating is building a platform that enables a permissionless review framework for evaluating the fundamental quality and technical risks of web3 projects. Our mission is to foster transparency within the DeFi ecosystem and beyond by enabling community-driven research through a unique “rate-2-learn & earn” approach. Everything is fully open source and enables anyone with the right expertise to contribute, learn, level up and earn rewards.
Through our methodology, we create in-depth assessments of protocols, which are displayed as simple letter ratings (from A+ to D). Our goal is to fast-track coordination within the web3 ecosystem and facilitate decision-making for users, investors and builders.
The rating reports are created in a seasonal approach. Each season is a 5-6 week long contest, where participants get rewarded for successful submissions and win additional prizes based on quality and other criteria.
The results of previous seasons can be seen on our app in the form of 180+ protocols that have been reviewed and are regularly updated. We currently have three categories live, i.e. DeFi, Metaverse and ReFi. We would be more than happy to extend coverage of our ratings to protocols building on Optimism.
Project links:
Website: Prime Rating
Twitter: https://twitter.com/Prime_Rating (spinning off from https://twitter.com/PrimeDAO_)
Discord/Discourse/Community: Discord
Please include all other relevant links below:
Blog: https://medium.com/primedao/tagged/prime-rating
How to become a Rater: Permissionless Rating - Prime Rating
Additional team member info (please link):
Salome: https://twitter.com/SalomeBernhart
Lavi: https://twitter.com/Lavi_54
Thomas: https://twitter.com/xm3van
Luuk: https://twitter.com/LuukDAO
Please link to any previous projects the team has meaningfully contributed to:
Our core team members have previously contributed to and led initiatives in several web3 projects, such as Index Coop, Balancer, Idle Finance, Yield Guild (YGG), TE, TEC, Longtail Financial, Paladin, the DAOist, Kolektivo and others. In addition, we bring academic-grade research experience as well as crypto-native investment research skills to the table.
Our team members are also part of several web3 builder communities, such as Kernel, Safary Club or Encode.
With regards to contributions to Prime Rating, some previous supporters of our events are 1kx, Celo and MetaPortal. Moreover, we have a strong partnership with DeFi Safety for coverage of technical reviews.
One of our latest public research contributions was this research paper about legal structures for DAOs: Costs and Benefits: Thinking Through Legal Structures for DAOs — PrimeDAO
And this research about enabling collateral in DeFi lending: Enabling Collateral in DeFi Lending — Why Your Favorite Token Might Not be... (Medium, 8 Jul 2022, authors: Lavi & Dabar90)
Relevant usage metrics (TVL, transactions, volume, unique addresses, etc. Optimism metrics preferred; please link to public sources such as Dune Analytics, etc.):
180+ protocols evaluated in DeFi, Metaverse & ReFi
More than 350 unique fundamental and technical reports written (protocols can be reviewed more than once by multiple raters)
Over 50 unique aspects evaluated per protocol (we evaluate a protocol’s value proposition, tokenomics, team, governance, code quality, security, documentation, testing and more)
7 rating events with over 60 raters contributing (see on-chain reputation)
~3k monthly views of our reports (with no marketing)
Competitors, peers, or similar projects (please link):
We are not aware of any direct competitors that do similar project deep-dives and token reviews as we do. And as far as we know, there is no competitor offering full ecosystem coverage of projects building on Optimism, but there are other projects that create ratings:
https://baserank.io/
List of Cryptoassets by Rating - Wikirating
Crypto Coins - Weiss Ratings
https://kryptview.com/
Blockchain, Crypto Data, News & Ratings - TokenInsight
However, most of these ratings are based on other criteria than fundamental and technical quality reports.
Is/will this project be open sourced?: Yes
Optimism native?: No
Date of deployment/expected deployment on Optimism: TBD - We expect to move our operational part to Optimism in October.
Ecosystem Value Proposition:
What is the problem statement this proposal hopes to solve for the Optimism ecosystem?
Prime Rating’s proposal is based on the idea that comprehensive, unbiased reviews and assessments of protocols are a mandatory requirement for a blockchain focused on common goods. We see our products and services as a complementary public good that helps users navigate an extremely fast-moving, exploratory ecosystem where it is very cumbersome to keep up with all the developments.
Our vision is to provide all Optimism users access to important information in an easy, professional-grade, and actionable format. Applying this strategy to all our products and services is how Prime Rating aims to reduce socioeconomic inequality.
We decided to reach out to Optimism because it stands out not only for its capacity to scale but also for its commitment to pursuing the vision of decentralized public goods.
Together, we believe that we can help foster a deeper culture of full transparency and increase safety, usability, and trust within Web3.
How does your proposal offer a value proposition solving the above problem?
Prime Rating creates much-needed transparency on quality and risk in DeFi. We aim to introduce a new Rating framework for Optimism, which will enable users to curate projects building within the Optimism ecosystem and sort them by quality. For the user, this means a powerful feature to better navigate around pitfalls and find the projects that actually have something to offer, according to their risk appetite and fit within the broader public goods ecosystem.
In the end, we will contribute to a free, improved, and more resilient experience that increases user retention on Optimism. Moreover, we believe that our value proposition can help Optimism as a whole, as we will generate insights on the health of projects building on Optimism, which is an indicator of its overall ecosystem health. This can also include the creation of a regular ecosystem report, to highlight developments and uncover potential gaps.
At the same time, we aim to foster income generation through our API and add-on services such as ratings on demand, custom research, and potentially advisory services. Imagine Prime Rating as a potential research hub dedicated to the Optimism ecosystem that can be leveraged for more than protocol deep-dives in the future.
Why will this solution be a source of growth for the Optimism ecosystem?
We believe the following key features will create sustainable sources of community growth, user growth and retention, and protocol growth:
Improved user experience on Optimism, by providing a curated project overview and enabling new features (e.g. sorting the dApp overview by rating score, a verified tick for protocols building on Optimism, informing on the state of projects, etc.).
New opportunity for Optimism’s community and analysts to contribute towards a meaningful mission, improving the ecosystem and making it more resilient.
Unique, commons-oriented review framework for permissionless coverage of the full Optimism ecosystem, enabling easy and fast orientation for users, builders, and contributors.
Attractive rewards and prizes for all participants, attracting the best analysts (~75% of the grant will be used directly to reward community contributions).
Foster full transparency about quality, risks, and impact for projects on Optimism, thus improving partnership and coordination management between protocols building on Optimism.
There is a real problem of voter fatigue: it is hard to read every proposal. To help you stay an informed voter, Prime Rating provides a highly sophisticated TLDR with its rating scores.
Free learning effect for participants empowered via our review framework. This education for the OP community is an additional public good that comes with our events.
Has your project previously applied for an OP grant?: No
Number of OP tokens requested: 220,000
Did the project apply for or receive OP tokens through the Foundation Partner Fund?: No
If OP tokens were requested from the Foundation Partner Fund, what was the amount?: n/a
How much will your project match in co-incentives? (not required but recommended, when applicable): ~1:1. As in previous events, raters are awarded with OP and D2D tokens. In addition, raters receive a non-transferable experience and governance token called RXP, as well as POAPs for participation and awards (see blog post from past event).
Proposal for token distribution:
How will the OP tokens be distributed? (please include % allocated to different initiatives such as user rewards/marketing/liquidity mining. Please also include a justification as to why each of these initiatives align with the problem statement this proposal is solving.)
~75% of the funds will be used to reward participants during the rating events for Optimism dapps. Upon successful submission, i.e. once a report has passed governance, the rater is rewarded with $150 in OP + 200 D2D (rewards can increase with higher levels). In addition, raters who submit the most reports, or the best reports in terms of quality, are awarded additional prizes.
All insights generated during these events will be freely accessible via the website and our API, to which we’ll grant access for Optimism-related information sites.
15% will be used to create an Optimism-specific framework that helps to evaluate its ecosystem. This will require partnerships with other protocols on Optimism to review the framework.
The remaining 10% will be used to cover some of the operational effort to run the rating events.
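As a rough illustration of this split (a minimal sketch; it uses only the 220,000 OP requested above and the percentages listed here, so the final amounts will follow whatever is ultimately granted):

```python
# Minimal sketch of the proposed allocation (figures taken from this proposal;
# the final amounts depend on the grant actually awarded).
GRANT_OP = 220_000

allocation = {
    "rater rewards (rating events)": 0.75,
    "Optimism-specific framework": 0.15,
    "operations (running the events)": 0.10,
}

for bucket, share in allocation.items():
    print(f"{bucket}: {share * GRANT_OP:,.0f} OP")
# rater rewards (rating events): 165,000 OP
# Optimism-specific framework: 33,000 OP
# operations (running the events): 22,000 OP
```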
TLDR:
The OP tokens will be used to facilitate 4 to 6 rating events (e.g. 4 DeFi + 2 Metaverse contests) over a period of approximately 9-12 months.
During these events, we will host 8-12 expert sessions (workshops or AMAs) to educate the community on fundamental analysis and risks in DeFi and the Metaverse.
Each event will have a kick-off session, where we’ll explain in detail everything that is needed to participate.
We’ll set up dedicated communication channels to support the Optimism community and the raters, specifically to facilitate a great experience during the contests.
To promote the events and to attract the best talent, we’ll run regular social media and marketing campaigns, including Twitter pushes.
In terms of marketing, we’ll of course also place the Optimism logo on our website.
We’ll also host Twitter Spaces to share insights, and if requested we’re more than happy to produce 2-3 research articles about overall findings, condensing the insights generated via the protocol deep-dives.
Over what period of time will the tokens be distributed for each initiative? Shorter timelines are preferable to longer timelines. Shorter timelines (on the order of weeks) allow teams to quickly demonstrate achievement of milestones, better facilitating additional grants via subsequent proposals.
Over a time period of 9-12 months, depending on how many successful fundamental rating report submissions are received from the community.
Please list the milestones/KPIs you expect to achieve for each initiative, considering how each relates to incentivizing sustainable usage and liquidity on Optimism. Please keep in mind that progress towards these milestones/KPIs should be trackable.
M1 - Customise the FA report template and adjust infrastructure to enable coverage of Optimism-based protocols
M2 - Organise the first rating event within 1 month of receiving the grant
M3 - Ensure initial coverage of at least 30-35 protocols via the first and second events
M4 - Grow the community branch dedicated to Optimism to at least 15 raters who regularly engage and continuously write and update protocol reviews
M5 - Increase coverage to at least 50 protocols by the end of Q4 2022
M6 - Have the newly created ratings shared via API integration with at least two information outlets dedicated to Optimism (in addition to a real-time updated dashboard in our rating app)
M7 - Create an ecosystem report summarising the insights generated from the ratings
M8 - Ensure continued updating and coverage growth over Q1 & Q2 2023, and update the ecosystem overview report as new insights are gained
It’s our goal to regularly report on progress by sharing updates in this forum.
Why will incentivized users and liquidity on Optimism remain after incentives dry up?
Prime Rating helps users navigate a space without boundaries and with plenty of room for exploration: we guide users to do their own research before interacting with new dApps. Interacting with apps on Optimism should work flawlessly. Our platform will continue to be updated as a means of discovery for users looking to use applications on Optimism. In addition, the data and ratings that Prime Rating publishes will continue to exist on IPFS and remain useful for users who seek to interact on Optimism.
In a space where most information is public and code is open source, the value of data lies in its curation, sense-making, and application in the right context. Prime Rating already offers several services intended to fully sustain itself in the future: specific Report on Demand (RoD) requests, general research requests, copywriting, and our API, which allows us to open our rating data to an even wider audience.
Also, in the near future, we are interested in launching a framework to facilitate deeper synergistic relations, help in the evaluation of governance proposals, and build voting power between the two communities.
Hey! you can update your proposal to be evaluated by the new governance committees.
Update your proposal with the new template:
Grant Proposal Template [OLD] - Policies and Templates
Hi hi @AxlVaz thanks for pinging me, I just updated the proposal :slight_smile:
Looking forward to the feedback from the Governance committees!
Excited to see this proposal go to the next phase!
The Optimism ecosystem is vibrant, and there is a lot of overlap in our shared focus on advancing public goods. I’m sure the Prime Rating process will help set a benchmark for Public Goods projects and any other project on Optimism and speed up coordination.
Look forward to contributing :red_circle::red_circle:
Could you expand on your partnership with DeFi Safety (more out of curiosity, and about the type of support they provide in technical reviews)?
What are the levels and increases in $OP token rewards?
Could you provide a more detailed breakdown of how you got the grant size? $215k seems like a hefty amount.
What operational efforts will be covered in the 10%? Op costs are fine, I just want to understand where it's being directed.
How long does it normally take an average rater to complete a report?
Thanks @Bobbay_StableLab for your feedback and the great questions; let me try to answer them.
Bobbay_StableLab:
Could you expand on your partnership with DeFi Safety (more out of curiosity, and about the type of support they provide in technical reviews)?
Sure, DFS is a founding partner of Prime Rating; their technical reports have been part of our rating framework from day one. To this day they provide basically all technical reviews, i.e. their scores make up 50% of the overall rating. In theory, the technical report is also open source and can be used by anyone to evaluate protocols, but in practice most raters are primarily familiar with the fundamental reports.
Bobbay_StableLab:
What are the levels and increases in $OP token rewards?
The starting level is $150 in OP + 200 D2D for a successful report (meaning it passed the governance vote). The higher a rater ranks, the more rewards can be unlocked, e.g. +10% for Graduates, +20% for Masters and up to +100% for Legends (the full table is in our docs). We also reward the reviewers (min. Master level), who support all raters with a peer review, with $100 in OP plus 100 D2D, so beginners profit from valuable feedback from more experienced analysts. It’s also important to note that we incentivize high-quality reports through prizes in the range of $1,000 to $2,000 in OP for the best reports and for the most submissions during a season. This is also matched in D2D tokens.
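To illustrate, here is a minimal sketch of these tiers (the starting-level name and the assumption that the D2D component scales with the same bonus are illustrative only; the full table in our docs is authoritative):

```python
# Minimal sketch of the reward tiers described above. The base reward and
# bonus percentages come from this reply; the starting-level name and the
# assumption that D2D scales with the same bonus are illustrative.
BASE_OP_USD = 150  # USD value paid in OP per accepted report
BASE_D2D = 200     # D2D tokens per accepted report

level_bonus = {
    "Starting level": 0.00,
    "Graduate": 0.10,
    "Master": 0.20,
    "Legend": 1.00,
}

for level, bonus in level_bonus.items():
    print(f"{level}: ${BASE_OP_USD * (1 + bonus):.0f} in OP + {BASE_D2D * (1 + bonus):.0f} D2D")
# Starting level: $150 in OP + 200 D2D
# Graduate: $165 in OP + 220 D2D
# Master: $180 in OP + 240 D2D
# Legend: $300 in OP + 400 D2D
```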
Bobbay_StableLab:
Could you provide a more detailed breakdown of how you got the grant size? $215k seems like a hefty amount.
Sure, as mentioned we aim to use the grant to create deep-dives of protocols building on Optimism. From previous events we organised, we know that the costs are between $30-40k per season (depending on the number of participants); most of it is used to reward participants. This typically allows for coverage of 30-40 protocols per season. At the current token value of OP, this would enable us to conduct 5-6 seasons, which we’d hold over a time period of 9-12 months, resulting in around 150-240 protocol ratings. The list of protocols rated can be co-curated by the OP ecosystem.
However, if the ask is deemed too high, we’re open to adjusting and reducing the number of planned seasons to only 3 to 4, which still enables coverage of many projects.
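As a quick arithmetic check of the sizing above (a minimal sketch; only the per-season figures from this answer are used, and the OP/USD price at payout time remains the main unknown):

```python
# Back-of-the-envelope check of the grant sizing (ranges taken from this reply).
cost_per_season_usd = (30_000, 40_000)   # reward + prize budget per season
protocols_per_season = (30, 40)          # typical coverage per season
seasons = (5, 6)                         # planned number of seasons

total_cost_usd = (cost_per_season_usd[0] * seasons[0],
                  cost_per_season_usd[1] * seasons[1])
total_ratings = (protocols_per_season[0] * seasons[0],
                 protocols_per_season[1] * seasons[1])

print(f"Total cost: ${total_cost_usd[0]:,} - ${total_cost_usd[1]:,}")  # $150,000 - $240,000
print(f"Protocol ratings: {total_ratings[0]} - {total_ratings[1]}")    # 150 - 240
```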
Bobbay_StableLab:
What operational efforts will be covered in the 10%? Op costs are fine, I just want to understand where it's being directed.
The operational costs are meant to cover the work of organising the seasons and facilitating the whole governance process. More specifically, this means managing all communication and marketing before the event, facilitating all sessions during the season (e.g. kick-off call, expert/learning sessions, AMAs, and other support for participants), plus governance and the reward process after the event (e.g. RXP minting, POAPs & awards for winners, reward payout, etc.). We have two team members who’d take care of this; however, community contributions can also be rewarded.
To be more precise, we anticipate $3.6k to $4.4k of operational costs per event, which would add up to about $20-24k over the period of one year (with 5-6 seasons).
Bobbay_StableLab:
How long does it normally take an average rater to complete a report?
Great question and difficult to answer. It heavily depends on the rater’s experience, familiarity with the project, the project’s complexity, and the availability of sources. A beginner might need 4 to 5 days to come up with a decent report and will most likely need revisions during the feedback process, while an experienced rater can do it in 1-2 days. But for a high-quality report I’d anticipate 2-4 days of full investigation mode, even for highly experienced raters. Hope this helps!
Thanks again for your questions. I hope these answers bring more clarity; let us know if something is still unclear.
Bobbay_StableLab: Appreciate the detailed report! It helped a lot.
Prime Rating provides an interesting insight into the DeFi world, and this free information for readers is a great resource.
One final question - Once a protocol is reviewed, how long till you review it again?
I remember reading this proposal first day it was shared here and I am still not sure how to feel about this.
DeFi is inherently risky, and even after all this auditing and alpha, gamma, and whatnot ratings, hacks are common. A simple conditional-logic error could put millions in the wrong hands.
Here, when we support this proposal, we are supporting the individuals rating the projects. Who is rating their credentials?
How is it possible that CREAM, a protocol hacked 3 times, is sitting right below Convex? What am I missing here? Shouldn’t you mention such attacks on the first page of your report, in BOLD?
Salome: Hey @OPUser, thanks a lot for your feedback. We hear you and hope we can clarify your concerns below.
OPUser:
DeFi is inherently risky, and even after all this auditing and alpha, gamma, and whatnot ratings, hacks are common. A simple conditional-logic error could put millions in the wrong hands.
We 100% agree that Web3 is still risky and a rating framework cannot prevent hacks, stable de-pegs, death spirals, or whatever comes next. However, we’re also convinced that there are ways to reduce risks, and that an open-source framework fostering aggregated research from a community of raters (crowd intelligence), can serve as a powerful tool, leading to better informed web3 participants. We will not be able to fully eliminate the pains mentioned above, but we can increase transparency and improve information flow and thus build a common ground for improved DYOR.
About the conditional logic: the majority of the technical evaluation template is based on conditional logic (i.e. Yes/No questions), but not all of it. The fundamental report, on the other hand, uses targeted but open questions in combination with a scoring table, allowing for some subjectivity from the author. In addition, multiple raters can evaluate the same project, and we use the average score to prevent an outsized impact by any single rater. In combination, more than 50 unique components are assessed per protocol, which should mitigate the problem you mentioned (we’d love to have you participate and influence the ratings, for instance) and also prevents the template from being gamed by projects. Furthermore, the more targeted the framework becomes to a specific use case, the better the information accuracy, which is why we believe a custom framework specifically for Optimism would be ideal. We’d love to involve the Optimism community in customising the template to the realm of OP. If the community finds historic hacks to be the most important indicator, we can include them in the report template (via Snapshot vote). As of today, hacks are covered via the technical review by penalising projects with a lower score (see example).
OPUser:
Here, when we support this proposal, we are supporting the individuals rating the projects. Who is rating their credentials?
The community evaluates itself, and credentials are gathered through contributions (we call them rating experience points => RXP). It’s a peer-review system, whereby analysts with higher experience evaluate the quality of work from others. In full web3 manner - a world of anons - we can’t rely on traditional credentials, but aim to use on-chain credentials. You submit reports using your wallet address, and you level up and get rewarded for your work with 10 RXP per report / 5 RXP per review (kinda like Proof-of-Work). This unlocks new positions such as “reviewer” (you can review other reports), and eventually you can become a governor (with 200+ RXP you can vote on accepting/rejecting a new report). A full overview of the levels can be found here. In case of an issue (e.g. false information in a report), there is a dispute process to resolve it.
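A minimal sketch of that progression, using only the numbers mentioned here (10 RXP per accepted report, 5 RXP per review, 200 RXP to vote on reports):

```python
# Minimal sketch of the RXP progression described above; only the numbers
# stated in this reply are used.
RXP_PER_REPORT = 10       # accepted report (passed governance)
RXP_PER_REVIEW = 5        # peer review of someone else's report
GOVERNOR_THRESHOLD = 200  # RXP needed to vote on accepting/rejecting reports

def rxp(reports: int, reviews: int) -> int:
    """Total non-transferable rating experience points for a contributor."""
    return reports * RXP_PER_REPORT + reviews * RXP_PER_REVIEW

def can_vote_on_reports(reports: int, reviews: int) -> bool:
    return rxp(reports, reviews) >= GOVERNOR_THRESHOLD

# e.g. 15 accepted reports and 10 peer reviews reach the governor threshold
print(rxp(15, 10), can_vote_on_reports(15, 10))  # 200 True
```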
Let’s take a look at the examples you mentioned. Admittedly, we discovered a data discrepancy between CREAM’s technical score on our site (previously 76%) and the score provided by DFS (61%). This is being adjusted to reflect the lower score, which reduces the overall rating as well. As mentioned above, hacks result in a penalty on the technical score; they’re highlighted at the top of DFS’s review summary. We haven’t included them in our front end yet (we kept the UX rather lean and limited to the scores; for more detail the reports can be read), but we’re open to front-end adjustments for the Optimism ecosystem rating dashboard!
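To make the composition concrete, here is a rough sketch of how an overall rating comes together (the 50/50 split between the DFS technical score and the fundamental score was mentioned earlier in this thread, and multiple fundamental reports are averaged as described above; the letter-grade thresholds below are purely illustrative assumptions, not our actual cut-offs):

```python
# Rough sketch: average the fundamental scores from multiple raters, combine
# 50/50 with the DeFi Safety technical score, and map to a letter grade.
# The grade thresholds here are illustrative assumptions only.
def overall_rating(fundamental_scores: list[float], technical_score: float) -> tuple[float, str]:
    fundamental_avg = sum(fundamental_scores) / len(fundamental_scores)
    overall = 0.5 * fundamental_avg + 0.5 * technical_score  # all scores in percent (0-100)
    for threshold, grade in [(90, "A+"), (80, "A"), (65, "B"), (50, "C")]:
        if overall >= threshold:
            return overall, grade
    return overall, "D"

# Two fundamental reports on the same protocol plus a 61% technical score:
print(overall_rating([72.0, 68.0], 61.0))  # (65.5, 'B')
```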
dabar90: Agree with you, nothing can 100% protect our funds, but imo fundamental reports are necessary because they provide the individual investor/user with more details about a specific protocol. Besides badly written conditional logic, a lot of protocols also have unsustainable token-economics, a failed PMF, a poorly designed governance system, and many other problems that can also result in losing funds. Exploits are more publicly visible because they represent a “quick robbery” (by inside or outside actors), and in that situation the quality of the community and governance structure matters most for the user (i.e. compare Compound’s vs Agave’s reactions after their exploits).
OPUser:
Here, when we support this proposal, we are supporting the individuals rating the projects. Who is rating their credentials?
It’s about more than just putting scores on sections. An individual needs to perform extensive research and write a report (it’s impossible in a few days), which then needs to go through a review process, and the final version needs to be accepted by the governors (the more active participants). The only necessary credential is report quality (real “proof of work”), because the point is accessibility, without limitation. Agree?
OPUser:
How is it possible that CREAM, a protocol hacked 3 times, is sitting right below Convex? What am I missing here?
Here I agree with you; I found more similar cases, and the problem here is the “report” as static content. I think some parts of the report need to be updated more frequently (metrics, protocol updates, significant integrations…).
You can judge my bias from both sides: I have written over 20 reports, and over 90% of my funds are on Optimism. I think that Prime Rating and similar projects need to be incentivized more by base layers because:
Users are responsible for their own funds and need more information about the protocols they use.
Participation in the rating process is permissionless and gives the community, at the chain level, an opportunity for more engagement and education.
I didn’t find any layer-2 ecosystem that has a community-driven, public rating system for the protocols that operate on top of it. A quality rating system (based on fundamental analysis) for a layer 2 means a lot when it comes to reputation, accessibility and trust.
Reports are content, and content will always be funded by protocols. It’s just a question of whether the community wants to produce content in this way. If participation and creating improvement proposals are permissionless, I don’t see why not?
Hey @OPUser thanks a lot for your feedback, we hear you and hope we can clarify your concerns below…
Appreciate the detailed report! It helped a lot.
Prime Rating provides an interesting insight into the DeFi world, and this free information for readers is a great resource.
One final question - Once a protocol is reviewed, how long till you review it again?
On average every 3-6 months. There are two ways updates can happen:
A rater updates their original report via an update proposal; this can happen every quarter.
A new report gets written by another rater, via the normal report submission process.
Submissions of new reports can also happen much faster during a season, because we allow three unique reports per protocol. So, for instance, Aave can be reviewed by three raters within a 5-6 week rating season.
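To illustrate the update rules above, here is a small sketch of how the submission constraints could be checked. The 90-day cooldown stands in for “every quarter”, and all names and data structures are assumptions made for illustration, not Prime Rating’s actual code.

```python
# Illustrative sketch of the submission rules described above; the data
# model and function names are assumptions, not Prime Rating's actual code.
from datetime import date, timedelta

MAX_REPORTS_PER_PROTOCOL_PER_SEASON = 3        # three unique reports per protocol per season
UPDATE_PROPOSAL_COOLDOWN = timedelta(days=90)  # original author may update roughly once a quarter

def can_submit_new_report(existing_authors: set[str], author: str) -> bool:
    """A new report by a different rater goes through the normal submission process."""
    return (author not in existing_authors
            and len(existing_authors) < MAX_REPORTS_PER_PROTOCOL_PER_SEASON)

def can_submit_update(last_update: date, today: date) -> bool:
    """The original author can refresh their report via an update proposal each quarter."""
    return today - last_update >= UPDATE_PROPOSAL_COOLDOWN

# Example: Aave already has reports from two raters this season,
# so a third rater can still submit one.
assert can_submit_new_report({"0xRaterA", "0xRaterB"}, "0xRaterC")
assert not can_submit_update(date(2022, 7, 1), date(2022, 8, 15))
```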
I am in support of this proposal, as someone who has found the very transparent and in-depth ratings provided by Prime Rating extremely useful while doing my own due diligence on the vast number of DeFi and Metaverse protocols that we have before us today.
Prime Rating has put a framework in place that allows both users and investors to make informed decisions, based on a number of important factors such as tokenomics, team and sustainability of a protocol. A number of protocols go to great lengths to hide or downplay certain aspects or shortcomings of their operations, whether that is excessive centralization, non-existent governance, illiquid tokens, or a lack of clarity around regulatory compliance. Prime Rating puts this information front and center for anyone interested, in a transparent and easily accessible manner.
The Optimism ecosystem is growing day by day, with over 200 apps currently live and many more sure to come. Prime Rating will be an invaluable resource in the Optimism ecosystem for those (sometimes very naive) users seeking to assess the quality and risk of decentralized finance protocols.