Some examples we also discussed in the session, applying the IE (impact evaluator) framework, and the challenges they face:
Eg 1: Conservation Finance
Credit Issuance Process
How it works
- Credits are derived from measurable metrics
- eg: a landowner hires a third-party evaluator to assess whether trees are still standing
- the evaluator translates this data into credits that can be issued in a market as a reward function
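The issuance loop above can be sketched in a few lines. This is a toy model, not any specific conservation-credit scheme: the names (`Assessment`, `issue_credits`) and the conversion rate are hypothetical.

```python
# Minimal sketch of the credit-issuance loop: a measurable metric is
# reported by a third-party evaluator, then translated into credits
# by a reward function. All names and rates here are illustrative.
from dataclasses import dataclass

CREDITS_PER_TREE = 0.01  # assumed conversion rate: standing trees -> credits

@dataclass
class Assessment:
    project_id: str
    trees_standing: int  # the measurable metric the evaluator reports

def issue_credits(assessment: Assessment) -> float:
    """Reward function: translate the evaluator's measurement into credits."""
    return assessment.trees_standing * CREDITS_PER_TREE

credits = issue_credits(Assessment(project_id="plot-42", trees_standing=1200))
print(credits)  # 1200 trees * 0.01 credits/tree = 12.0
```

The weak link, as noted below, is that `trees_standing` comes from an estimation step that may be inaccurate, while the reward function itself is exact.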
Challenges:
1. Estimation Issues:
- the estimation step is often inaccurate
2. Friction at every step
- assigning projects within scope involves significant friction
- outside digital domain, processes become even slower and more cumbersome
Eg 2: Scientific Publishing
How it works
- Journals act as IEs for knowledge
- Each publication venue carries a reputation score (eg: impact factor)
Incentive Loop
- Goal for scientists: accumulate reputation through publications
Problem/Challenge:
- The evaluators (journals and conferences) have been entrenched for over 100 years.
- Their reward loop has not been updated, which creates rigidity, inefficiency, and misaligned incentives.

30.7.2025
In today's open session we ran an analysis of applying impact evaluator (block-reward-type) systems to two domains: academic publishing & the environment
We derived 5 useful features in their design
1. All impact evaluator functions require credible conversion into fungibility
Hash power for btc, storage for fil, etc. are clear mathematical functions that allow issuance against a formula
But people only buy into the issuance if they accept its neutrality. For example, carbon credits are fungible, but many coal polluters receive credits for using slightly better technology, so the system is not entirely credible
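The "clear mathematical function" point can be made concrete with a toy issuance formula in the spirit of hash power → btc or storage → fil. The epoch reward and the proportional-share rule are illustrative assumptions, not any chain's actual schedule.

```python
# Sketch: issuance as a transparent function of a measurable contribution.
# Everyone can check the formula, which is what makes it credibly neutral.
EPOCH_REWARD = 100.0  # total tokens emitted per epoch (assumed)

def issuance(contribution: float, total_contribution: float) -> float:
    """A participant's share of the epoch reward is proportional to their
    measurable contribution (hash power, storage capacity, ...)."""
    if total_contribution <= 0:
        return 0.0
    return EPOCH_REWARD * contribution / total_contribution

# A miner providing 25 of 100 units of hash power earns 25% of the emission.
print(issuance(25.0, 100.0))  # 25.0
```

Carbon credits fail this test not because the arithmetic is unclear, but because the measured input (the "contribution") is not trusted.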
2. If properly designed, impact evaluator systems become knobs by which we can align long-term actors around an ideal outcome we want
The metrics should also be hard to obtain but easy to verify, similar to btc hash power or storage capacity
3. Ideally, we first want to solve some problem locally, like "is this paper good enough to be accepted to the conference"
And then feed those local results as inputs into more global problems like "is the conference high impact" or "how good is a researcher, as measured by their publications in good conferences"
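The local-to-global composition above can be sketched as follows. The data, the crude impact-factor proxy (average citations of accepted papers), and the researcher score are all hypothetical simplifications.

```python
# Sketch: local evaluations ("was this paper accepted?") compose into
# global ones ("venue impact", "researcher score"). Toy data and weights.
papers = [
    # (paper_id, author, venue, accepted, citations)
    ("p1", "alice", "ConfA", True, 120),
    ("p2", "alice", "ConfB", True, 10),
    ("p3", "bob",   "ConfA", True, 80),
]

def venue_impact(venue: str) -> float:
    """Global question built from local outcomes: average citations of
    papers accepted at this venue (a crude impact-factor proxy)."""
    cites = [c for _, _, v, ok, c in papers if v == venue and ok]
    return sum(cites) / len(cites) if cites else 0.0

def researcher_score(author: str) -> float:
    """Another global question: total impact of the venues where the
    researcher's accepted papers appeared."""
    return sum(venue_impact(v) for _, a, v, ok, _ in papers if a == author and ok)

print(venue_impact("ConfA"))      # (120 + 80) / 2 = 100.0
print(researcher_score("alice"))  # 100.0 + 10.0 = 110.0
```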
4. We want impact evaluators to be self upgrading systems, otherwise they can ossify into bastions of power
A good example is the implementation of plurality in Community Notes or cluster QF. If two people who normally disagree now agree, that agreement gets a higher weight. But if they agree again the next time, it gets a lower weight, since they already voted together last time
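A toy version of that discounting rule can be sketched as below. The actual Community Notes and cluster-QF algorithms use different math (e.g. matrix factorization, pairwise-correlated QF); this just shows the qualitative behavior: first agreement between a pair counts in full, repeats are progressively discounted.

```python
# Sketch of the plurality idea: an agreement between two voters who rarely
# agree counts more; repeated agreement is discounted. Toy model only --
# not the actual Community Notes / cluster QF weighting.
from collections import defaultdict

agreement_count = defaultdict(int)  # how often each pair has agreed so far

def agreement_weight(a: str, b: str) -> float:
    """Weight of a new agreement between a and b: 1 / (1 + past agreements)."""
    pair = tuple(sorted((a, b)))
    w = 1.0 / (1 + agreement_count[pair])
    agreement_count[pair] += 1
    return w

print(agreement_weight("x", "y"))  # first agreement: full weight, 1.0
print(agreement_weight("x", "y"))  # second agreement: discounted to 0.5
```

Because the weight decays with each repeat, a persistent voting bloc cannot keep harvesting influence from the same pairwise agreement, which is exactly the self-upgrading, anti-ossification property wanted here.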
5. Finally, impact evaluators are hard mathematical functions that release some emissions, while softer & more irrational forces like the market price of the currency act on the same system; the two need to be squared against each other
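That tension can be illustrated with a toy model: a btc-style halving schedule is a hard, deterministic function, but the real incentive it creates is emission times market price, and the price is set by soft market forces. The schedule parameters and prices below are illustrative assumptions.

```python
# Sketch: the emission schedule is a hard function of time, but the
# dollar-denominated incentive is emission * price, where the price is
# exogenous and volatile. All numbers are illustrative.
def emission(epoch: int, initial: float = 50.0, halving_every: int = 4) -> float:
    """Deterministic, btc-style halving schedule."""
    return initial / (2 ** (epoch // halving_every))

market_price = {0: 10.0, 4: 40.0, 8: 5.0}  # assumed, set by market forces

for epoch, price in market_price.items():
    # Token emission is perfectly predictable, yet the effective reward
    # in dollar terms swings wildly with the price.
    print(epoch, emission(epoch), emission(epoch) * price)
```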