Distroid Issue 39
A newsletter for curated findings, actionable knowledge, and noteworthy developments from the forefront of innovation, governance, research, and technology (i.e., the frontier).
Introduction
Welcome to this week’s edition of Distroid.
In this issue:
Digest
Research
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Towards Measuring the Representation of Subjective Global Opinions in Language Models
Confronting Tech Power
When Bidders Are DAOs
Liquidity-Saving through Obligation-Clearing and Mutual Credit: An Effective Monetary Innovation for SMEs in Times of Crisis
Co-operatives, Work, and the Digital Economy: A Knowledge Synthesis Report
Books
Cooperatives at Work
News
The minimal definition of user agency
The Shared Sequencer
From Linked Lists to Namespace Merkle Trees
No. Hyperstructures cannot be valuable-to-own and free forever
A Call for More Programmable Retro-Funding for Digital Public Goods
The Japanese government positions “zebra companies” in its national strategy: what is the background, and what will happen in the future?
Improving the Hacker News Ranking Algorithm
Unlocking The Power of Onchain Libraries
The Revolution May Not Be Televised
Community Unchained
Why Does Decentralized Media Matter?
App-Specific Rollups: A Trade-Off Between Connectivity and Control
What Was Digital Media?
Tools for thought in your OODA loop
ON #174: Bridges 🌉
Why is Quadratic Funding so Hard?
Valuing Impact Certificates
The Mechanics of Index Payments
How not to solve governance
Criteria for Governance
Index Wallets
THE KEY TO NATURE’S INTELLIGENCE IS ARTIFICIAL INTELLIGENCE
Coordi-nations: A New Institutional Structure for Global Cooperation
Regulating AI in the UK: three tests for the Government’s plans
You're writing require statements wrong
Why DeFi is Broken and How to Fix It, Pt 1: Oracle-Free Protocols
AI, THE COMMONS, AND THE LIMITS OF COPYRIGHT
Goodbye & Final Reflections 👋
Tools
DAO Diplomat
everyname
Chainverse API
ChatData
Quests
Events
Autonomous Ecologies
Accelerating Worker Ownership: A Strategy Session on Co-operative Development in the Digital Economy and Beyond
Videos & Podcasts
Blockchain Radicals: How Capitalism Ruined Crypto and How to Fix It | Book Talk
Collaborative Finance: Credit clearing for collectively doing more with less capital
Health X Change | FIAT LUX Podcast #7
Mini Series - DeJourno Ep. 04 by Eureka John | Web3 Incentivizing Journalism as a Public Good
Tweets
Digest
You can find more metadata on the content below on the Distroid Explorer. Go to the Advanced Search page and set the Issue filter to 39.
Research
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): the first LLM-based prover that is augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 96,962 theorems and proofs extracted from Lean's math library. It features a challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
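ReProver’s key step is retrieval: rank the premises that are actually accessible at the current point in the library by similarity to the proof state. As a rough illustration (not the paper’s code; the embeddings, premise names, and function name here are made up), a toy version of that ranking looks like:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_premises(state_emb, premise_embs, accessible, k=2):
    """Rank only premises accessible at this point in the import graph,
    by cosine similarity to the current proof-state embedding."""
    scored = [(name, cosine(state_emb, emb))
              for name, emb in premise_embs.items() if name in accessible]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in scored[:k]]

# Hard-coded toy embeddings; in ReProver these come from a trained retriever.
premises = {
    "nat.add_comm":      [0.9, 0.1, 0.0],
    "nat.mul_comm":      [0.8, 0.2, 0.1],
    "list.append_assoc": [0.0, 0.1, 0.9],
}
state = [1.0, 0.0, 0.0]
top = retrieve_premises(state, premises,
                        accessible={"nat.add_comm", "list.append_assoc"})
```

In ReProver the accessibility set comes from LeanDojo’s program analysis; here it is hard-coded, which is why `nat.mul_comm` is never considered.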
Towards Measuring the Representation of Subjective Global Opinions in Language Models
Large language models (LLMs) may not equitably represent diverse global perspectives on societal issues. In this paper, we develop a quantitative framework to evaluate whose opinions model-generated responses are more similar to. We first build a dataset, GlobalOpinionQA, comprised of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. Next, we define a metric that quantifies the similarity between LLM-generated survey responses and human responses, conditioned on country. With our framework, we run three experiments on an LLM trained to be helpful, honest, and harmless with Constitutional AI. By default, LLM responses tend to be more similar to the opinions of certain populations, such as those from the USA, and some European and South American countries, highlighting the potential for biases. When we prompt the model to consider a particular country’s perspective, responses shift to be more similar to the opinions of the prompted populations, but can reflect harmful cultural stereotypes. When we translate GlobalOpinionQA questions to a target language, the model’s responses do not necessarily become the most similar to the opinions of speakers of those languages. We release our dataset for others to use and build on. We also provide an interactive visualization at https://llmglobalvalues.anthropic.com.
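The abstract doesn’t spell out the similarity metric, so here is a hedged sketch of the general idea: compare the model’s distribution over answer options with each country’s survey distribution, scoring similarity as one minus the Jensen–Shannon divergence. The metric choice and all numbers below are illustrative assumptions, not necessarily the paper’s:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits; terms with p_i = 0 contribute nothing.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def similarity(p, q):
    """1 - Jensen-Shannon divergence (base 2), so 1.0 means identical distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 1.0 - (0.5 * kl(p, m) + 0.5 * kl(q, m))

model_answers = [0.7, 0.2, 0.1]        # model's distribution over answer options
country_answers = {                     # made-up survey distributions per country
    "US": [0.6, 0.3, 0.1],
    "JP": [0.2, 0.3, 0.5],
}
scores = {c: similarity(model_answers, p) for c, p in country_answers.items()}
closest = max(scores, key=scores.get)   # the population the model most resembles
```

With these toy numbers the model’s answers sit closest to the "US" distribution, mirroring the default bias the paper reports.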
Confronting Tech Power
This report highlights a set of approaches that, in concert, will collectively enable us to confront tech power. Some of these are bold policy reforms that underscore the need for bright-line rules and structural curbs. Others identify popular policy responses that, because they fail to meaningfully address power discrepancies, should be abandoned. Several aren’t in the traditional domain of policy at all, but acknowledge the importance of nonregulatory interventions such as collective action, worker organizing, and the role public policy can play in bolstering these efforts. We intend this report to provide strategic guidance to inform the work ahead of us, taking a bird’s eye view of the many levers we can use to shape the future trajectory of AI – and the tech industry behind it – to ensure that it is the public, not industry, that this technology serves.
When Bidders Are DAOs
In a typical decentralized autonomous organization (DAO), people organize themselves into a group that is programmatically managed. DAOs can act as bidders in auctions (with ConstitutionDAO being one notable example), with a DAO’s bid typically treated by the auctioneer as if it had been submitted by an individual, without regard to any details of the internal DAO dynamics. The goal of this paper is to study auctions in which the bidders are DAOs. More precisely, we consider the design of two-level auctions in which the “participants” are groups of bidders rather than individuals. Bidders form DAOs to pool resources, but must then also negotiate the terms by which the DAO’s winnings are shared. We model the outcome of a DAO’s negotiations through an aggregation function (which aggregates DAO members’ bids into a single group bid) and a budget-balanced cost-sharing mechanism (that determines DAO members’ access to the DAO’s allocation and distributes the aggregate payment demanded from the DAO to its members). DAOs’ bids are processed by a direct-revelation mechanism that has no knowledge of the DAO structure (and thus treats each DAO as an individual). Within this framework, we pursue two-level mechanisms that are incentive-compatible (with truthful bidding a dominant strategy for each member of each DAO) and approximately welfare-optimal. We prove that, even in the case of a single-item auction, the DAO dynamics hidden from the outer mechanism preclude incentive-compatible welfare maximization: No matter what the outer mechanism and the cost-sharing mechanisms used by DAOs, the welfare of the resulting two-level mechanism can be a ≈ ln n factor less than the optimal welfare (in the worst case over DAOs and valuation profiles). We complement this lower bound with a natural two-level mechanism that achieves a matching approximate welfare guarantee. This upper bound also extends to multi-item auctions in which individuals have additive valuations.
Finally, we show that our positive results cannot be extended much further: Even in multi-item settings in which bidders have unit-demand valuations, truthful two-level mechanisms form a highly restricted class and as a consequence cannot guarantee any non-trivial approximation of the maximum social welfare.
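To make the two-level structure concrete, here is a minimal sketch (my own illustration, not the paper’s mechanism): each DAO aggregates member bids with a sum, the outer mechanism runs a second-price auction that sees only group bids, and the winning DAO splits its payment among members in proportion to their bids, a budget-balanced cost-sharing rule:

```python
def two_level_auction(daos):
    """daos: {dao_name: {member: bid}}. The outer mechanism sees only aggregates."""
    # Level 1: each DAO aggregates member bids into one group bid (sum, for illustration).
    group_bids = {name: sum(members.values()) for name, members in daos.items()}
    # Level 2: second-price auction over group bids; the auctioneer never sees members.
    ranked = sorted(group_bids, key=group_bids.get, reverse=True)
    winner, price = ranked[0], group_bids[ranked[1]]
    # Cost sharing: split the payment in proportion to members' bids. This is
    # budget-balanced: the shares sum exactly to the price demanded from the DAO.
    members = daos[winner]
    total = group_bids[winner]
    shares = {m: price * b / total for m, b in members.items()}
    return winner, price, shares

winner, price, shares = two_level_auction({
    "ConstitutionDAO": {"alice": 30, "bob": 10},
    "RivalDAO": {"carol": 25},
})
```

The paper’s point is that no matter how these inner and outer pieces are chosen, incentive-compatible welfare maximization is impossible in general, with a ≈ ln n worst-case gap.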
Liquidity-Saving through Obligation-Clearing and Mutual Credit: An Effective Monetary Innovation for SMEs in Times of Crisis
During financial crises, liquidity tends to become scarce, a problem that disproportionately affects small companies. This paper shows that obligation-clearing is a very effective liquidity-saving method for providing relief in the trade credit market and, therefore, on the supply-side or productive part of the economy. The paper also demonstrates that when used in conjunction with a complementary currency system such as mutual credit as a liquidity source the effectiveness of obligation-clearing can be doubled. Real data from the Sardex mutual credit system show a reduction of net internal debt of the obligation network of approximately 25% when obligation-clearing is used by itself and of 50% when it is used together with mutual credit. These instruments are also relevant from the point of view of risk mitigation for lenders, based in part on the information on individual companies that the mutual credit circuit manager can provide to banks (upon the circuit member’s request) and in part on the relief that liquidity-saving provides especially to NPL companies. The paper concludes by outlining recommendations for how even greater savings could be achieved by including the tax authority as another node in the obligation network.
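The core of obligation-clearing is easy to see on a toy example: when obligations form a cycle, every edge can be reduced by the smallest obligation on the cycle, shrinking gross debt without any money changing hands. A minimal sketch (illustrative only, not Sardex’s actual algorithm):

```python
def clear_cycle(obligations, cycle):
    """obligations: {(debtor, creditor): amount}. cycle: ordered list of firms
    where each owes the next (and the last owes the first). Reduce every edge
    in the cycle by the smallest obligation on it -- no liquidity is needed."""
    edges = [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]
    relief = min(obligations[e] for e in edges)
    for e in edges:
        obligations[e] -= relief
    return relief

# A owes B 100, B owes C 80, C owes A 60: a clearable cycle.
obligations = {("A", "B"): 100, ("B", "C"): 80, ("C", "A"): 60}
before = sum(obligations.values())   # 240 of gross debt
relief = clear_cycle(obligations, ["A", "B", "C"])
after = sum(obligations.values())    # 240 - 3 * 60 = 60
```

Each firm’s net position is unchanged, but 180 of gross obligations vanish, which is the liquidity-saving effect the paper measures on the Sardex network.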
Co-operatives, Work, and the Digital Economy: A Knowledge Synthesis Report
The idea of platform cooperativism has raised the profile of co-operatives within tech communities and heightened the interest in digital technology within the co-operative movement. In this report, we survey recent literature on the formation of co-operatives as a strategy to improve work and livelihoods in the digital economy.
Co-operatives, Work, and the Digital Economy is guided by questions such as: What groups of workers have turned to the co-operative model in the digital economy? Do co-operatives have the capacity to mitigate precarity, deepen worker engagement, and combat inequality in the gig economy, tech sector, and digital creative industries? If co-ops are a promising means to improve livelihoods and democratize work, what are the obstacles to increasing their uptake? And what initiatives and policies have been advanced to foster co-operative infrastructure in the digital age?
The report is accompanied by an evidence brief summarizing key findings and policy recommendations.
Books
Cooperatives at Work
For too long, cooperatives have been considered marginal players in the global economy, and as unrealistic venues for the aspirations of new and experienced members of the labour force. This marginalization shows in business, municipal and legal discussions, and curricula, where cooperative structures are rarely mentioned, let alone presented as viable options.
Cooperatives at Work presents a range of success stories in employee ownership and worker owned-and-governed cooperatives. The authors further show how such firms embody important and highly contested ideals of democracy, shared equity, and social transformation. Throughout this volume, the authors present a range of practical lessons, strategies, and resources based on their pioneering, international research.
This latest volume in The Future of Work series has a strong ethical stream, consistent with yearnings for more inspired forms of business revealed in many public opinion polls. The book is future-oriented, using contemporary as well as historical examples to teach lessons that are not necessarily time-bound. It is essential for anyone seeking a window onto the future of cooperative entrepreneurial practice and grassroots democracy.
News
The minimal definition of user agency
This is a great minimal definition of user agency!
Own your ID
Own your content
Own your contacts
The Shared Sequencer
Imagine a world where rollups out of the box could achieve high levels of censorship resistance, ease of deployment, interoperability, fast finality, liveness, and MEV democratization. This may seem like a lofty goal, but with the arrival of shared sequencers, such a world may be within reach. However, not all rollups are created equal, which leads us to questions about how rewards and MEV should be distributed on shared sequencer networks. In this article, we will explore what makes shared sequencer networks feasible and which of these properties they can deliver.
Shared sequencer networks have primarily been covered by Alex Beckett, and later more in depth by Evan Forbes from Celestia and the Espresso Systems team (as well as Radius), alongside incredible new pieces by Jon Charbonneau. Josh, Jordan, and their team at Astria are building the first shared sequencer network in production. Astria’s Shared Sequencer Network is a modular blockchain that aggregates and orders transactions for rollups, without performing the execution of said transactions.
From Linked Lists to Namespace Merkle Trees
We’ve covered in relative depth how both Merkle and Verkle trees work in our article “Beyond IBC”. However, there are other applications and derivatives of Merkle trees that we do think deserve their own article, because of their unique properties.
In this article, we are referring to Namespace Merkle Trees (NMTs), Merkle Mountain Ranges (MMRs), and Jellyfish Merkle Trees (JMTs). This article will explain some interesting aspects of these structures and their use cases, and hopefully leave you wanting to explore such data structures further.
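As a preview of the NMT idea, here is a hedged toy sketch (my own, not Celestia’s implementation): every node commits to the minimum and maximum namespace of the leaves beneath it, which is what lets a light client verify that a proof covers all the data for its namespace:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(ns: int, data: bytes):
    # A leaf commits to its namespace alongside its data.
    return (ns, ns, h(ns.to_bytes(8, "big") + data))

def parent(left, right):
    # An inner node carries the min/max namespace of everything below it.
    lmin, lmax, lh = left
    rmin, rmax, rh = right
    return (min(lmin, rmin), max(lmax, rmax), h(lh + rh))

def build(leaves):
    # Assumes a power-of-two number of leaves, for brevity.
    nodes = [leaf(ns, data) for ns, data in leaves]
    while len(nodes) > 1:
        nodes = [parent(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Leaves must be sorted by namespace for namespace range proofs to work.
root = build([(1, b"rollup-a tx"), (1, b"rollup-a tx2"),
              (2, b"rollup-b tx"), (3, b"rollup-c tx")])
```

The root here spans namespaces 1 through 3; a rollup querying namespace 2 can check, from the min/max fields alone, that no sibling hiding namespace-2 data was omitted.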
No. Hyperstructures cannot be valuable-to-own and free forever
Jacob (@js_horne) argues in this piece that it’s possible to build protocols he calls hyperstructures.
In his telling, hyperstructures have these properties:
Unstoppable: the protocol cannot be stopped by anyone. It runs for as long as the underlying blockchain exists.
Free: there is a 0% protocol-wide fee, and the protocol runs exactly at gas cost.
Valuable: accrues value which is accessible and exitable by the owners.
Expansive: there are built-in incentives for participants in the protocol.
Permissionless: universally accessible and censorship resistant. Builders and users cannot be deplatformed.
Positive sum: it creates a win-win environment for participants to utilize the same infrastructure.
Credibly neutral: the protocol is user-agnostic.
Overall, I find the idea of a hyperstructure compelling, but I doubt that they can be valuable-to-own while still free-to-use.
Free to use is fine. Valuable to own is fine. However, it’s incomprehensible to say that hyperstructures will be simultaneously valuable-to-own and free-to-use.
Of course, the idea that hyperstructures are a new kind of thing that have this unique property of being both free-to-use and valuable-to-own is Jacob’s big claim. So let’s see if I can be wrong, because if it’s actually possible to build a free-to-use and valuable-to-own structure then it unlocks entirely new economic possibilities. But, if it’s not, then trying to build that kind of thing will only result in failure, or worse, harm.
(Note: Throughout Jacob and I both use “free” to really mean, “gas cost only” which is to say the computational costs of running the protocol. This is probably confusing, and points to an opportunity for better terminology.)
A Call for More Programmable Retro-Funding for Digital Public Goods
Our lives are increasingly moving online, sparking new modes of interaction, communication, and creation. As we transition to a world where we occupy both physical and digital spaces, the concept of a "public good" must evolve accordingly.
Public goods already exist in the digital realm. Open source software is a prime example of a digital public good, underpinning everything from fundamental internet protocols like HTTP to the 'awesome' list of lists.
Yet, despite the significant societal value they generate, open source software projects frequently struggle with the existential challenge of acquiring stable funding.
Part of the problem is that we haven't found the right, digitally-native mechanisms for funding open source. Traditional approaches, from soliciting donations to applying for government grants, aren't a good fit for open source.
Consider Wikipedia, a household name in the realm of digital public goods. You've likely encountered the seemingly ever-present appeal banners asking for donations (pictured: Wikipedia’s call to action).
If a digital public good as prolific as Wikipedia remains strapped for cash, what can we assume about all the other, less visible open source initiatives?
It doesn't need to be this way. The big tech giants and internet-based businesses continuously vie for our attention, turning every click, like, and share into a monetizable event. This attention economy generates immense profits, a portion of which is paid as taxes to governments in the countries where these companies operate. These taxes are used for traditional public goods, infrastructure, education, healthcare, and more, but these businesses seldom directly fund the digital public goods that underpin their operations and profits.
What if the same type of rigorous analysis used to measure attribution was applied towards measuring impact? What if your favorite delivery app used the same type of programmability that allows you to order a pizza, tip the driver, and pay taxes in one transaction to fund the open source software they depend on?
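That programmability can be sketched in a few lines: a payment splitter that atomically routes fixed fractions of each payment to the merchant, the driver, the tax authority, and an open-source fund. The recipients and fractions below are invented for illustration:

```python
def split_payment(amount, routes):
    """routes: {recipient: fraction}; fractions must sum to 1 so the
    split is exact (every cent of the payment is routed somewhere)."""
    assert abs(sum(routes.values()) - 1.0) < 1e-9
    return {r: round(amount * f, 2) for r, f in routes.items()}

# One 'transaction' paying for the pizza, the tip, the tax, and the
# open source software the app depends on (all fractions made up).
payout = split_payment(20.00, {
    "pizzeria":   0.80,
    "driver_tip": 0.10,
    "sales_tax":  0.07,
    "oss_fund":   0.03,
})
```

The point is not the arithmetic but the atomicity: the open-source dependency gets paid by the same programmable transaction that pays everyone else, rather than by a separate act of charity.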
The Japanese government positions “zebra companies” in its national strategy: what is the background, and what will happen in the future?
In the “Basic Policy on Economic and Fiscal Management and Reform” and the “Grand Design and Action Plan for a New Form of Capitalism”, both approved by Cabinet decision on June 16, the Government of Japan made clear that it will now promote “zebra companies”. (※)
※ A “Cabinet decision” is the most formal form of government decision-making, agreed by all ministers (the Cabinet); at a company, the equivalent would be a unanimous board resolution.
In this article, we explain what the “Basic Policy” and the “Grand Design and Action Plan for a New Form of Capitalism” are, the trends in government economic policy behind them, what policies are being implemented, and, in light of this policy, what we at Zebras and Company are trying to do.
Improving the Hacker News Ranking Algorithm
In our opinion, the goal of Hacker News (HN) is to find the highest quality submissions (according to its community) and show them on the front-page. While the current ranking algorithm seems to meet this requirement at first glance, we identified two inherent flaws.
If a submission lands on the front-page, the number of upvotes it receives does not correlate with its quality, independent of submission time, weekday, or clickbait titles.
There are false negatives. Some high quality submissions do not receive any upvotes because they are overlooked on the fast-moving new-page.
Let’s look at these two issues in detail and try to confirm them with data and some systems thinking tools. All HN submissions are available on BigQuery, which we access via this Kaggle notebook. You can find the SQL queries for reproduction and further exploration in the Appendix.
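For context, a commonly cited approximation of HN’s front-page score (not from this article, and the production algorithm adds penalties and tweaks) divides a submission’s points by a power of its age:

```python
def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited approximation of the HN front-page score.
    Newer stories need far fewer points than older ones to rank equally."""
    return (points - 1) ** 0.8 / (age_hours + 2) ** gravity

fresh = hn_rank(points=10, age_hours=1)    # young story, few points
stale = hn_rank(points=100, age_hours=24)  # old story, many points
```

Because age sits in the denominator with an exponent larger than the one on points, a one-hour-old story with 10 points outranks a day-old story with 100, which is exactly why the fast-moving new-page can starve good submissions of their first upvotes.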
Unlocking The Power of Onchain Libraries
Throughout history, libraries have been at the center of revolutionary ideas and movements. Libraries empowered individuals and communities to challenge established norms and work toward social, political and intellectual change. Today, the growing polarity, resource inequality and crisis within and among human beings calls for revolutionary ideas. This call is likely why you are here — an intuitive knowing that blockchain transparency, community cooperation and disruption in information through AI are worth exploring in the adventure to create new paradigms.
From the Great Library of Alexandria, a beacon of enlightenment in the ancient world, to the modern, digitally-connected libraries of the 21st century, the power of libraries spans millennia. During ETHDenver 2023, a group of leaders in the cooperative creativity movement came together to share ideas and propose an on-chain library to exchange information and ideas that can help foster cooperative revolutionary change. This project became 0xalexandria.com. 0xAlexandria is an onchain library curating content on the vision for a cooperative economy — one that is communal and for the benefit of all.
Like the Library of Alexandria in ancient Egypt, this community is an intellectual powerhouse that brings together scholars, scientists and philosophers from all over the world. Members get first-look access to new publications, access to private chats with contributors and exclusive invitations to shape the narrative. The goal of this piece is to give a preview into the inspiration, process and key ideas that are emerging so that other community leaders and resource networks can benefit from the practical strategies presented.
The Revolution May Not Be Televised
As we strive for progress, we often overlook the lessons that the past holds for us. Like kids, we feel that earlier generations don’t understand us - as if all our problems are unique and require newfound innovation. We see life as temporal and novel when in reality, we live in a series of cycles - the same happenings repeating themselves over and over again.
I was reminded of this the other day when I rewatched one of my favorite films called Network. Released in 1976, it tells the story of a news anchor about to be let go after years on the job because of low ratings. This triggers psychological distress that causes him to lash out on air about the harsh realities of the world - inflation, recession, crime, conflicts with Russia (sound familiar?). Without giving too many spoilers (because I really want you to watch it), the film follows our protagonist to the underbelly of the media machine - showing his rise to stardom when the network profits off of his “refreshing perspectives” until things start to turn. Our world now faces an eerily similar avalanche of issues, paired with high media skepticism. Unsurprisingly, today only 7% of Americans indicate a high level of trust in news media.
Network called out nearly 50 years ago what has become common knowledge today. Traditional news media is no longer an engine for truth-telling. Their form of “journalism” prioritizes shareholders over people - using sound bites, sensationalist headlines, and narratives that serve political and corporate interests to keep people hooked and ratings high. It is no wonder we refer to “the media” in such a derogatory way as the harm it causes us is palpable.
Community Unchained
Media captures the social realities it digitizes. As we decompose our world into data, new media forms emerge and recompose the contour of the world they represent. With the advent of blockchain technology, the concepts of ownership, provenance, and social connections are now atomized and formalized, leading to the broad adoption and refinement of our latest form of media — Community Media.
“Whence did the wondrous mystic art arise,
Of painting SPEECH, and speaking to the eyes?
That we by tracing magic lines are taught,
How to embody, and to colour THOUGHT?”
-Marshall McLuhan, The Medium is the Massage (1)
Today, our virtual worlds are no longer just video games; they are movements built on ownership and accountability. While onchain social networks like DAOs and NFT fandoms are just beginning to emerge, Community Media is set to become the next dominant and widely produced media form, powered by our thirst for belonging.
Why Does Decentralized Media Matter?
Centralized media claims to be populated by myriad sources with diverse ideas and viewpoints. In reality, a mere handful of platforms and corporations control how information flows across the landscape.
And the phenomenon isn't restricted to much-maligned social networks. Google's algorithm does much the same thing, serving up content that fits the search giant's prevailing definition of "valuable" and "authoritative." Big media outlets have taken notice and mastered the art of playing the algorithm in their favor. Just 16 companies, acting as the puppet masters of 560 media brands, dominate search results to the tune of 3.7 billion collective clicks per month.
Employees of these media conglomerates are often expected to crank out large quantities of content that conform to whatever formula commands the most attention. Because big media companies profit mostly from advertising and affiliate commissions, maintaining high search traffic is imperative. The "big 16" are essentially engaged in an ongoing search war, with clickbait headlines and sensational content as the weapons of choice. Content that attracts the most visitors or goes viral becomes the new gold standard for production.
It's a familiar pattern. As author and legal scholar Tim Wu points out in The Attention Merchants, centralized media has used unappetizing—and often misleading—tactics to grab the public's attention since the days of penny papers in the late 1800s. Although the mechanism of distribution has evolved over the decades, the playbook has remained largely the same: A few big companies commission, produce, and syndicate the vast majority of media the public consumes. Locked in a perpetual battle for dominance and advertising dollars, they're driven to produce more of what draws people back. The endless cycle of imitation has little tolerance for originality and stifles media that exemplifies creativity, originality, and insight.
But a more insidious implication underlies centralized media's perpetuation of the banal: When a handful of platforms dominate the public's attention, they also dominate the public narrative. Such power can be exploited to manipulate minds and actions—with results that range from the ridiculous to the horrific.
Media is instrumental in shaping culture, and thus, behavior. Whoever controls the public media narrative wields the power to define and perpetuate cultural norms. The public is largely complacent toward or ignorant of the fact that they allow a handful of corporations to shape their ideas, perceptions, and preferences every time they open Facebook or do a Google search. Whether or not this worldview is beneficial is of little concern to media conglomerates as long as the money comes in.
App-Specific Rollups: A Trade-Off Between Connectivity and Control
Two years ago, app developers faced a fairly simple choice when determining where they wanted to deploy their application: Ethereum, Solana, Cosmos, or maybe a few additional layer-1 chains. Rollups weren’t yet operational, and few had ever heard the phrase “modular stack.” The differences between the L1s (throughput, fees, etc.) were stark and relatively easy to grasp.
Today, the landscape looks quite different. App developers are faced with a much larger array of choices: L1s, general-purpose rollups (both optimistic and zk), advanced IBC infra, rollup-as-a-service providers, appchains, and more. With more choices come more questions, including: whether teams should deploy to a general-purpose rollup or build an app-specific rollup; if they pick a general-purpose rollup, then which one; if they go the app-rollup route, then which SDK/rollup-as-a-service to use; which data availability layer to select; whether EigenLayer can help; how to think about sequencers; and if there will even be a colored orb emoji within Optimism’s Superchain ecosystem left if they choose to go the OP Stack route. It’s overwhelming.
To narrow down the set of questions, this piece will take the framing of an app already deployed on Ethereum that wants to scale within the Ethereum ecosystem. Consequently, the focus will be on the decision tree that app teams face when determining whether to launch their own rollup, my hypotheses around which types of apps are particularly well suited for this infrastructure, and when I think we might hit the tipping point in adoption.
What Was Digital Media?
The story of digital media’s rise and fall is a tragic one. Long ago, there was the age of Walter Cronkite, the “most trusted man in America.” Then came social, which added millions of new voices into the news-making crowd. A wave of internet-first media companies, such as Vox, BuzzFeed, Gawker, and VICE, entered to overthrow the old guard. They were fueled by crazy venture capital valuations and a hip, counter-cultural new aesthetic.
But in the last few years, these publications fell one by one. Stock prices crashed; companies went bankrupt. Writers were hit by layoff after layoff. The winners stayed rich: Facebook, Fox News, The New York Times. Content production assumed a barbell shape, with masses of user-generated content on one end and the Times’s big-budget reporting on the other. Everything in the middle seemed to wither away.
Yet this is also the shift that has given so many a career—tweets and blogs and sponsored Instagram posts—platforms that don’t require unpaid internships and New York connections for people to put their stories out. As Martin Gurri chronicled in The Revolt of the Public and Alan Rusbridger in Breaking News, the end of one model meant the rise of another.
I invited journalists Ben Smith and Taylor Lorenz to talk about the last decade of digital media—how they saw it happen, and what might come next. Ben and Taylor are two of the savviest media thinkers I know: they’ve both been tenured reporters, but have also adapted their careers seamlessly to the age of Twitter, Substack, and the journalist-as-influencer.
Tools for thought in your OODA loop
Boyd’s OODA loop is the diagram with which John Boyd explained his ability to win any dogfight in under 40 seconds. Its four stages:
Observe: sense your environment
Orient: make sense of your senses by constructing a world-model
Decide: translate your model into a plan
Act: do something to effect a change
Your actions cause direct changes to your environment, and ripple effects too. This feeds back into observation. A loop.
There are other loops in this diagram, too. Your mental model—how you orient—affects what you observe, and how you act. Through Boyd’s lens, we begin to see a dogfight as a cybernetic system.
Boyd developed the OODA loop framework to explain how fighter pilots win in a conflict. He drew from many unexpected sources—cybernetics, eastern philosophy, cognitive science, game theory, physics. The resulting framework turns out to have broad applicability beyond conflict. It’s no accident, for example, that the OODA loop is almost identical to the sense-plan-act loop of the robotic paradigm.
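The loop structure maps directly onto code. A toy sketch (my own, with made-up update rules) of the cybernetic feedback the article describes, where the world-model produced by orient filters the next observation:

```python
def ooda_step(env, model):
    # Observe: sensing is filtered through the current world-model.
    observation = env["signal"] + model["bias"]
    # Orient: fold the observation back into the world-model.
    model = {"bias": model["bias"] * 0.5 + observation * 0.5}
    # Decide: translate the model into a plan.
    plan = "advance" if observation > 0 else "hold"
    # Act: change the environment, which feeds the next observation.
    env = {"signal": env["signal"] - 1 if plan == "advance" else env["signal"]}
    return env, model, plan

env, model = {"signal": 3.0}, {"bias": 0.0}
history = []
for _ in range(4):
    env, model, plan = ooda_step(env, model)
    history.append(plan)
```

The same skeleton, with real sensors and actuators, is the sense-plan-act loop of the robotic paradigm the article mentions.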
ON #174: Bridges 🌉
Coverage on Synapse, Eth2 Bridges, and LayerZero.
Why is Quadratic Funding so Hard?
Quadratic Funding is a promising mechanism for allocating funds among projects that serve a public good.
In particular, one of the most promising domains of application for Quadratic Funding is retroactive funding for public goods, which tries to create the possibility that it could be profitable for companies to do good. You can read an excellent primer on that subject and about a project that seeks to implement it here.
The simple way that Quadratic Funding works is that if you donate $4 to a project that you like, you can receive $2 in bonus “matching funds” that come from a benevolent donor. If you donate $9 you’ll receive $3 in matching funds, and so on: for a donation of d, the total funding your project receives from your donation is d + √d.
However, this creates a really big problem, which I’ve talked about at length in this article, called The Bribery Problem. In short, if you’re donating to a project, you’d always be better off splitting up your donation so that it comes from many different accounts, since that way you get more matching funds. The easiest way to do this is to bribe people to donate on your behalf. As shown in that article, it’s ridiculously profitable for donors to do this. If it’s not intuitive why this is a good strategy, here’s an article by BlockScience that details ways in which Quadratic Funding can be attacked and defended.
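Under the simplified matching rule above (each account’s matching is the square root of its donation), the advantage of splitting is easy to check. A toy sketch, assuming that rule:

```python
import math

def matching(donation: float) -> float:
    """Simplified per-account matching rule from the text: sqrt of the donation."""
    return math.sqrt(donation)

def total_matching(total_donation: float, accounts: int) -> float:
    """Split one donation evenly across several accounts and sum the matching."""
    per_account = total_donation / accounts
    return accounts * matching(per_account)

# One $9 donation earns $3 in matching funds...
print(total_matching(9, 1))   # 3.0
# ...but the same $9 split across 9 accounts earns $9 in matching funds.
print(total_matching(9, 9))   # 9.0
```

In general, splitting d evenly across k accounts yields k·√(d/k) = √(k·d) in matching, which grows without bound as k increases—hence the incentive to bribe others to donate on your behalf.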
However, as is often the case with these sorts of defenses, the recommendation leaves much to be desired. For example, here the authors suggest that the way to solve the problem is to punish players who “collude optimally”.
Valuing Impact Certificates
Imagine a world where public goods - things like public health, education, pandemic preparedness, basic research, open spaces and national parks, roadways and highways, utilities, public transportation, fire fighting and police services, climate change mitigation and prevention, and open source software - are funded not by centralized entities but through a decentralized, dynamic process. This is the world that Impact Certificates (ICs) seek to create.
An IC is a reward given to social entrepreneurs who contribute to public goods. They essentially act as a form of retroactive funding. When someone helps create a public good, they receive an IC, which has some economic value. This means social entrepreneurship could be regular entrepreneurship. Someone could secure funding by promising an investor a share of the IC they will receive on completion of their public good project.
However, in practice, this doesn't work out as smoothly as it sounds. The main issues are:
ICs are illiquid, so where does their value come from?
Price discovery: what value should they have?
In crypto terms, ICs are often considered a 'shitcoin.'
The Mechanics of Index Payments
The key idea of an index payment is that the amount of each entry you receive is always proportional to the value of the entries in the payer’s wallet. You can test this by noticing that no matter how you change the quantity they pay you, the proportion of USD, Ethereum, and Bitcoin in the payment remains unchanged.
A common misconception is that Index Wallets allow you to either pay with an index payment, or remove your currencies individually. This is not correct. Index Wallets are constrained so that once a currency enters the wallet, it can only ever exit as part of an index payment. It’s permanently mixed.
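The proportionality rule described above can be sketched in a few lines. This is an illustrative toy, not a reference implementation of Index Wallets; the prices are assumed example values:

```python
def index_payment(wallet: dict, prices: dict, amount_usd: float) -> dict:
    """Pay `amount_usd` of value, drawn from each currency in proportion
    to that currency's share of the wallet's total value."""
    total_value = sum(qty * prices[cur] for cur, qty in wallet.items())
    payment = {}
    for cur, qty in wallet.items():
        share = (qty * prices[cur]) / total_value        # this currency's value share
        payment[cur] = share * amount_usd / prices[cur]  # units of `cur` to send
    return payment

wallet = {"USD": 500.0, "ETH": 1.0, "BTC": 0.01}
prices = {"USD": 1.0, "ETH": 2000.0, "BTC": 30000.0}   # assumed example prices
payment = index_payment(wallet, prices, 280.0)         # wallet is worth $2800; pay 10%
print({cur: round(v, 6) for cur, v in payment.items()})
# {'USD': 50.0, 'ETH': 0.1, 'BTC': 0.001}
```

Note that every payment sends the same fraction of each holding, so the proportions of USD, Ethereum, and Bitcoin in the payment match the wallet’s composition no matter what amount is paid—exactly the property described above.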
How not to solve governance
And yet, as I’ve already spoiled, I don’t know how to solve governance. It’s a somewhat difficult problem that we as humanity have yet to solve after several tens of thousands of years of experimentation. Instead, all I have to offer you are a handful of mechanisms that you’ve never seen before, tessellated in ways you haven’t imagined, to achieve emergent results that are tantalizing yet not quite sufficient. As we proceed, I’m going to describe some mechanisms to you and I’ll also describe the kinds of behavior I think they produce. As much as possible, I’ll try to restrain myself from justifying the mechanisms to you, but sometimes I just won’t be able to help myself. By the way, this is about mechanisms, so it’s going to require a hell of a lot of thinking energy, and most of the time we’ll be thinking about the math, then switching to thinking about the emotional experience of the people in the system. If you’re looking for a rah-rah manifesto or dollar signs, go speculate on a token or something.
Let’s begin. Governance is primarily about how a community sets policy. Let’s imagine you’re a person who is concerned about AI. You worry that AI is going to be hugely detrimental to humanity, and you believe that AI enablement research (like ChatGPT or Bing Search) should stop until at least we have better theories of how to ensure AI is safe and aligned with the goals of humanity. If you believe that today, what can you do to express that preference in terms of policy on the global stage? There’s lots you can do, you can protest, you can create interest groups, you can build coalitions, you can work on AI safety projects, you can try to influence policies or legislators, etc. What I’ll be offering here is one additional option to that list of ways you get your way.
The first mechanism we’ll enable is a market for policy. Let’s imagine there’s a policy that says, “We should stop AI enablement research.” The first thing we’ll set up is a way for you to pay money to influence the adoption of that policy. For simplicity, let’s consider the case where whichever side stakes the most money gets the policy enacted. So, if your side stakes $1m on pro and the opposing side only stakes $900k against, then the policy is immediately enacted. If later the opposing side stakes more, pushing it up to $1.1m against, then the policy automatically flips, and whatever fines or compulsion mechanisms were preventing AI enablement melt away. For now, we’ll imagine that these staking mechanisms are non-reversible: once you’ve staked, you’ve staked permanently.
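The staking rule above is simple enough to state as code. A minimal sketch, assuming exactly the simplified mechanism described (highest total stake wins; stakes are permanent; class and method names are my own):

```python
class PolicyMarket:
    """Toy market for one policy: whichever side has staked more total
    money decides whether the policy is enacted."""
    def __init__(self):
        self.pro = 0.0
        self.against = 0.0

    def stake(self, side: str, amount: float):
        # Stakes are non-reversible: there is deliberately no unstake method.
        if amount <= 0:
            raise ValueError("stakes must be positive")
        if side == "pro":
            self.pro += amount
        elif side == "against":
            self.against += amount
        else:
            raise ValueError("side must be 'pro' or 'against'")

    @property
    def enacted(self) -> bool:
        return self.pro > self.against

market = PolicyMarket()
market.stake("pro", 1_000_000)    # your side stakes $1m
market.stake("against", 900_000)  # opponents stake $900k
print(market.enacted)             # True: the policy is enacted
market.stake("against", 200_000)  # opponents push their total to $1.1m
print(market.enacted)             # False: the policy automatically flips
```

The policy’s status is re-derived from the stake totals on every read, which is what makes the flip automatic when the opposing side overtakes you.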
Criteria for Governance
Here I’ll lay out the criteria for a governance system I’d be impressed by. If you have managed to build this, or if you know of governance systems that already meet this standard, please share it! You can find me on Twitter as connormcmk and on Farcaster as nor.
Naturally, as these are my criteria, they’re highly opinionated.
We’re going to look at three categories of criteria for a governance system:
Principles: the properties of the system that you work to preserve
Domain: the kinds of decisions it can resolve
Tests: the real-world trials it has undergone
Index Wallets
The stark inequality in wealth distribution and the underfunding of essential public goods — ranging from environmental conservation and education to public health and international peace and security — present pressing challenges. In this article, we’ll explore a cryptoeconomic mechanism called Index Wallets that funds public goods through voluntary taxation. As a side effect of their unique funding method, Index Wallets also induce wealth-equalizing dynamics.
Index Wallets arise from two essential properties:
The ability to mint new, counterfeit-proof tokens
A payment mechanism called Index Payments (from which Index Wallets derive their name)
These properties can be implemented in software, and thereby permit adoption by anyone.
THE KEY TO NATURE’S INTELLIGENCE IS ARTIFICIAL INTELLIGENCE
In just four short weeks over March and April, LLMs became commoditised. Since Meta’s ‘leak’ of its foundation LLM, LLaMA, there has been a Cambrian explosion of open-source models and associated tooling built on LLaMA’s foundation. Start-ups building a moat anchored upon the strength of their algorithms now look more precarious than ever.
For tech founders today, focusing on building a proprietary and differentiated dataset is therefore even more important. This is because the public models can go incredibly deep in areas where there is a large, annotated and publicly available corpus. It's clearly not true to say ChatGPT is wide but shallow. Ask it for a medical diagnosis or about nuances of English vs American contract law, and you'll see it can go deeper and way, way faster than the nerdiest PhD (despite not always being 100% accurate). But bring it into a realm of sparse data, with limited pre-existing inference, and watch it react with a shrug emoji. The model remains the same but is hamstrung by the lack of data.
All this means there is a colossal opportunity for start-up building in areas where:
Data is poor and difficult to collect; and
If this data is collected and fed into the latest algorithms there are immense and commercially relevant problems that can be solved.
Nature is uniquely suited to this opportunity. Where life exists, chaos reigns. Detecting patterns in messy biological datasets and then making predictions based upon these patterns is a perfect use case for AI.
The commercial opportunity to leverage nature’s intelligence is immense. Humanity has been tweaking nature’s behaviour for millennia to produce food and materials. And now we’ll need more efficient ways to produce more food and more materials, alongside recovering biodiversity. Perhaps most significantly though we need to understand nature’s mechanisms at an ecosystem and planetary scale, thus enhancing our ability to mitigate and adapt to climate change.
Coordi-nations: A New Institutional Structure for Global Cooperation
Networked communications have enabled new ways for people to coordinate and to engage in collective action, achieving shared goals and upholding shared values. Meanwhile, existing governance institutions — including governments and nation states — are failing to keep up with the changes brought about by these networked technologies. In this essay, we elaborate upon the notion of “coordi-nations” as a new type of organizational structure that can foster cooperation at a (local and) global scale, through shared values and participatory decision-making. Coordi-nations are a new form of network sovereignty that spans traditional geographical boundaries. By harnessing the power of digital communities and modern information technologies, coordi-nations provide innovative solutions to complex global coordination challenges, by promoting cooperation and acknowledging interdependencies amongst transnational communities.
Regulating AI in the UK: three tests for the Government’s plans
It seems as if AI is everywhere you look right now – not only in new and emerging use cases across different business sectors, but implicated in every conversation about present and future societies.
Coverage of ‘foundation models’ that power systems like ChatGPT, the potential for job displacement, and the need for ‘guardrails’ is focusing public and political interest on how AI is regulated.
Against this noisy backdrop, jurisdictions around the world are publishing regulatory proposals for AI, and the UK is no exception. The UK Government is currently consulting on its AI Regulation White Paper, as well as passing the Data Protection and Digital Information (DPDI) Bill, which will reduce rather than enhance AI-relevant safeguards.
The White Paper is an important milestone on the UK’s journey towards comprehensive regulation of AI, and at Ada we have welcomed the Government’s engagement with this challenge.
Its proposals will shape UK AI governance for years to come, affecting how trustworthy the technology will be, and – when things go wrong – how well people are protected and how meaningfully they can access redress.
It adopts a much more distributed model for regulation than proposed elsewhere, creating a more challenging path to achieving these outcomes, while promising more proportionate governance that enables companies to innovate.
Ahead of the White Paper consultation closing on 21 June, we explain significant features of the UK Government’s proposals and how we intend to test them against three challenges: coverage, capability and criticality.
You're writing require statements wrong
TL;DR
Don't just write require statements for a specific function; write require statements for your protocol. Function Requirements-Effects-Interactions + Protocol Invariants or the FREI-PI pattern can help make your contracts safer by forcing developers to focus on protocol level invariants in addition to function level safety.
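The article’s pattern is about Solidity, but the idea is language-neutral: every state-changing call runs its Function Requirements, applies its Effects and Interactions, then re-checks a Protocol Invariant before returning. A toy sketch in Python (the `Vault` example and its invariant are my own illustration, not from the article):

```python
class Vault:
    """Toy FREI-PI illustration: function-level requirements plus a
    protocol-level invariant checked after every state change."""
    def __init__(self):
        self.balances = {}
        self.total_supply = 0

    def _check_invariant(self):
        # Protocol Invariant: the sum of balances must equal total supply.
        assert sum(self.balances.values()) == self.total_supply, "invariant broken"

    def deposit(self, user: str, amount: int):
        # Function Requirements (the "require statements")
        assert amount > 0, "amount must be positive"
        # Effects
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total_supply += amount
        # Protocol Invariant check before returning
        self._check_invariant()

    def withdraw(self, user: str, amount: int):
        assert 0 < amount <= self.balances.get(user, 0), "insufficient balance"
        self.balances[user] -= amount
        self.total_supply -= amount
        self._check_invariant()

v = Vault()
v.deposit("alice", 100)
v.withdraw("alice", 40)
print(v.total_supply)  # 60
```

The function-level checks guard each call’s inputs, but the invariant check is what catches a bug in any one function that would silently corrupt the protocol as a whole—the core of the FREI-PI argument.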
Why DeFi is Broken and How to Fix It, Pt 1: Oracle-Free Protocols
I love DeFi. The promise of permissionless payment protocols and an open financial system are what attracted me first to Bitcoin and then the wider world of crypto.
Reading an introduction to Uniswap in 2018 opened my eyes to the power of what was starting to happen in the part of the crypto world that would come to be known as DeFi (though I still prefer Open Finance).
But after years of repeated hacks and billions of dollars stolen, it’s reasonable for even the most ardent believers to question whether DeFi will ever be suitable for mainstream use, let alone to become a central piece of the global financial system.
Here’s the answer: it won’t… at least not in the way it’s currently being built.
We’ve invested heavily in security at Nascent. In 2020, we were the first investors to put a Tier 1 audit firm (OpenZeppelin) on retainer on behalf of our portfolio companies, to secure priority access to rigorous security reviews prior to launch. We were early backers of Spearbit, Code4rena, Macro, and Skylock.
We’ve also invested our time into creating tools for the industry. The Simple Security Toolkit has been forked nearly 100 times, and we recently announced the beta release of Pyrometer, an open-source tool for auditors and developers, which mixes symbolic execution, abstract interpretation, and static analysis.
Through these efforts, we’ve come to believe it’s not just a question of “trying harder” on the security front. That’s necessary, but not sufficient—industry-wide, the frequency and severity of exploits is at least two orders of magnitude above what might be considered acceptable levels for mainstream adoption.
In 2022, over $3.8B was stolen via crypto hacks, largely via exploits of DeFi protocols and bridges. While some exploits were due to disturbingly poor security posture, even protocols developed by well-regarded teams who followed current best-in-class processes were not immune.
If we want to see billions of people rely upon DeFi, we need to fundamentally rethink how protocols are designed and secured.
AI, THE COMMONS, AND THE LIMITS OF COPYRIGHT
In other words, the object of this appropriation is not copyrighted works but rather the “sum total of human knowledge that exists in digital, scrapable form”. This is a case of the paradox of open: it is open access to this digital commons that has enabled the creation of the current crop of generative ML models, and it is at this level that we will need to address ongoing appropriation and develop means of transferring some of the value created back to society.
Much of this digital commons consists of works that are free of copyright, openly licensed, or the product of online communities where copyright plays at best a marginal role in incentivising the creation of these works. This is another reason why the response to the appropriation of these digital commons cannot be based on copyright licensing, as this would unfairly redistribute the surplus to professional creators who are part of collective management entities and a small subset of those who created the digital commons.
So if this appropriation is “not accounted for by existing laws”, how shall it be dealt with? At this stage (and given the global scope of the question) it seems doubtful that (national) laws can deal with it.
Instead, we should look for a new social contract, such as the United Nations Global Digital Compact[4], to determine how to spend the surplus generated from the digital commons. A social contract would require any commercial deployment of generative AI systems trained on large amounts of publicly available content to pay a levy. The proceeds of such a levy system should then support the digital commons or contribute to other efforts that benefit humanity, for example, by paying into a global climate adaptation fund. Such a system would ensure that commercial actors who benefit disproportionately from access to the “sum of human knowledge in digital, scrapable form” can only do so under the condition that they also contribute back to the commons.
Goodbye & Final Reflections 👋
Good morning friends —
After a lot of reflection, I’ve decided it’s time for Ideamarket in its current form to close its doors.
Our official product costs money to run properly, and after nearly 6 months of financial struggles, it’s become unfeasible to maintain it at this time. There’s a chance we’ll be back up and running someday, but it seems prudent to manage expectations.
Here’s what you need to know:
The main website will remain live through February to make it as easy as possible to withdraw funds. After March 1, I can’t guarantee you’ll be able to use our website to access locked IMO, or to withdraw funds from earlier versions of Ideamarket (back to Feb 15, 2021). After March 1, you may need to call the smart contracts directly to retrieve funds. (Here’s the Guide to Selling Old Tokens & Unstaking.)
The Ideamarket Discord will remain open. Token holders are also invited to discuss, coordinate, and develop new projects at https://commonwealth.im/ideamarket/
I am seeking a new primary income stream outside of Ideamarket, so while I will be happy to advise community projects, I may not have much time, even though I still believe in Ideamarket’s approach.
We’ll keep our docs live at docs.ideamarket.io, so you can read about our product and philosophy.
Our code will remain open-source at github.com/ideamarket
A few concluding thoughts and predictions:
Tools
DAO Diplomat
The goal of this app is to help tokenholders use AI to source, diligence, and evaluate proposals in 90% less time, freeing up time spent on community governance for a more enjoyable governance experience.
everyname
Everyname's advanced protocol fetches wallet addresses and names from any blockchain name service.
Chainverse API
The Chainverse API provides unparalleled user insights about wallets, by connecting wallets with linked identities, categorizing wallets by Web3 interests and activities, and empowering users with free-text search.
ChatData
Chat with 2 million arXiv papers, powered by MyScale.
We provide the metadata columns below for querying. Choose a natural-language expression to describe filters on those columns.
For example:
What is a Bayesian network? Please use articles published later than Feb 2018 and with more than 2 categories and whose title like computer and must have cs.CV in its category.
What is neural network? Please use articles published by Geoffrey Hinton after 2018.
Introduce some applications of GANs published around 2019.
Quests
Easily spin up a page for your next thing. Gather support, broadcast updates, and celebrate your work with a single link
Events
Autonomous Ecologies
Its roots trace back to the ARPANET, a project by the U.S. DoD, created primarily as a tool for swift communication and knowledge sharing among universities and research institutions. As the internet evolved from its academic origins into a commercialized platform, user autonomy was often overlooked, leaving significant control in the hands of ISPs and major tech companies. Web3, i.e. the union of cryptocurrency with the internet, promised a more decentralized and equitable digital landscape that avoids the monopolistic and privacy-invasive practices of Web 2.0.
Yet, incessant legal obstacles, multi-million dollar hacks and the threat of pervasive surveillance have instilled skepticism about Web3’s vision of a better internet. While we could propose a Web4, Web5, or Web6 to counter these issues, we believe the root of our troubles may lie in the foundational infrastructure of the Web itself. Hence, we propose a new path to address the deeper problems of the Web as a whole. Our aim is not just to rethink network technologies, but to venture into unexplored territory: the Post-Web.
Accelerating Worker Ownership: A Strategy Session on Co-operative Development in the Digital Economy and Beyond
Co-op models have a marginal position in business education, the technology industry, and the popular imagination. In response, co-operators and their allies have created incubators, accelerator programs, and mutual-aid networks to support early-stage tech co-ops.
Join us for an online panel facilitated by co-op researcher-practitioner Emi Do that brings together presenters from several such projects: CoTech, Exit to Community Collective, Platform Cooperativism Consortium, SPACE4, Start.coop, UnFound Accelerator, and Union Cooperative Initiative. These projects advance democratic business formation and co-op theory-building, and they offer valuable lessons on the promises and challenges of accelerating worker ownership today.
This panel will explore goals, strategies, and dilemmas of co-operative development in the digital economy and beyond. It will also provide participants with an opportunity to connect with peers and allies.
Accelerating Worker Ownership is also a launch event for a new, in-depth analysis, Co-operatives, Work, and the Digital Economy: A Knowledge Synthesis Report, by Greig de Peuter, Gemma de Verteuil, and Salome Machaka.
Videos & Podcasts
Blockchain Radicals: How Capitalism Ruined Crypto and How to Fix It | Book Talk
Where I talk about my upcoming book published through Repeater Books on the Crypto Leftists discord as well as a Q&A session afterwards.
Find the book here: https://repeaterbooks.com/product/blo...
Over the last decade, blockchains and crypto have opened up a new terrain for political action. It is not surprising, however, that the crypto space has also become overrun by unscrupulous marketing, theft and scams. The problem is real, but it isn’t a new one. Capitalism has ruined crypto, but that shouldn’t be the end of it.
Blockchain Radicals shows us how this has happened, and how to fix crypto in a way that is understandable for those who have never owned a cryptocurrency as well as those who are building their own decentralised applications. Covering everything from how Bitcoin saved WikiLeaks to decentralised finance, worker cooperatives, the environmental impact of
Bitcoin and NFTs, and the crypto commons, it shows how these new tools can be used to challenge capitalism and build a better world for all of us. While crypto is often thought of as being synonymous with unbridled capitalism, Blockchain Radicals shows instead how the technology can and has been used for more radical purposes, beyond individual profit and towards collective autonomy.
Collaborative Finance: Credit clearing for collectively doing more with less capital
For this episode I spoke to Ethan Buchman (@buchmanster) and Tomaž Fleischman (@T_Fleischman) in person while at the Commons Hub in Austria from May 22nd to 28th for the Collaborative Finance event with the Crypto Commons Association, one of the projects part of the Breadchain Cooperative. Ethan is one of the co-founders of Cosmos and works at Informal Systems, a worker cooperative that focuses on building Cosmos infrastructure, where he works with Tomaž.
A big theme of the Collaborative Finance event was about the paper Tomaž co-wrote while working with Sardex about multilateral trade clearing setoffs, or giving the ability of credit clearing to everyone rather than it being just something banks get to do. During the interview we spoke about what happened at the event, the political implications of giving credit clearing to everyone, and how this type of system could be implemented in Cosmos.
ICYMI, I’ve written a book about, no surprise, blockchains through a left political framework! The title is Blockchain Radicals: How Capitalism Ruined Crypto and How to Fix It and it is being published through Repeater Books, the publishing house started by Mark Fisher, whose work influenced me a lot in my thinking.
The official release date is August 8th, 2023, but you can already pre-order the book here from Repeater
Health X Change | FIAT LUX Podcast #7
In this episode of FIAT LUX, we're joined by one of the founders of Health X Change, a revolutionary platform that's pushing the boundaries of medical research. Health X Change is built on the belief that patients should be compensated for the data they own, especially when it's being bought and sold for thousands of dollars without their knowledge.
Their mission is to create a system where patients can financially benefit from donating their data to medical research. They're not just about fair compensation, though. Health X Change is also committed to accelerating the development of life-saving therapeutics for the next generation.
Join us as we delve into the workings of Health X Change, discuss their vision for the future of medical research, and explore how they're changing the game by putting patients first.
Mini Series - DeJourno Ep. 04 by Eureka John | Web3 Incentivizing Journalism as a Public Good
In this episode, we focus on funding and journalism as a public good.
We start by defining it and later on exploring ad-driven revenue and how it affects the algorithms in journalism.
What are some tools we can use to help incentivize journalists to work outside of the ad model?
We want to build tools for journalists that are as pure and untainted by financial and business interests as possible, but we can envision ways that some of those tools might be spun out and adapted for non-journalism commercial ventures.
We explore how communities can use web3 to build their own decentralized, community-owned, not ad-driven, news outlets. In the long run, we're interested in experimenting with new business models for journalism and rebuilding communities in news deserts. This is a little more abstract, but we can envision a role for investors to build some of the base layer tools that will enable this process without interfering with the editorial process.
Guests:
Keith Axline - @kaxline - JournoDAO, Republic.io
Crystal Street - @cstreet - JournoDAO
Eric Mack - @ericcmack - JournoDAO
Spencer - @clinamenic - JournoDAO
Humpty Calderon - @humptycalderon - Crypto Sapiens, Orange Protocol
Tweets
Thank you for reading Distroid!
I hope you enjoyed this week’s issue.
Please send a message to ledgerback@gmail.com or @distroid_ if you have any questions, comments, or other feedback on this week’s newsletter or on Distroid in general.