The Density Index
My first Ethereum conference was 20,000 people. The Amsterdam tourist district was a drunken, crypto merch explosion. I wondered how many were setting themselves up to get pickpocketed.
I had a game plan. Find the venues where someone was putting in effort for a great experience — not big-name speakers or perks, but things like good food, communal spaces away from the megaphones. Spaces that were designed with experience and care. It's kind of like picking courses based on good teachers, not subjects, except with conference organisers.
After the talks, we've all seen small crowds form around speakers. That's where you find people really into a topic, waiting their turn to dig into something they really care about. The speaker's coming down from nerves, just wants a bottle of water, and the whole conference environment isn't built for this moment to be good. But it's where you find great conversations.
And if it's your topic too, it's where you find your people.
What struck me about the whole Blockchain Week experience was the exclusive events. Tickets sold out on the event pages. Bouncers at the door. Then you'd get inside and find a mostly empty room. Like clubs that keep a line outside to look full.
A group of us figured out a better way. Check which events your friends were at. Show up. They'd come to the door, maybe grab the organiser, who'd be happy to let in people doing real work in the community. No tickets needed.
But I had to be missing something. Crypto was a community that had been building for almost a decade. Economic design was what they did. And nobody had really tried to solve the economic design problem they were all living through every summer?
The design problem
Blockchain Weeks have a no-show problem. RSVPs are free options, organisers compete on hype, and the events that could really move people forward get washed away. Years later, I'm building zKal, and the Proof Projector architecture it uses gives you a privacy-preserving way to publish aggregate signals from private attendance proofs.
But it doesn't tell you what to aggregate. That turns out to be the harder question.
It's great actually. The constraint brings the design challenge into sharp focus.
When you can't see raw attendance history — because privacy — you have to ask: what's the thing across all this data that tells you something real?
It breaks down into two specific functions:
- Projector Function: What single, anonymous number comes out of an individual's private history?
- Aggregation Function: How do you combine those so they're useful to others in the community?
What the weight needs to capture
If you've run meetups or conferences, you've watched people move through stages.
First, someone's checking out the scene. Is this a place for me? They're scanning the topics, the vibe, the people. Big-name speakers help here — trustworthy authorities who give a high-level view of the current issues, a fast track to understanding what matters.
Then they start applying what they've learned. Their needs shift. They want practitioners who are various distances ahead of them on their specific path. Too many big names actually get in the way at this point. They become noise to a signal that's about practicalities — projects working and learning in public, people who've been building for six months or a few years and have hard-won things to share.
The crypto ecosystem is full of these people, but the conference structure very rarely surfaces them. I'd argue this is one of the big reasons we have a chasm between the many with deep interest and the few succeeding projects. (The other is over-funding kills projects, but that's another thing altogether.)
On the other side, you get the committed communities. Specialised workshops, cohort programs, or just small rooms of people around hands-on practical topics — seemingly niche but major unlocks for anyone in that space.
A good weight function should reflect where someone is in this progression, and match them to events where they'll get the most out of it.
The formula
When someone proves eligibility for a ticket, the proof carries properties derived from private inputs: recency, event type, event size. Enough to generate a single weight. Not unique, not traceable.
Only the weight enters the aggregate layer. The person doesn't.
participant_weight = recency × trust × size_normalization
Recency uses exponential decay — weight halves every 12 months since attendance. Recent engagement matters more, but older participation still contributes. Nobody gets permanently locked in or locked out.
Trust reflects how often a prior event is cited as an access credential by other events in the ecosystem. It's bounded between 0.33 and 0.66 on an S-curve — starts increasing after 2 independent citations, saturates after 6. This prevents any single anchor event from monopolising reputation while rewarding events the ecosystem organically values.
Size normalization is the inverse of the prior event's RSVP count. Smaller, more focused gatherings carry stronger signal.
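To make the three factors concrete, here's a minimal Python sketch of the weight function. The 12-month half-life, the 0.33–0.66 bounds, the 2-to-6 citation range, and the inverse-RSVP normalization come from the description above; the S-curve's midpoint and steepness are illustrative assumptions of mine, not the production values.

```python
import math

def recency_factor(months_since: float) -> float:
    # Exponential decay with a 12-month half-life:
    # weight halves every 12 months since attendance.
    return 0.5 ** (months_since / 12.0)

def trust_factor(citations: int) -> float:
    # Logistic S-curve bounded between 0.33 and 0.66:
    # rises after ~2 independent citations, saturates near 6.
    # Midpoint (4) and steepness (1.5) are illustrative choices.
    s = 1.0 / (1.0 + math.exp(-1.5 * (citations - 4)))
    return 0.33 + (0.66 - 0.33) * s

def size_normalization(rsvps: int) -> float:
    # Inverse of the prior event's RSVP count: smaller,
    # more focused gatherings carry stronger signal.
    return 1.0 / max(rsvps, 1)

def participant_weight(months_since: float, citations: int, rsvps: int) -> float:
    return recency_factor(months_since) * trust_factor(citations) * size_normalization(rsvps)
```

Only the output of `participant_weight` would cross into the aggregate layer; the inputs stay private.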
The event's public score is:
Density Index = mean(participant_weights)
The Density Index is a single public number per event, visible in realtime on the calendar. It tells you how likely this event is to have a high density of people who actually care about its topic.
That's the model I started with. Grounded in years of running meetups and conferences, encoded into math. The question was whether the math would agree with the experience — and where it would break.
Breaking it
I vibe-coded a scenario simulator and threw two years of blockchain week event listing data at it. No historical attendance records existed, only event listings — so the exercise was about ranging assumptions across scenarios and looking for where the model fell apart. Each run would push a different set of permutations: event hosts playing the Density Index game, and participants reacting to it.
I kept running it, scanning for outcomes that looked wrong or broke free from what the incentives were supposed to do.
Running it again and again and again.
Curation
The first thing that broke was curation. This was broadly expected. It's the obvious way to game the system.
Hosts that want to attract big crowds can select many qualifying communities for early access — cherry-pick attendees from the biggest or most exclusive events. The simulations showed an optimal range of 3–4 relevant intake events, with an inflection point around 5. Go beyond that and you're not curating, you're harvesting.
So the Density Index gets a curation penalty — an S-curve centered at 5 listed qualifying events. The farther past that point, the steeper the cost.
Density Index = mean(participant_weights) × curation_penalty
The modeling was about finding the right shape for the penalty — where to set the inflection point, how steep to make the curve.
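One possible shape, sketched in Python: a decreasing logistic in the number of listed qualifying events, near 1.0 in the 3–4 range and inflecting at 5. The steepness value here is my assumption, standing in for whatever the modeling settled on.

```python
import math

def curation_penalty(qualifying_events: int,
                     center: float = 5.0, steepness: float = 2.0) -> float:
    # Decreasing logistic: close to 1.0 for 3-4 intake events,
    # 0.5 at the inflection point (5), approaching 0 beyond it.
    # The steepness is an illustrative assumption.
    return 1.0 / (1.0 + math.exp(steepness * (qualifying_events - center)))
```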
Collusion
The curation fix surfaced a deeper problem.
A related event adds weight to yours through the trust factor — but that relationship can be faked. Two organisers list each other as qualifying access events, and points go up for both.
To understand why this is tricky, you need to see how the crypto events ecosystem actually works. There are natural cliques. Polkadot events have their specific crowd. The ZK chains are kind of the same people, competing with each other but sharing a core community. DAOs and the pro-social funding crowd are another cluster. These in-groups form organically around shared technical interests. Some of that is healthy. It's how communities cohere, but the exclusiveness also makes them atrophy.
Separate from the cliques, there's a small network of event marketing people who work across the better-funded projects. They co-promote constantly, and it produces a sameness across events — similar lineups, similar sponsors, similar vibes. This creates a form of de facto collusion, horizontal integration for conference visibility. Not great, but it's how the conference industry works.
Then on top of both layers, the Density Index introduces a new game: farming points. Mutual cross-listing for free score. Vouching rings. Circular amplification.
The model needed to reward two things at once: a returning core community (which shows your event is good and your community is strong enough that attendees will find great conversations easily) and genuine openness — pulling in diverse groups based on topic-interest rather than who-you-know access. True inclusiveness, not in-grouping.
Then it needed to penalise the gaming layer without punishing the organic collaboration underneath.
The answer was a steep anti-collusion penalty when two events cite each other above 40% mutual overlap.
Density Index = mean(participant_weights) × curation_penalty × anti_collusion_penalty
As an event host, you can collaborate broadly. But the Index rewards you for having a distinct community core and for drawing in people from genuinely different communities. Not for trading favours with three friends to look exclusive.
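A sketch of how that penalty could work. The 40% threshold is from the model above; the Jaccard-style overlap metric and the exponential decay rate are my assumptions for illustration.

```python
import math

def mutual_overlap(core_a: set, core_b: set) -> float:
    # Jaccard overlap between two events' attendee cores
    # (assumed metric for "mutual overlap").
    if not core_a or not core_b:
        return 0.0
    return len(core_a & core_b) / len(core_a | core_b)

def anti_collusion_penalty(overlap: float, threshold: float = 0.40) -> float:
    # Full weight up to 40% mutual overlap, then a steep
    # exponential penalty (the decay rate is illustrative).
    if overlap <= threshold:
        return 1.0
    return math.exp(-10.0 * (overlap - threshold))
```

Below the threshold, organic collaboration costs nothing; above it, the score drops off fast enough that a vouching ring loses more than it gains.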
Scarcity and sorting
The penalties had a side effect I noticed when I looked at the scenarios qualitatively.
When early access is limited — for both hosts granting it and participants earning it — surfacing the right events increased demand for them, and scarcity followed the demand. Anchor events got more exclusive too. And participants started self-selecting into topics that actually mattered to them.
This maps to the progression I described earlier. The newcomer checking out the scene gravitates toward big-name events — broad, authoritative, good for orientation. But someone who's been applying skills for a while, who needs practitioners and practical work, starts gravitating toward the focused rooms where those people are. The Density Index was matching the-events-you-want-now with the-events-you-can-get-into-first.
The model was shifting high scores away from large general events toward focused rooms with strong community stickiness — with the exception of anchor events that the whole ecosystem genuinely draws from. The design was encoding the sorting that conferences have never managed to produce: people finding the rooms that match where they actually are, not where the hype tells them to be.
Whether this plays out in practice is the experiment. But the simulations showed the mechanism — expose quality, distribute scarcity, and people sort themselves toward depth over time.
Small samples
One practical problem: early-stage events with fewer than about 10 signups were wildly volatile. A few strong attendees could spike a score. A few early signups with no history could crater it.
The fix is Bayesian shrinkage toward the ecosystem mean for events under 30 RSVPs — a weighted blend of the event's raw Density and the network baseline, stabilising scores until there's enough data to trust them.
Density Index = mean(participant_weights) × curation_penalty × anti_collusion_penalty × bayesian_shrinkage_factor
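A minimal sketch of the shrinkage blend, assuming the weight on the raw score ramps linearly with RSVP count up to the 30-RSVP threshold (the linear ramp is my assumption; other shrinkage schedules would work too).

```python
def shrunken_density(raw_density: float, rsvps: int,
                     network_baseline: float, full_trust_at: int = 30) -> float:
    # Weighted blend of the event's raw Density Index and the
    # ecosystem mean: the baseline dominates with few RSVPs and
    # the raw score takes over as the sample grows.
    w = min(rsvps / full_trust_at, 1.0)
    return w * raw_density + (1.0 - w) * network_baseline
```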
Without it, your newest events will be the noisiest, with the least trustworthy scores. And we want the opposite: to help people find the small but good events that are right for them.
Those are where the strongest connections and relationships form.
The new game this creates
Right now, event organisers play one game: maximise RSVPs, compete on hype. The currency is attention. Free drinks, big-name speakers, aggressive cross-promotion with anyone who'll have you.
The Density Index creates a second game. Your event has a public score — a signal of whether your attendees actually show up, how strong your community core is. That score gets you featured on the calendar. That score tells someone with five choices to pick you.
Returning members become your most valuable asset. You start winning by focusing on building your own community snowball effect, which means serving them better over time.
It shifts the intake goals towards which communities you invite, not just how many people you get through the door.
The cynical version of this game still exists. A competitive event will notice that pulling their strongest competitors' community members spikes their weight. Two friendly organisers will notice that mutual cross-listing earns free points.
But I'm hoping we can still tip them towards a better one.
When your score compounds with genuine retention — people who came last time, brought a friend, came back again — you stop optimising for footfall and start optimising for your community members' success. You want the kind of attendee who'll be building on what they learned six months later. Who'll introduce two people who end up collaborating. Who'll come back because last time mattered. Because when you have a room full of those people, your newcomers meet the people they need to succeed for themselves.
That's a different conference than the one everyone's been building. It's a different conference week than anyone's ever seen.
Whether the score makes enough organisers play the second game to shift the ecosystem — or whether the edge cases eat it first — is the question worth finding out.