I'll spell out a concrete toy version:
There are two people in the world: Alice and Bob. They have unequal levels of anthropic measure/reality fluid (one 95%, one 5%). You are Alice. You can steal Bob's pie. Should you?
Behind the veil of ignorance it's good to transfer utility from the person with less reality fluid to the person with more reality fluid. But who's the one with more reality fluid, Alice or Bob? It's probably Alice! How do you know? Because you're Alice! Steal that pie, Alice!
I'm reminded of the egotistical Copernican principle: "I'm a typical observer in the universe" is equivalent to "Typically, observers in the universe are like me".
People have thought about it, but I also don't know of anything extensive being written. Counting discretely whether a consciousness "exists once" or "exists twice" doesn't seem like the right take to me. What happens if you cut the circuits in your CPU into 10 pieces that each do the same computation independently -- do you get 10 times as many conscious beings? (A possible solution to this puzzle is that the "anthropic measure" of a computation has a multiplicative factor of the number of bits the computation erases, which would mean that reversible computers aren't conscious.)
[EDIT] I'll mention that reversible computations, even if not part of a quantum computer, are quantumly weird. We're not sure about much in anthropics, but one thing we are confident in is the Born rule. Whatever your interpretation of quantum mechanics is, it probably involves decoherence. And reversible computations don't decohere.
My bigger point is I think people are doing this:
Matthew Yglesias wrote an article about how vaccine policy is the most shocking thing Trump has done. If you go by gut feel, that might seem like a reasonable guess. If you go by math, I don't think it's close to passing a fermi estimate[1].
Addressing your comment: yes, the US population is twice as high now as in the 1950s, and that's a good point. But the case fatality rate for diseases like measles is >2 times lower in 2026 hospitals than in 1950s hospitals (1950s hospitals had few ventilators). City densities have not increased since the 1950s (we are more suburban), but that doesn't matter for measles because its R_0 is so high that ≈nobody escaped measles in the 1950s, even in rural areas. If we ignore all of that and say that at 90% vaccination 2,000 people die per year, 4 times as many as in the 1950s with 0% vaccination, that's still less than one tenth the impact of the smoking public-health policy changes of the 1990s. I don't guarantee I'm correct on all the math and facts here, but I think the conclusion is overdetermined.
In the 1950s, with a 0% vaccination rate, measles caused about 400-500 deaths per year in the US. Flu causes about 20,000 deaths per year in the US, and smoking perhaps 200,000. If US measles vaccination rates fell to 90% and we had 100-200 deaths per year, that would be pointless and stupid, but in public-health terms the anti-smoking political controversies of the 1990s were >10 times more impactful.
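To make the Fermi comparison explicit, here's a toy back-of-envelope sketch using only the rough figures quoted above (order-of-magnitude estimates from the text, not precise statistics; the 90%-coverage measles number is the hypothetical scenario, not data):

```python
# Rough annual US death figures from the estimates above (order of magnitude only).
measles_1950s = 450        # ~400-500/yr in the 1950s at 0% vaccination
measles_90pct = 150        # hypothetical ~100-200/yr if coverage fell to 90%
flu = 20_000               # typical annual flu deaths
smoking = 200_000          # smoking-attributable deaths

# Even the hypothetical measles scenario is a small fraction of these baselines.
print(flu / measles_90pct)      # flu is >100x the measles scenario
print(smoking / measles_90pct)  # smoking is >1000x
```

The point of the sketch is that the conclusion survives generous rounding: you can multiply the measles scenario by several-fold and it still doesn't approach the flu baseline, let alone smoking.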
I think your analysis is incorrect.
Well, it's kinda true, right? Ontogeny recapitulates phylogeny (developing embryos look like worms, then fish, then amphibians -- they mirror the path of evolution). That's because it's easier for evolution to add steps at the end than to change steps at the beginning. It happens with computers too -- modern Intel and AMD chips still start up in 16-bit real mode.
In the coalition of genes that make it into a gamete, newer genes support the older genes, but not vice versa. The genes that control apoptosis (p53 etc.) are obligate mutualists -- apoptosis genes support particular older genes, but older genes don't support apoptosis genes in particular.
A: It's bizarre that "open" would be the opposite of "closed" everywhere except this one term.
The protest of topology students everywhere.
Make sure the people on the board of OpenAI were not catastrophically naive about corporate politics and public relations? Or, make sure they understand their naïveté well enough to get professional advice beforehand? I just reread the press release and can still barely believe how badly they effed it up.
It amuses me that ElectionBettingOdds.com is still operational and still owned by FTX Trading Ltd.
I think it is similar to option 4 or 6?
Yep! With the addendum that I'm also limiting the utility function by the same sorts of bounds. Eliezer in Pascal's Muggle (as I interpret him, though I'm putting words in his mouth) was willing to bound agents' subjective probabilities, but was not willing to bound agents' utility functions.
Why is seconds the relevant unit of measure here?
The real unit is "how many bits of evidence you have seen/computed in your life". The number of seconds you've lived is just something proportional to that -- the Big Omega notation fudges away the proportionality constant.
Back in the GOFAI days, when AI meant A* search, I remember thinking:
Now transformers appear to be good at System 1 reasoning, but computers aren't better than humans at everything. Why?
I think it comes down to:
Computers' System 1 is still wildly sub-human at sample efficiency; they're just billions of times faster than humans
LLMs work because they can train on an inhuman amount of reading material. When trained on only human...
Suppose there were some gears in physics we weren't smart enough to understand at all. What would that look like to us?
It would look like phenomena that appear intrinsically random, wouldn't it? Like, imagine there were a simple rule about the spin of electrons that we just. don't. get. Instead of noticing the simple pattern ("Electrons are up if the number of Planck timesteps since the beginning of the universe is a multiple of 3"), we'd only be able to figure out statistical rules of thumb for our measurements ("we measure electrons as up 1/3 of the time").
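A toy illustration of the idea (the rule and the 1/3 figure are just the hypothetical from the text): a perfectly deterministic spin rule that, to an observer who can't see the timestep counter, is indistinguishable from "up with probability 1/3":

```python
# Hypothetical hidden rule: spin is "up" iff the Planck-timestep count
# is a multiple of 3. Fully deterministic -- but an observer blind to t
# can only recover the statistical rule of thumb.
def spin(t: int) -> str:
    return "up" if t % 3 == 0 else "down"

measurements = [spin(t) for t in range(30_000)]
frac_up = measurements.count("up") / len(measurements)
print(frac_up)  # 1/3: all the "blind" observer ever learns
```

The observer's best theory bottoms out at a probability, even though nothing random is happening underneath.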
My intuitions conflict here. On the one hand, I totally expect there to be phenomena in physics we just don't get. On the other hand, the research programs you might undertake under those conditions (collect phenomena which appear intrinsically random and search for patterns) feel like crackpottery.
Maybe I should put more weight on superdeterminism.
Epistemic status: lukewarm take from the gut (not brain) that feels rightish
The "Big Stupid" of the AI doomers of 2013-2023 was that AI nerds' solution to the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs". Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche. When the public turned out to be somewhat receptive to the idea of regulating AIs, doomers were unprepared.
Take: The "Big Stupid" of right now is still the same thing. (We've not corrected enough.) Between now and transformative...
"When you talk about the New York Times, rational thought does not appear to be holding the mic"
--Me, mentally, to many people in the rationalist/tech sphere since Scott took SlateStarCodex down.
I think this is weirder than most anthropics. Different levels of reality fluid in non-interacting worlds? Great. But if Alice and Bob are having a conversation, or Alice is stealing Bob's pie, they're both part of a joint, interactive computation. It's a little weird for one part of a joint computation to have a different amount of anthropic measure than another part of a computation.[1]
Like we can stipulate arguendo that it's anthropically valid for Elon Musk to think "I'm Elon Musk. Much of the lightcone will depend on me. The matrix overlords will simulate me, Elon Musk, thousands of times more, and make me a thousand times more real, than any of...