I'm completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?
If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?
Also "standard model" doesn't mean what you think it means and "unpleasant possibility" isn't an argument.
the most important adaptation an ideology can make to improve its inclusive fitness for consumption by the human brain is to
1 is accomplished by making the ideology rest on a priori claims. everything that rests on top of that claim can be perfectly logical given the premise. since most people don't examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the "mor...
The data you point to only seem to suggest the universe is large; how do they also suggest it "is large relative to the space of physical possibilities"? The likelihood ratio seems pretty close as far as I can see.
With steven, I don't see how, on your account, any of your actions can in fact affect the "proportion of my future selves to lead eudaimonic existences". If people in your past couldn't affect the total chance of your existing, how is it that you can affect the total chance of any particular future you existing? And how can there be a differing relative chance if the total chances all stay constant?
Steven, I call the little continuous dials the "amount of reality-fluid" to remind myself of how confused I am.
"Unpleasant possibility" isn't an argument but I didn't feel like going into the rather complex issues involved (probability of UnFriendly AI running ancestor simulations, how many of them, versus probability of Friendly AI, versus probability of hitting the Unhappy Valley with a near-miss FAI or a meddling-dabbler AGI trained on smiling faces, versus probability of inhuman aliens creating minds that we care about, plus going into the issues of QTI).
Nazgul, you can act swiftly to capture all resources in your immediate vicinity regardless of whether you plan to share them out among few or many individuals.
Robin, spatial infinity would definitely be large relative to the volume of physical possibilities (infinite versus finite). With many-worlds and a mangling cutoff... then not every physical possibility would be realized, but I would expect most possible babies would be. All the babies worth making could be duplicated many times over among the Everett branches of all moral civilizations, even if any given branch kept their populations low and living standards high. Does it look different to you?
Most of the concepts here are ethical. Whether some contraption has the same personal identity as you do, and whether it's good to have that contraption copied/destroyed, is a moral question, in a case when the unnatural concept of what's right gets extended to very strange situations. Whether we cut this question in terms of personal identity or patterns of elementary particles is a matter of cognitive algorithm used to determine the decision. It doesn't matter whether an upload is called "the same person" as its biological preimage, it only mat...
Eliezer, our data only show that the universe looks pretty flat, not that it is exactly flat. And it could be finite and exactly flat with a non-trivial topology. On whether all babies are duplicated in MWI, it seems to depend on exactly what part of the local physical state is required to be the same.
Vladimir, many of these anthropic-sounding questions can also translate directly into "What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?" If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question "What should I expect to see happen next?" or, even worse, "Why am I seeing something so orderly rather than chaotic?" For example, saying "I only care about people in orderly situations" does not cut it as an explanation - it doesn't seem like a question that I could answer with a utility function.
I have not been able to dissolve "the amount of reality-fluid" without also dissolving my belief that most people-weight is in ordered universes and that most of my futures are in ordered universes, without which I have no explanation for why I find myself in an ordered universe and no expectation of a future that is ordered as well.
In particular, I have not been able to dissolve reality-fluid into my utility function without concluding that, by virtue of carin...
Eliezer, I don't think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question "to what degree is a structure (e.g. a mind-history) implemented in the universe" and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is because it's much easier to specify an ordered ...
and where I just said "universe" I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.
I was going to make about the same objection steven makes -- if you take this stuff (MWI, anthropic principle, large universes) seriously as a guide to practical, everyday ethical decision-making, it seems to lead inexorably to nihilism -- no decision you make matters very much. That doesn't sound at all desirable, so my instinct is to suspect that there is something wrong either with the physics ideas, or (more likely) with the way they are being applied. But maybe not! Maybe nihilism is valid, but then why are we bothering to be rational or to do any...
"It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular."
This doesn't make sense to me. A superintelligence could:
- create a semi-random plausible human brain emulation de novo; whatever this emulation was, it would be the continuation of some set of human lives.
- conduct simulations to explore the likely distribution of minds across the multiverse, as wel...
Carl, that assumes QTI, i.e., no subjective conditional probability ever contains a Death event. Things do get strange then.
Eliezer: I'm not sure you'd really get many interference effects between indistinguishable Hubble volumes.
What I mean is you'd need some event that has in its causal history stuff from two "equivalent" Hubble volumes, right?
Otherwise, well, how would any nontrivial interference effects related to the indistinguishability between multiple Hubble volumes manifest? Configuration space isn't over the Hubble volumes but over the entirety of the universe, right?
I still see no adequate answer to the question of how you can change P(A|B) if you can't change P(A) or P(B). If every possible mind exists somewhere, and if all that matters about a mind is that it exists somewhere, then no actions make any difference to what matters.
The idea is that you can't change whether a mind exists but you can, possibly, change how much of it exists, or perhaps, how much of different futures it has. By multiply instantiating it? I guess so. It doesn't seem to make much sense, but if I don't presume something like this, I have to weight Boltzmann brains the same as myself.
I'm not trying to rest this argument on the details of the anthropics. Something more along the lines of - in a Big World, I don't have to worry as much about creating diversity or giving possibilities a chance to exist, rel...
Eliezer, it seems you are just expressing the usual intuition against the "repugnant conclusion", that as long as the universe has a lot more creatures than are on Earth now, having even more creatures can't be very important relative to each one's quality of life.
But in technical terms if you can talk about how much of a mind exists, and can promote more of one kind of mind relative to another, then you can talk about how much they all exist, and can want to promote more minds existing to a larger degree.
Well, this is morality we're talking about, right? So in that case we should ask ourselves what we want.
Let's say that there are already 10^10^20 people out there, and you're suddenly blessed with a thousand times the resources. Would you rather have 10^(10^20 + 3) people in existence, or raise the standard of living by a factor of a thousand?
To look at it another way, let's say that you recently glanced up out of the corner of your eye and saw a dust speck. I have a thousand units of resource. Would you prefer that I simulate a thousand different versions of Robin who saw the dust speck in slightly different locations in a 10 x 10 x 10 grid, or would you rather have a thousand times as much money?
For me, the value of creating new existences is linked to their diversity; as you create more people, you run out of diversity, and so it becomes more important to create the best people rather than to create new people.
Suppose that Earth were the only planet, the only branch, and the only region in all of existence. Then we might want to have mathematicians share all possible developments with each other, in order to prevent them from duplicating each other's work and let them prove...
"So in that case we should ask ourselves what we want."
Eliezer,
The standard problem is that people have incoherent preferences over various population scenarios. They prefer to substantially increase the population in exchange for a small change in QOL, but they reject the result of many such tradeoffs in sequence. Critical-level views, or ones that weight both QOL and total population independently, all fail to resolve this.
Carl is right; this is a minefield in terms of misleading intuitions. Also, there is already a substantial philosophy literature dealing with it; best to start with what they've learned.
Eliezer:
Vladimir, many of these anthropic-sounding questions can also translate directly into "What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?" If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question "What should I expect to see happen next?" or, even worse, "Why am I seeing something so orderly rather than chaotic?" For example, saying "...
I'm familiar with Parfit's Repugnant Conclusion, and was actually planning to do a post on it at some point or another, because I took one look and said "Isn't that just scope insensitivity?" But I also automatically translated the problem into Small World terms so that new people were actually being brought into existence; and, in retrospect, even then, visualized it in terms of a number of people small enough that they could have reasonably unique experiences (that is, not a thousand copies of Robin Hanson looking at a dust speck in slightly different places).
With those provisos in place, the Repugnant Conclusion is straightforwardly "repugnant" only because of scope insensitivity. By specification, each new birth is something to celebrate rather than to regret - it can't be an existence just marginally good enough to avoid mercy-killing after being born, with the disutility of the death taken into account. It has to be an existence containing enough joys to outweigh any sorrows, so that we celebrate its birth. If each new birth is something to celebrate, then the "repugnance" of the Repugnant Conclusion is just because we're tossing the thousand...
I'm just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It's nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that "it all adds up to normality" involved here, even when it's not clear what 'normality' means. When one doesn't decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I'm also not at all sure how certain we should be of a big universe, but personally I don't feel very confident of it. I'd say it's the way to bet, but I'm not sure at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.
I confessed myself confused! Really, I did! But even being confused, I've got to update as best I can. In a sufficiently large universe, I care more about better lives and less about creating more people. Is that really so complicated?
You might be interested in the last section of Motion Mountain, the free online physics textbook. It presents absolute limits for various measures of the universe, derived from quantum mechanics and general relativity. It appears that we live in a finite universe, though all of this stuff is pretty speculative.
I find it suspicious that people's preferences over population, lifespan, standard of living, and diversity seem to be "kinked" near their familiar world. A world with 1% of the population, standard of living, lifespan, or diversity of their own world seems to most people a terrible travesty, almost a horror, while a world with 100 times as much of one of these factors seems to them at most a small gain, hardly worth mentioning. I suspect a serious status quo bias.
Robin,
Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one's culture defines it through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn't make them wrong. Moral education dedic...
Robin, I think I'm being consistent in caring about lifespan, standard of living, and diversity while not caring about population. (Diversity will look like concern for population but it will run into diminishing returns; still, if our Earth were the only civilization, then indeed there would be lots of experiences as-yet unrealized and the diversity motive would be strong. In other words, I'd consistently want a hundred times as much diversity as what we see in the immediate world around us.)
Suppose that instead of talking about people, we were just tal...
Not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.
Since I probably don't care about abstract existence of music, but about experiencing music, this is correct for music for the wrong reasons, namely limited attention bandwidth. Analogy seduces, but doesn't seem to carry over...
in a Big World, I don't have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients.
Can't say fairer than that.
Eliezer, given the proportion of your selves that get run over every day, have you stopped crossing the road? Leaving the house?
Or do you just make sure that you improve the standard of living for everyone in your Hubble Sphere by a certain number of utilons and call it a good day on average?
Eliezer, you know perfectly well that the theory you are suggesting here leads to circular preferences. On another occasion when this came up, I started to indicate the path that would show this, and you did not respond. If circular preferences are justified on the grounds that you are confused, then you are justifying those who said that dust specks are preferable to torture.
it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".
That's an interesting intuition, but one that I don't share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian "Each to count for one and none for more than one" principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the num...
I'm finding Eliezer's view attractive, but it does have a few counterintuitive consequences of its own. If we somehow encountered shocking new evidence that MWI, &c. is false and that we live in a small world, would weird people suddenly become much more important? Did Eliezer think (or should he have thought) that weird people are more important before coming to believe in a big world?
I think many value the quality of life of their friends and loved ones more than they value hypothetical far-future abstractions. This has to do with evolution's impact on psychology - and doesn't have much to do with how big the universe is.
Eliezer, whenever you start thinking about people who are completely causally unconnected with us as morally relevant, alarm bells should go off.
What's worse though, is that if your opinion on this is driven by a desire to justify not agreeing with the "repugnant conclusion", it may signify problems with your morality that could annihilate humanity if you give your morality to an AI. The repugnant conclusion requires valuing the bringing into existence of hypothetical people with total utility x by as much as reducing the utility of existing peop...
Eliezer, also consider this: suppose I am a mad scientist trying to decide between making one copy of Eliezer and torturing it for 50 years, or on the other hand, making 1000 copies of Eliezer and torturing them all for 50 years.
The second possibility is much, much worse for you personally. For in the first possibility, you would subjectively have a 50% chance of being tortured. But in the second possibility, you would have a subjective chance of 99.9% of being tortured. This implies that the second possibility is much worse, so creating copies of bad expe...
You shouldn't waste your time figuring out how to act in an expanding multiverse, as opposed to a simple, single and unitary world. The problem of how to act and live even in the latter case is tough enough. Conditioning your choices on the former perspective is trying to think like a god, when you're in fact an animal.
Ever since I realized that physics seems to tell us straight out that we live in a Big World, I've become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.
I don't like that reasoning. If you create an interesting person here, in our hubble volume, their interestingness can reflect back to you. The other "copies" 10^(10^50) or so light years away will never have anything to do with you.
I noticed you changed units between the average distance of another you and the average distance of another identical universe. That seems rather pointless. A lightyear is only 16 orders of magnitude larger than a meter, and is lost in rounding compared to 10^115 orders of magnitude.
You mentioned a portion of people. I don't think there's any reason to believe that the universe is this big but still finite, and if it is infinite, there's no way to measure a fraction of people. There are infinitely many people whose lives are worth living and infinitely many whose lives ...
What I do want for myself, is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy. This is the "probability" of a good outcome in my expected utility maximization. I'm not concerned with having more of me - really, there are plenty of me already - but I do want most of me to be having fun.
Are you attracted to quantum suicide to win the lottery then? (Put to one side for a moment the consequences for your friends, etc who would have to deal with your passing away)
Do you have any pointers to why you believe so firmly in an infinite universe? Reading books on physics (from mainstream authors like Stephen Hawking or Christian Magnan, or from less conventional books like Julian Barbour's End of Time) I got the impression that the current consensus is that the universe is expanding but currently finite. There may be no limit to its eventual size if, as it seems now, the expansion rate is growing - but right now it has a finite size.
And from a purely theoretical point of view, infinity doesn't seem very coherent to me. I...
... But there's no sense crying over every mistake, you just keep on trying till you run out of negentropy.
I'm worried this is just an elaborate justification to not have as many children as possible. But I'm not convinced that I'm obligated to help all other 'beings', of any class or category, instead of merely not harming (most of) them.
I don't think "infinite space" is enough to have infinite copies of me. You'd also need infinite matter, no?
[putting aside "many worlds" for a moment]
Max Tegmark observed that we have three independent reasons to believe we live in a Big World - a universe which is large relative to the space of possibilities. For example, on current physics, the universe appears to be spatially infinite (though I'm not clear on how strongly this is implied by the standard model).
If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you. If you're looking for an exact duplicate of a Hubble volume - an object the size of our observable universe - then you should still, on average, only need to look 10^10^115 lightyears away. (These are numbers based on a highly conservative counting of "physically possible" states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli exclusion principle, and then allowing each proton to be present or absent.)
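The shape of this counting argument can be sketched with a few lines of log arithmetic. This is only a back-of-envelope sketch of the estimate just described, not exact physics; the 10^115 proton-slot count is the assumption the paragraph makes, and at this level of precision the double exponent is good to an order of magnitude at best:

```python
import math

# Sketch of the counting argument above (an assumption-laden estimate,
# not exact physics): pack a Hubble volume with ~10^115 proton-sized
# slots, each present or absent, giving ~2^(10^115) distinguishable states.
log10_states = (10 ** 115) * math.log10(2)   # log10 of 2^(10^115)

# If configurations recur at random through infinite space, the mean
# distance to an exact duplicate volume scales like (number of states)^(1/3)
# in Hubble radii; the choice of length unit vanishes in the rounding,
# since even a lightyear is only ~16 orders of magnitude above a meter.
log10_log10_distance = math.log10(log10_states / 3)

# The double exponent comes out near 114-115, i.e. ~10^10^115 in the
# rounding used in the text.
print(f"duplicate Hubble volume within ~10^10^{log10_log10_distance:.1f}")
```

Note that only the innermost exponent (115) survives the arithmetic; the factor of log10(2) and the cube root are invisible at the double-exponential scale, which is why the unit hardly matters.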
The most popular cosmological theories also call for an "inflationary" scenario in which many different universes would be eternally budding off, our own universe being only one bud. And finally there are the alternative decoherent branches of the grand quantum distribution, aka "many worlds", whose presence is unambiguously implied by the simplest mathematics that fits our quantum experiments.
Ever since I realized that physics seems to tell us straight out that we live in a Big World, I've become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.
If your decision to not create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes. But if you're just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher qualities of life. It's not like anyone will actually fail to be born on account of that decision - they'll just be born predominantly into regions with higher standards of living.
Am I sure that this statement, that I have just emitted, actually makes sense?
Not really. It dabbles in the dark arts of anthropics, and the Dark Arts don't get much murkier than that. Or to say it without the chaotic inversion: I am stupid with respect to anthropics.
But to apply the test of simplifiability - it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".
Imagine taking a survey of the whole universe. Every plausible baby gets a little checkmark in the "exists" box - everyone is born somewhere. In fact, the total population count for each baby is something-or-other, some large number that may or may not be "infinite" -
(I should mention at this point that I am an infinite set atheist, and my main hope for being able to maintain this in the face of a spatially infinite universe is to suggest that identical Hubble volumes add in the same way as any other identical configuration of particles. So in this case the universe would be exponentially large, the size of the branched decoherent distribution, but the spatial infinity would just fold into that very large but finite object. And I could still be an infinite set atheist. I am not a physicist so my fond hope may be ruled out for some reason of which I am not aware.)
- so the first question, anthropically speaking, is whether multiple realizations of the exact same physical process count as more than one person. Let's say you've got an upload running on a computer. If you look inside the computer and realize that it contains triply redundant processors running in exact synchrony, is that three people or one person? How about if the processor is a flat sheet - if that sheet is twice as thick, is there twice as much person inside it? If we split the sheet and put it back together again without desynchronizing it, have we created a person and killed them?
I suppose the answer could be yes; I have confessed myself stupid about anthropics.
Still: I, as I sit here, am frantically branching into exponentially vast numbers of quantum worlds. I've come to terms with that. It all adds up to normality, after all.
But I don't see myself as having a little utility counter that frantically increases at an exponential rate, just from my sitting here and splitting. The thought of splitting at a faster rate does not much appeal to me, even if such a thing could be arranged.
What I do want for myself, is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy. This is the "probability" of a good outcome in my expected utility maximization. I'm not concerned with having more of me - really, there are plenty of me already - but I do want most of me to be having fun.
I'm not sure whether or not there exists an imperative for moral civilizations to try to create lots of happy people so as to ensure that most babies born will be happy. But suppose that you started off with 1 baby existing in unhappy regions for every 999 babies existing in happy regions. Would it make sense for the happy regions to create ten times as many babies leading one-tenth the quality of life, so that the universe was "99.99% sorta happy and 0.01% unhappy" instead of "99.9% really happy and 0.1% unhappy"? On the face of it, I'd have to answer "No." (Though it depends on how unhappy the unhappy regions are; and if we start off with the universe mostly unhappy, well, that's a pretty unpleasant possibility...)
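The percentages in that thought experiment can be checked directly. This is a sanity check of the arithmetic only; the counts (1 unhappy baby per 999 happy ones, then ten times as many babies in the happy regions) are the hypothetical figures given above:

```python
# Sanity-checking the fractions in the thought experiment above; the
# region counts are the post's hypothetical, not real data.
happy, unhappy = 999, 1

before = unhappy / (happy + unhappy)        # 1/1000 -> 0.1% unhappy
after = unhappy / (happy * 10 + unhappy)    # 1/9991 -> ~0.01% unhappy

print(f"before: {before:.2%} unhappy, after: {after:.2%} unhappy")
```

So the move from "0.1% unhappy" to "0.01% unhappy" holds, at the cost of diluting quality of life tenfold in the happy regions.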
But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies, or if we lower standards of living to create more people, then we aren't giving the "gift of existence" to babies who wouldn't otherwise have it. We're just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.
Once someone has been born into your Hubble volume and your Everett branch, you can't undo that; it becomes the responsibility of your region of existence to give them a happy future. You can't hand them back by killing them. That just makes their average lifespan shorter.
It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.
And that's why, when there is research to be done, I do it not just for all the future babies who will be born - but, yes, for the people who already exist in our local region, who are already our responsibility.
For the good of all of us, except the ones who are dead.