This post is not about many worlds. It is somewhat disjointed, but builds to a single point.
If an AI were asked today how many human individuals populate this planet, it might not return a number in the several-billions range. In fact, I’d be willing to bet it would return a number in the tens of thousands, with the caveat that the individuals vary wildly in measure.
I agree with Robin Hanson that if two instances of me exist, and one is terminated, I didn’t die, I simply got smaller.
In 1995 Robert Sapolsky wrote in Ego Boundaries: “My students usually come with ego boundaries like exoskeletons. […] They want their rituals newly minted and shared horizontally within their age group, not vertically over time,” whereas in older societies “needs transcend individual rights to a bounded ego, and people in traditional communities are named and raised as successive incarnations. In such societies, Abraham always lives 900 years--he simply finds a new body to inhabit now and then.”
Ego boundaries may be more rigid now, but that doesn’t make people more unique. If anything, people have become more like each other. Memes are powerful shapers of mental agents, and as technology allows memes to breed and compete more freely, the most viral ones spread through the species.
Acausal trade allows for amazing efficiencies, not merely on a personal level but also via nationalism and religion. People executing strong acausal trading routines will out-compete those who don’t.
Timeless Decision Theory prescribes making decisions as if choosing the outcome for all actors sufficiently like yourself across all worlds. As competition narrows the field of memeplexes to a handful of powerful and virulent ubermemes, and those memeplexes influence the structure and strength of individuals’ mental agents in similar ways, people become more like each other. In so doing they are choosing *as if* they were a single entity more and more effectively. To an outside observer, there may be very little to differentiate two such humans from each other.
Therefore it may be wrong to think of oneself as a singular person. I am not just me – I am also effectively everyone who is sufficiently like me. It’s been argued that there are only seven stories, and every story can be thought of as an elaboration of one of these. It seems likely there are only a few thousand differentiable people, and everyone is simply one of these with some flair.
If we think of people in these terms, certain behaviors make more sense. Home-schooling is looked down on because institutional schools are about making other people into us. Suicide is considered more sinful than killing outsiders because suicide *always* reduces the size of the Meta-Person that the suicidee belonged to. Argument and rhetoric aren’t just a complete waste of your free time; they’re also an attempt to make Meta-Me larger and Meta-SomeoneElse smaller. Art finally makes sense.
Added Bonus: You no longer have to have many children to exist. You can instead work on enlarging your Meta-Self’s measure.
I don't wanna hijack this, but this post presents a fairly logical conclusion of a basic position I see on LW all the time.
Many people seem to identify themselves with their beliefs, memes and/or preferences. I find that very strange, but maybe my values are unusual in this regard.
For example, I don't want to see my preferences be fulfilled for their own sake, but so that I can experience them being fulfilled. If someone made a clone of me who wanted exactly the same things and succeeded at getting them, I wouldn't feel better about this. The very idea that my preferences have any meaning after my death seems absurd to me. Trying to affect the world in a way I won't personally experience is just as silly.
TDT might be a neat trick, but fundamentally I don't see how it could possibly apply to me. There's exactly one entity like me, i.e. one that has this phenomenal experience. In other words, I care little about copying the data and a lot about this one running instance.
Is this really a (fairly extreme) value dissonance or am I misunderstanding something?
If your values are, in fact, as you've described them, then it's a fairly extreme value dissonance.
Just to make this maximally concrete: if you were given a magic button that, if pressed, caused the world to end five minutes after your death, would you press the button?
If you're serious about thinking that effects on the world you won't personally experience are irrelevant to you, then presumably your answer is to shrug indifferently... there's no reason to prefer pressing the button to not pressing it, or vice-versa, since neither activity has any effect...