
earthwormchuck163 comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence - Less Wrong

Post author: Eliezer_Yudkowsky, 08 May 2013 12:43AM, 43 points


Comment author: earthwormchuck163 06 May 2013 08:17:45AM 25 points

Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers.

Me: I'm not sure about that.

Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?

Me: Actually, no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.

Mugger: "This should be good."

Me: There are only something like n = 10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3 (see the counting sketch after this exchange), so most of the lives you're saving will be heavily duplicated. I'm not really sure that I care about duplicates that much.

Mugger: Well, I didn't say they would all be humans. Haven't you read enough sci-fi to know that you should care about all possible sentient life?

Me: Of course. But the same sort of reasoning implies that either there are a lot of duplicates, or else most of the people you're talking about are incomprehensibly large, since there aren't that many small Turing machines to go around. And it's not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can't see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated.
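
To spell out the counting step in the exchange above (a rough sketch: the neuron count is the round figure from the dialogue, and the bits-per-neuron constant c is an illustrative assumption):

```latex
% Distinct human-sized minds, counted very generously:
% n ~ 10^10 neurons, each contributing at most c bits of state.
\[
\#\{\text{human brain states}\} \;\lesssim\; 2^{cn} \;\approx\; 2^{10^{11}}
\qquad (n \approx 10^{10},\ c \approx 10)
\]
% By contrast, 3^^^3 is a power tower of 3s whose height is itself huge:
\[
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow\left(3\uparrow\uparrow 3\right),
\qquad
3\uparrow\uparrow 3 \;=\; 3^{3^{3}} \;=\; 3^{27} \;=\; 7{,}625{,}597{,}484{,}987
\]
% Pigeonhole: among 3^^^3 human-sized lives, at most about 2^(10^11)
% can be distinct, so all but a vanishing fraction are exact duplicates.
```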

Comment author: lukeprog 06 May 2013 03:53:32PM 8 points

I'm not really sure that I care about duplicates that much.

Bostrom would probably try to argue that you do. See Bostrom (2006).

Comment author: TabAtkins 09 May 2013 02:31:09AM 6 points

Am I crazy, or does Bostrom's argument in that paper fall flat almost immediately, based on a bad moral argument?

His first, and seemingly most compelling, argument for Duplication over Unification is that, assuming an infinite universe, it's certain (with probability 1) that there is already an identical portion of the universe where you're torturing the person in front of you. Given Unification, it's meaningless to distinguish between that portion and this one, since they're physically identical, so torturing the person is morally blameless: you're not increasing the number of unique observers being tortured. Duplication makes the two instances of the person distinct due to their differing spatial locations, even if every other physical and mental aspect is identical, so torturing still adds to the suffering in the universe.

However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experience of that emotion, and duplicate it endlessly. Each duplicate is distinct, and so you're increasing the amount of joy in the universe every time you make a copy. It would be a net win, in fact, if you killed every human and replaced the earth with a computer doing nothing but running copies of that one person experiencing a moment of bliss. Unification takes care of this by noting that duplicating someone adds at most a single bit of information to the universe, so spamming the universe with copies of the happy moment counts either the same as the single experience, or at most a trivial amount more.
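
As a toy formalization of the difference (my own sketch, not Bostrom's; the hedonic values and the one-bit-per-copy rule for Unification are illustrative assumptions):

```python
from collections import Counter
import math

def total_value_duplication(experiences):
    """Duplication: every instance counts in full, so N identical copies
    of a blissful moment contribute N times its value."""
    return sum(experiences)

def total_value_unification(experiences):
    """Unification, as described above: duplicates of a given experience
    add at most about one bit each, so value grows only logarithmically
    in the number of copies of each distinct experience."""
    counts = Counter(experiences)
    return sum(value * (1 + math.log2(n)) for value, n in counts.items())

bliss = 10.0                      # hedonic value of one blissful moment
copies = [bliss] * 1_000_000      # a million identical copies of it

print(total_value_duplication(copies))  # 10000000.0: copy-spamming wins
print(total_value_unification(copies))  # ~209.3: extra copies barely matter
```

On the duplication rule the millionth copy is worth as much as the first; on the unification rule the million copies together are worth only about twenty times the single experience.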

Am I thinking wrong here?

Comment author: Tehom 20 May 2013 06:25:44PM 4 points

However, you can flip this over trivially and come to a terrible conclusion. If Duplication is true, you merely have to simulate a person until they experience a moment of pure hedonic bliss, in some ethically correct manner that everyone agrees is morally good to experience and enjoy. Then, copy the fragment of the simulation covering the experience of that emotion, and duplicate it endlessly.

True only if your summum bonum is exactly an aggregate of experienced moments of happiness.

I take the position that it is not.

I don't think one even has to resort to a position like "only one copy counts".

Comment author: TabAtkins 03 June 2013 10:14:04PM 0 points

True, but that's striking more at the heart of Bostrom's argument than at my counter-argument, which was just flipping Bostrom around. (Unless your summum malum is significantly different, such that duplicate tortures and duplicate good-things-equivalent-to-torture-in-emotional-effect still sum differently?)

Comment author: Pentashagon 14 May 2013 10:31:38PM 0 points

His first, and seemingly most compelling, argument for Duplication over Unification is that, assuming an infinite universe, it's certain (with probability 1) that there is already an identical portion of the universe where you're torturing the person in front of you. Given Unification, it's meaningless to distinguish between that portion and this one, since they're physically identical, so torturing the person is morally blameless: you're not increasing the number of unique observers being tortured.

I'd argue that the torture portion is not identical to the non-torture portion, and that the difference is caused by at least one event in the common prior history of both portions of the universe where they diverged. Unification only makes counterfactual worlds real; it does not cause every agent to experience every counterfactual world. Agents are differentiated by the choices they make, and agents who perform torture are not the same agents as those who abstain from torture. The difference can be made arbitrarily small, for instance by choosing an agent with a 50% probability of committing torture based on the outcome of a quantum coin flip, but the moral question in that case is why an agent would choose to become 50% likely to commit torture in the first place. Some counterfactual agents will choose to become 50% likely to commit torture, but they will be very different from the agents who are 1% likely to commit torture.

Comment author: TabAtkins 03 June 2013 10:22:11PM 0 points

I think you're interpreting Bostrom slightly wrong. You seem to be reading his argument (or perhaps just my short distillation of it) as claiming that you're not currently torturing someone, but that there's an identical section of the universe elsewhere where you are torturing someone, so you might as well start torturing now.

As you note, that's contradictory: if you're not currently torturing, then your section of the universe must not be identical to the section where the you-copy is torturing.

Instead, assume that you are currently torturing someone. Bostrom's argument is that you're not making the universe worse, because there's a you-copy which is torturing an identical person elsewhere in the universe. At most one of your copies is capable of taking blame for this; the rest are just running the same calculations "a second time", so to speak. (Or at least, that's what he argues Unification would say, and he uses this as a reason to reject it and turn to Duplication, under which each copy is morally culpable for causing new suffering.)

Comment author: Benja 06 May 2013 02:33:42PM 7 points

I think it not unlikely that if we have a successful intelligence explosion and subsequently discover a way to build something 4^^^^4-sized, we will figure out a way to grow into it, one step at a time. This 4^^^^4-sized supertranshuman mind should then be able to discriminate "interesting" from "boring" 3^^^3-sized things. If you could convince the 4^^^^4-sized thing to write down a list of all nonboring 3^^^3-sized things in its spare time, you would have a formal way to say what an "interesting 3^^^3-sized thing" is, with description length (the description length of humanity, i.e. of our actual universe) + (the additional description length needed to give humanity access to a 4^^^^4-sized computer, which isn't much, because access to a universal Turing machine would do the job and more).

Thus, I don't think that it needs a 3^^^3-sized description length to pick out interesting 3^^^3-sized minds.
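
In Kolmogorov-complexity terms, the argument amounts to the following bound on the description length of the list L of all nonboring 3^^^3-sized things (my notation, with additive constants suppressed):

```latex
\[
K(L) \;\le\; K(\text{our universe, which picks out humanity})
\;+\; K\!\left(\text{access to a } 4\uparrow\uparrow\uparrow\uparrow 4\text{-sized computer}\right)
\;+\; O(1)
\]
% The second term is small: a universal Turing machine (a constant-size
% object) already gives humanity all the computer it needs. So K(L), and
% with it the description length of "an interesting 3^^^3-sized mind",
% is nowhere near 3^^^3 bits.
```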

Comment author: DanielLC 08 May 2013 01:07:52AM 3 points

Me: Actually, no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.

Mugger: So then, you think the probability that you should care as much about my 3↑↑↑3 simulated people as I thought you did is on the order of 1/3↑↑↑3?

Comment author: earthwormchuck163 08 May 2013 07:17:20AM 3 points

After thinking about it a bit more I decided that I actually do care about simulated people almost exactly as the mugger thought I did.

Comment author: brainoil 06 May 2013 12:54:36PM 2 points

I'm not really sure that I care about duplicates that much.

Didn't you feel sad when Yoona-939 was terminated, or wish all happiness for Sonmi-451?

Comment author: Luke_A_Somers 09 May 2013 10:08:52PM 4 points

All the other Yoona-939s were fine, right? And that Yoona-939 was terminated quickly enough to prevent divergence, wasn't she?

(My point is, you're making it seem like you're breaking the degeneracy by labeling them. But their being identical is deep.)

Comment author: Eliezer_Yudkowsky 09 May 2013 10:37:34PM 2 points

But now she's... you know... now she's... (wipes away tears) slightly less real.

Comment author: Luke_A_Somers 10 May 2013 12:45:57PM 3 points

You hit pretty strong diminishing returns on existence once you've hit the 'at least one copy' point.

Comment author: Jack 10 May 2013 12:51:06AM -1 points

Clones aren't duplicates. They may have started out as duplicates, but they were not by the time the reader is introduced to them.

Comment author: abramdemski 06 May 2013 10:31:07AM 0 points

I agree with most of this. I think it is plausible that the value of a scenario is in some sense upper-bounded by its description length, so that we need on the order of a googolplex of bits to describe a googolplex of value.
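
Written as a formula, the proposed bound reads roughly as follows (my rendering of the claim; U(w) is the value of a scenario w, K(w) its description length, and c a constant):

```latex
\[
|U(w)| \;\le\; c \cdot K(w)
\]
% Under this bound, realizing a googolplex of value, |U(w)| ~ 10^(10^100),
% requires a scenario whose description is itself on the order of a
% googolplex of bits long.
```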

We can separately ask if this solves the problem. One may want a theory which solves the problem regardless of utility function; or, aiming lower, one may be satisfied to find a class of utility functions which seem to capture human intuition well enough.

Comment author: abramdemski 07 May 2013 03:06:11AM 2 points

Upper-bounding utility by description complexity doesn't actually capture the intuition, since a simple universe could give rise to many complex minds.