yttrium

The rock on top of the computer wouldn't count toward the "amount doing the computation". Apart from that, I agree that weight isn't the right quantity. A better way to formulate what I am getting at might be that "the probability of being a mind is an extensive physical quantity". I have updated the post accordingly.

Regarding your second paragraph: No, the TV screens aren't part of the matter that does the computation.

yttrium

I still think that the scenario you describe is not obviously, and not according to all philosophical intuitions, the same as one where both minds exist in parallel.

Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before any paperclips are generated, the paperclip-maximizer would choose A, while the experienced-paperclip-maximizer would choose B. When choosing A, the probability of experiencing paperclips is obviously different from the probability of paperclips existing.

yttrium

If I understand you correctly, your scenario differs from the one I had in mind in that I'd have both computers instantiated at the same time (I've clarified that in the post), and would then consider the relative probability of experiencing what the 1 kg computer experiences versus experiencing what the 2 kg computer experiences. It seems like one could adapt your scenario by creating a 1 kg and a 2 kg computer at the same time, offering both of them a choice between A and B, and then generating 5 paperclips if the 1 kg computer chooses A and (additionally) 4 paperclips if the 2 kg computer chooses B. Then the right choice for both systems (which still can't distinguish themselves from each other) would still be A, but I don't see how this is related to the relative weight of the two maximizers' experiences - after all, how much weight each computer's choice carries is decided by the operators of the experiment, not by the computers. On the contrary, if the maximizer cares about the experienced number of paperclips, and each maximizer only learns about the paperclips generated by its own choice regarding the given options, I'd still say that the maximizer should choose B.

yttrium

Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right half of the utility function will change the median outcome of your "life": If 10^6 is larger than all the other utility you could ever receive, and you add a 49 % chance of receiving it, the 50th percentile utility afterwards should look like the 98th percentile utility before (with probability 0.51 you don't receive the bonus, so the new median lies at the point the old distribution reaches with probability 0.5 / 0.51 ≈ 0.98).
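
A quick numerical sanity check of that percentile shift (a minimal sketch in Python; the normal baseline distribution is an arbitrary stand-in, assumed only to stay far below 10^6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "all the other utility you could ever receive";
# its shape is arbitrary, it only needs to stay far below 10^6.
baseline = rng.normal(loc=100.0, scale=20.0, size=1_000_000)

# Add an independent 49 % chance of an extra 10^6 utility.
bonus = np.where(rng.random(baseline.size) < 0.49, 1e6, 0.0)
augmented = baseline + bonus

# The new median sits where 0.51 * F_baseline(x) = 0.5, i.e. F_baseline(x) ≈ 0.98.
print(np.percentile(augmented, 50))  # close to ...
print(np.percentile(baseline, 98))   # ... the old 98th percentile
```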

yttrium

I want it to be possible to have a very bad outcome: If I can play a lottery that costs 1 utilium, has a payoff of 10^7 and a winning chance of 10^-6, and if I can play this lottery enough times, I want to play it.
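
A rough check that playing "enough times" makes the median lifetime outcome positive (a minimal sketch; the figure of 10^6 plays is purely illustrative and not from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)

n_plays = 10**6                     # illustrative lifetime number of plays (assumed)
cost, payoff, p_win = 1.0, 1e7, 1e-6

wins = rng.binomial(n_plays, p_win, size=100_000)   # simulated "lifetimes"
outcomes = wins * payoff - n_plays * cost
print(np.median(outcomes))          # ~ +9e6: the median lifetime comes out ahead
```

With 10^6 plays the number of wins is approximately Poisson with mean 1, so the median simulated lifetime already contains one win and ends up roughly 9 × 10^6 utility ahead.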

yttrium

The problem seems to vanish if you don't ask "What is the expectation value of utility for this decision, if I do X", but rather "If I changed my mental algorithms so that they do X in situations like this all the time, what utility would I plausibly accumulate over the course of my entire life?" ("How much utility do I get at the 50th percentile of the utility probability distribution?") This would have the following results:

  • For the limit case of decisions where all possible outcomes happen infinitely often during your lifetime, you would decide exactly as if you wanted to maximize expectation value in an individual case.

  • You would not decide to give money to Pascal's mugger if you don't expect that there are many fundamentally different scenarios which a mugger could tell you about: If you give a 5 % chance to the scenario described by Pascal's mugger and believe that this is the only scenario which, if true, would make you give 5 $ to some person, you would not give the money away.

  • In contrast, if you believe that there are 50 different mugging scenarios that people will tell you about during your life to Pascal-mug you, and you assign an independent 5 % chance to each of them, you would give money to a mugger (and expect this to pay off occasionally); a quick numerical check is sketched below.
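
A rough check of the one-scenario vs. fifty-scenarios contrast (a minimal sketch; it only counts how many scenarios turn out to be true, since the comment doesn't specify the muggers' promised payoffs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_lives = 100_000                   # simulated "lifetimes"

# One scenario, believed with 5 % probability: at the median it is false,
# so the policy "always pay 5 $" only loses money in the median lifetime.
one = rng.binomial(1, 0.05, size=n_lives)
print(np.median(one))               # 0.0

# 50 independent scenarios, each believed with 5 % probability:
many = rng.binomial(50, 0.05, size=n_lives)
print(np.median(many))              # 2.0: two muggers turn out real at the median
print(np.mean(many >= 1))           # ~0.92 = 1 - 0.95**50
```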

yttrium

Seconding this, though I originally meant the second question. I am also not sure whether you are referring to "conceptual analysis" (in which case the second question would be clear to me) or to "nailing down a proper (or at least more proper) definition before arguing about something" (in which case it would not).

yttrium

Now, for concepts like "democracy", the unnatural approach does prove to be worse.

Why?
