All of yttrium's Comments + Replies

yttrium00

The rock on top of the computer wouldn't count toward the "amount doing the computation". Apart from that, I agree that weight probably isn't the right quantity. A better way to formulate what I'm getting at might be that "probability of being a mind is an extensive physical quantity". I have updated the post accordingly.

Regarding your second paragraph: No, the TV screens aren't part of the matter that does the computation.

0ThisSpaceAvailable
Of course, now your introductory phrase "TL;DR by lavalamp:" doesn't make sense without the knowledge that "lavalamp" is a proper noun.
yttrium00

I still think that the scenario you describe is not obviously, and not according to all philosophical intuitions, the same as one where both minds exist in parallel.

Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before any paperclips are generated, the paperclip maximizer would still choose A, while the experienced-paperclip maximizer would choose B. When choosing A, the probability of experiencing paperclips is obviously different from the probability of paperclips existing.
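
For concreteness, a minimal sketch of that distinction with made-up numbers (the assumption here is that A produces 10 paperclips after the maximizer is destroyed, while B produces 5 while it is still running - neither figure is from the thread):

```python
# Illustrative numbers (assumptions, not from the thread):
# Option A: 10 paperclips are produced, but the maximizer is destroyed
#           before any of them exist, so it experiences none of them.
# Option B: 5 paperclips are produced while the maximizer is still running.

options = {
    "A": {"paperclips": 10, "experienced": 0},
    "B": {"paperclips": 5,  "experienced": 5},
}

# A plain paperclip maximizer ranks options by paperclips that come to exist ...
plain_choice = max(options, key=lambda o: options[o]["paperclips"])
# ... while an "experienced-paperclip" maximizer ranks by paperclips it experiences.
experienced_choice = max(options, key=lambda o: options[o]["experienced"])

print(plain_choice)        # A
print(experienced_choice)  # B
```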

0Manfred
If you make robots that maximize your proposed "subjective experience" (proportional to mass) and I make robots that maximize some totally different "subjective experience" (how about proportional to mass squared!), all of those robots will act exactly as one would expect - the linear-experience maximizers will maximize linear experience, the squared-experience maximizers will maximize squared experience. Because anything can be put into a utility function, it's very hard to talk about subjective experience by referencing utility functions. We want to reduce "subjective experience" to some kind of behavior that we don't have to put into the utility function by hand. In the Sleeping Beauty problem, we can start with an agent that selfishly values some payoff (say, candy bars), with no specific weighting on the number of copies and no explicit terms for "subjective experience". But then we put it in an unusual situation, and it turns out that the optimal betting strategy is the one where it gives more weight to worlds where there are more copies of it. That kind of behavior is what indicates to me that there's something going on with subjective experience.
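
A minimal sketch of that betting point, under an assumed concrete setup (one awakening on heads, two on tails, and at every awakening the agent may buy a bet that pays off if the coin was tails - the numbers are illustrative):

```python
# Assumed setup: heads -> 1 awakening, tails -> 2 awakenings. At each awakening
# the agent may pay `cost` candy bars for a bet that pays `payoff` on tails.
# The agent just values candy bars; no copy-weighting is put in by hand.

def expected_candy_if_always_accept(cost, payoff):
    heads = 0.5 * (-cost)              # one awakening, the bet loses
    tails = 0.5 * 2 * (payoff - cost)  # two awakenings, the bet wins twice
    return heads + tails

# Accepting is worthwhile whenever 2*(payoff - cost) > cost, i.e. the agent
# bets as if P(tails) = 2/3: the world with more copies gets double weight.
print(expected_candy_if_always_accept(cost=2, payoff=3))    # 0.0  (break-even)
print(expected_candy_if_always_accept(cost=2, payoff=3.5))  # 0.5  (worth taking)
```
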
yttrium00

If I understand you correctly, your scenario is different from the one I had in mind: I'd have both computers instantiated at the same time (I've clarified that in the post), and then consider the relative probability of experiencing what the 1 kg computer experiences vs. experiencing what the 2 kg computer experiences. It seems like one could adapt your scenario by creating a 1 kg and a 2 kg computer at the same time, offering both of them a choice between A and B, and then generating 5 paperclips if the 1 kg computer chooses A and (additionally)... (read more)

0Manfred
Right, that's why I split them up into different worlds, so that they don't get any utility from paperclips created by the other paperclip maximizer. Not true - see the Sleeping Beauty problem.
yttrium00

Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right half of the utility distribution will change the median outcome of your "life": if 10^6 is larger than all the other utility you could ever receive, and you add a 49% chance of receiving it, then the 50th percentile utility afterwards should look like the 98th percentile utility before (0.50 / 0.51 ≈ 0.98).
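
A quick numerical check of that percentile arithmetic, assuming (purely for illustration) a uniform baseline utility distribution on [0, 1]:

```python
import random

random.seed(0)
# "Old" lifetime utility: 100,000 samples from an assumed uniform baseline.
old = sorted(random.random() for _ in range(100_000))

# "New" distribution: with probability 0.49, a payoff of 10^6 is added on top,
# which dwarfs everything else in the baseline.
new = sorted(u + (1_000_000 if random.random() < 0.49 else 0) for u in old)

def percentile(xs, p):
    return xs[int(p * (len(xs) - 1))]

# The new median sits where the old ~98th percentile was (0.50 / 0.51 ~ 0.98).
print(percentile(new, 0.50))  # ~0.98
print(percentile(old, 0.98))  # ~0.98
```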

0CuSithBell
Could you rephrase this somehow? I'm not understanding it. If you actually won the bet and got the extra utility, your median expected utility would be higher, but you wouldn't take the bet, because your median expected utility is lower if you do.
yttrium00

I do want it to be possible to end up with a very bad outcome: if I can play a lottery that costs 1 utilium, pays off 10^7, and has a winning chance of 10^-6, and if I can play this lottery enough times, I want to play it.
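
A back-of-the-envelope sketch of "enough times" under one reading of the median rule (the assumed cutoff is the point where winning at least once becomes more likely than not):

```python
import math

# The lottery from the comment: cost 1 utilium, payoff 10^7, win chance 10^-6.
cost, payoff, p = 1, 10**7, 1e-6

# Number of plays needed before at least one win is more likely than not:
# 1 - (1 - p)**n > 0.5.
n = math.ceil(math.log(0.5) / math.log(1 - p))
print(n)  # 693147

# Once n plays are allowed, the 50th-percentile outcome includes at least one
# win, so a lower bound on the median lifetime result is comfortably positive.
print(payoff - n * cost)  # 9306853
```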

0CuSithBell
"Enough times" to make it >50% likely that you will win, yes? Why is this the correct cutoff point?
yttrium00

The problem seems to vanish if you don't ask "What is the expected utility of this decision if I do X?", but rather "If I changed my mental algorithms so that they did X in situations like this all the time, what utility would I plausibly accumulate over the course of my entire life?" ("How much utility do I get at the 50th percentile of the utility probability distribution?") This would have the following results:

  • For the limit case of decisions where all possible outcomes happen infinitely often during your life

... (read more)
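
Read purely as a decision rule, the "50th percentile of lifetime utility" criterion above might be sketched as follows; the gamble's cost, payoff, and probability are illustrative assumptions, not taken from the post:

```python
import random, statistics

# A minimal sketch of the proposed rule: judge a policy by the median total
# utility over a whole simulated life, rather than by per-decision expectation.

def lifetime_utility(accepts, n_decisions, cost, payoff, p, rng):
    total = 0.0
    for _ in range(n_decisions):
        if accepts:
            total -= cost
            if rng.random() < p:
                total += payoff
    return total

def median_lifetime(accepts, n_lives=2000, **kwargs):
    rng = random.Random(0)
    return statistics.median(
        lifetime_utility(accepts, rng=rng, **kwargs) for _ in range(n_lives)
    )

# A rare gamble (p = 1e-4) faced only 100 times per life: the median life never
# wins, so the median-based rule refuses it despite the high expected value.
print(median_lifetime(True,  n_decisions=100, cost=1, payoff=10**6, p=1e-4))  # -100.0
print(median_lifetime(False, n_decisions=100, cost=1, payoff=10**6, p=1e-4))  # 0.0
```
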
yttrium00

Seconding, though I originally meant the second question. I am also not sure whether you are referring to "conceptual analysis" (in which case the second question would be clear to me) or "nailing down a proper (or at least more proper) definition before arguing about something" (in which case it would not).

yttrium40

Now, for concepts like "democracy", the unnatural approach does prove to be worse.

Why?

4Tyrrell_McAllister
Are you asking "What causes it to be worse?" or "Why do you think that it is worse?"?
yttrium00

Do you know of similarly extraordinary claims made by respected institutions in the past? How often did they turn out to be right, and how often did they turn out to be wrong?