Comment author: ThisSpaceAvailable 16 February 2014 07:08:09AM 0 points [-]

If I take a 1 kg computer and put a 1 kg rock on top of it, do I now have a 2 kg computer? Are you only counting the "essential" weight, and if so, how do you define "essential"? What if I have a 100 kg computer, of which 1 kg is running a sentient program, and 99 kg is playing Solitaire? How do you decide how much of the computation is part of the sentience?

What if we run a computer, record its state at each clock cycle, and broadcast those states to a billion TV screens? Do we now weight the computer nine orders of magnitude more than we would otherwise?

Comment author: yttrium 28 July 2014 07:10:08AM *  0 points [-]

The rock on top of the computer wouldn't count toward the "amount doing the computation". Apart from that, I agree that weight probably isn't the right quantity. A better way to formulate what I am getting at would be that "probability of being a mind is an extensive physical quantity". I have updated the post accordingly.

Regarding your second paragraph: No, the TV screens aren't part of the matter that does the computation.

Comment author: lavalamp 12 February 2014 01:11:19AM 0 points [-]

I think you're getting downvoted for your TL;DR, which is extremely difficult to parse. May I suggest:

TL;DR: Treating "computers running minds" as discrete objects might cause a paradox in probability calculations that involve self-location.

Comment author: yttrium 12 February 2014 08:06:25AM 1 point [-]

Changed it, that sounds better.

Comment author: Manfred 11 February 2014 08:33:54PM 0 points [-]

> To the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options

Right, that's why I split them up into different worlds, so that they don't get any utility from paperclips created by the other paperclip maximizer.

> how much value to give each of the computer's votes is decided by the operators of the experiment, not the computers

Not true - see the Sleeping Beauty problem.

Comment author: yttrium 12 February 2014 07:55:58AM *  0 points [-]

I still think that the scenario you describe is not obviously, under every philosophical intuition, the same as one where both minds exist in parallel.

Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before generating any paperclips, the paperclip-maximizer would still choose A, while the experienced-paperclip-maximizer would choose B. When choosing A, the probability of experiencing paperclips is obviously different from the probability of paperclips existing.

Comment author: Manfred 11 February 2014 04:43:19PM 2 points [-]

Suppose there's a paperclip maximizer that could either be running on a 1 kg computer or a 2 kg computer - say the humans flipped a coin when picking which computer to run it on.

Since the computations are the same, the paperclip maximizer doesn't know whether it's 1 kg or 2 kg until I tell it. But before I tell it, I offer the paperclip maximizer a choice between options A and B: A results in 5 paperclips if it's 1 kg and 0 otherwise; B results in 4 paperclips if it's 2 kg and 0 otherwise.

It seems like the paperclip-maximizing strategy is to give equal weight (ha) to being 1 kg and 2 kg, and pick A.
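
A minimal sketch of the expected-paperclip calculation behind that choice, assuming (as the argument does) that the maximizer puts credence 0.5 on each mass:

```python
# Expected paperclips under equal credence on being the 1 kg or the 2 kg computer.
p_1kg = 0.5  # assumed credence in being the 1 kg computer
p_2kg = 0.5  # assumed credence in being the 2 kg computer

expected_A = p_1kg * 5 + p_2kg * 0  # A: 5 paperclips only if it's 1 kg
expected_B = p_1kg * 0 + p_2kg * 4  # B: 4 paperclips only if it's 2 kg

print(expected_A, expected_B)  # 2.5 vs. 2.0, so A wins under equal weighting
```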

Comment author: yttrium 11 February 2014 07:25:28PM *  0 points [-]

If I understand you correctly, your scenario differs from the one I had in mind in that I would have both computers instantiated at the same time (I've clarified that in the post), and would then consider the relative probability of experiencing what the 1 kg computer experiences vs. experiencing what the 2 kg computer experiences. It seems like one could adapt your scenario by creating a 1 kg and a 2 kg computer at the same time, offering both of them a choice between A and B, and then generating 5 paperclips if the 1 kg computer chooses A and (additionally) 4 paperclips if the 2 kg computer chooses B. Then the right choice for both systems (which still can't distinguish themselves from each other) would still be A, but I don't see how this is related to the relative weight of both maximizers' experiences - after all, how much value to give each of the computer's votes is decided by the operators of the experiment, not the computers.

To the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options, I'd still say that the maximizer should choose B.

Comment author: APMason 04 June 2012 12:49:34PM 4 points [-]

What happens if you're using this method and you're offered a gamble where you have a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don't take the deal you gain and lose nothing)? Isn't the "typical outcome" here a loss, even though we might really really want to take the gamble? Or have I misunderstood what you propose?

Comment author: yttrium 05 June 2012 04:38:49PM 0 points [-]

Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right half of the utility distribution will change the median outcome of your "life": if 10^6 is larger than all the other utility you could ever receive, and you add a 49 % chance of receiving it, the 50th percentile utility afterwards should look like the 98th percentile utility before.
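
A rough simulation of that percentile shift; the log-normal baseline stands in for "all the other utility you could ever receive" and is purely a placeholder assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Placeholder distribution for lifetime utility without the gamble.
baseline = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Add an independent 49% chance of a payoff larger than anything in the baseline.
total = baseline + (rng.random(n) < 0.49) * 1e6

print(np.percentile(total, 50))     # new median...
print(np.percentile(baseline, 98))  # ...lands near the old 98th percentile
```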

Comment author: CarlShulman 04 June 2012 10:11:46PM *  1 point [-]

A bounded utility function, on which increasing years of happy life (or money, or whatever) give only finite utility in the infinite limit, does not favor taking vanishing probabilities of immense payoffs. It also preserves normal expected utility calculations so that you can think about 90th percentile and 10th percentile, and lets you prefer higher payoffs in probable cases.

Basically, this "median outcome" heuristic looks like just a lossy compression of a bounded utility function's choice outputs, subject to new objections like APMason's. Why not just go with the bounded utility function?
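
An illustrative sketch of the bounded-utility point; the functional form, scale, and numbers are arbitrary choices for the example, not anything specified above:

```python
import math

def u(x, scale=100.0):
    # Bounded utility: increases with x but approaches 1 in the infinite limit.
    return 1.0 - math.exp(-x / scale)

# A vanishing probability of an immense payoff adds at most p * (utility bound):
mugging_gain = 1e-10 * u(1e20)       # <= 1e-10, however large the promised payoff
sure_cost = u(5.0) - u(0.0)          # utility of the 5 units you'd hand over
print(mugging_gain < sure_cost)      # True: the mugging is declined

# Ordinary expected-utility comparisons still work as usual:
print(0.5 * u(100.0) > u(30.0))      # True: the probable larger payoff is still preferred
```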

Comment author: yttrium 05 June 2012 04:34:23PM 0 points [-]

I want it to be possible to have a very bad outcome: if I can play a lottery that costs 1 utilium, pays off 10^7, and has a winning chance of 10^-6, and if I can play this lottery enough times, I want to play it.
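
A quick check on that lottery, assuming independent plays; the question is roughly how many plays it takes before the median (not just the expected) total comes out ahead:

```python
import math

p, cost, payoff = 1e-6, 1.0, 1e7

# The expected value per play is positive:
print(p * payoff - cost)  # 9.0

# But the median total only turns positive once at least one win is more likely
# than not, i.e. once 1 - (1 - p)**n exceeds 0.5:
n_needed = math.ceil(math.log(0.5) / math.log(1.0 - p))
print(n_needed)  # roughly 693,000 plays
```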

Comment author: spuckblase 15 December 2011 03:47:12PM 1 point [-]

Ok, so who's the other one living in Berlin?

Comment author: yttrium 28 May 2012 09:04:00AM *  0 points [-]

Me too!

Comment author: yttrium 08 January 2012 08:26:46PM *  0 points [-]

The problem seems to vanish if you don't ask "What is the expectation value of utility for this decision, if I do X", but rather "If I changed my mental algorithms so that they do X in situations like this all the time, what utility would I plausibly accumulate over the course of my entire life?" ("How much utility do I get at the 50th percentile of the utility probability distribution?") This would have the following results:

  • For the limit case of decisions where all possible outcomes happen infinitely often during your lifetime, you would decide exactly as if you wanted to maximize expectation value in an individual case.

  • You would not decide to give money to Pascal's mugger, if you don't expect that there are many fundamentally different scenarios which a mugger could tell you about: If you give a 5 % chance to the scenario described by Pascal's mugger and believe that this is the only scenario which, if true, would make you give 5 $ to some person, you would not give the money away.

  • In contrast, if you believe that there are 50 different mugging scenarios which people will tell you during your life to Pascal-mug you, and you assign an independent 5 % chance to each of them, you would give money to a mugger (and expect this to pay off occasionally); the sketch below makes this concrete.
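
A quick calculation behind those two bullets, assuming the 50 scenarios are independent as stated (only the 5 $ cost is modelled; the payoff sizes are left unspecified):

```python
# Policy: pay 5 to every mugger you expect to meet over a lifetime.
p_true, n_muggers, cost = 0.05, 50, 5

total_paid = n_muggers * cost                  # 250 handed over in total
p_at_least_one = 1 - (1 - p_true) ** n_muggers
print(p_at_least_one)                          # ~0.92, well above 0.5

# Since at least one payoff is more likely than not, the 50th-percentile
# ("typical") lifetime under this policy already includes a payoff.
# With a single 5 % scenario, the chance is only 0.05, so the typical
# lifetime contains nothing but the lost 5 -- matching the previous bullet.
```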

Comment author: dlthomas 29 November 2011 05:47:51PM 4 points [-]

Both seem to be interesting questions. Answer both?

Comment author: yttrium 30 November 2011 08:52:04AM 0 points [-]

Seconding, though I originally meant the second question. I am also not sure whether you are referring to "conceptual analysis" (in which case the second question would be clear to me) or to "nailing down a proper (or at least more proper) definition before arguing about something" (in which case it would not).

Comment author: Tyrrell_McAllister 28 November 2011 04:22:41PM *  13 points [-]

> In my last post, I showed that the brain does not encode concepts in terms of necessary and sufficient conditions. So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.

This argument must be missing something crucial, because it fails to account for why the necessary-and-sufficient approach is so fantastically useful in mathematics. Mathematics deals with human concepts. Many of these concepts are very likely not stored in the brain as necessary and sufficient conditions. (Concepts learned in a formal setting might be stored that way, but there's little reason to think that a common concept like "triangle" is for most people.) And yet it proved incredibly fruitful to recast these concepts in terms of necessary and sufficient conditions.

In the case of mathematics, it turns out to be worthwhile to think about concepts in the decidedly unnatural mode of necessary and sufficient conditions. One might reasonably have hoped that the same admittedly unnatural mode would prove similarly worthwhile for concepts like "democracy". After all, unnatural doesn't necessarily mean worse. Now, for concepts like "democracy", the unnatural approach does prove to be worse. But it can't be simply because the approach was unnatural.

Comment author: yttrium 29 November 2011 05:26:40PM *  3 points [-]

> Now, for concepts like "democracy", the unnatural approach does prove to be worse.

Why?
