Comment author: ThisSpaceAvailable 16 February 2014 07:08:09AM 0 points

If I take a 1 kg computer and put a 1 kg rock on top of it, do I now have a 2 kg computer? Are you only counting the "essential" weight, and if so, how do you define "essential"? What if I have a 100 kg computer, of which 1 kg is running a sentient program and 99 kg is playing Solitaire? How do you decide how much of the computation is part of the sentience?

What if we run a computer, record its state at each clock cycle, and broadcast those states to a billion TV screens? Do we now weight the computer nine orders of magnitude more than we would otherwise?

Comment author: yttrium 28 July 2014 07:10:08AM * 0 points

The rock on top of the computer wouldn't count toward the "amount of matter doing the computation". Apart from that, I agree that weight isn't quite the right quantity. A better way to formulate what I am getting at might be that "the probability of being a mind is an extensive physical quantity". I have updated the post accordingly.

Regarding your second paragraph: No, the TV screens aren't part of the matter that does the computation.

Comment author: lavalamp 12 February 2014 01:11:19AM 0 points

I think you're getting downvoted for your TL;DR, which is extremely difficult to parse. May I suggest:

TL;DR: Treating "computers running minds" as discrete objects might cause a paradox in probability calculations that involve self-location.

Comment author: yttrium 12 February 2014 08:06:25AM 1 point

Changed it, that sounds better.

Comment author: Manfred 11 February 2014 08:33:54PM 0 points

To the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options

Right, that's why I split them up into different worlds, so that they don't get any utility from paperclips created by the other paperclip maximizer.

how much value to give each of the computers' votes is decided by the operators of the experiment, not the computers

Not true - see the Sleeping Beauty problem.

Comment author: yttrium 12 February 2014 07:55:58AM * 0 points

I still think that the scenario you describe is not obviously, and according to all philosophical intuitions, the same as one where both minds exist in parallel.

Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before any paperclips are generated, the paperclip-maximizer would choose A, while the experienced-paperclip-maximizer would choose B. When choosing A, the probability of experiencing paperclips is obviously different from the probability of paperclips existing.

Comment author: Manfred 11 February 2014 04:43:19PM 2 points

Suppose there's a paperclip maximizer that could either be running on a 1 kg computer or a 2 kg computer - say the humans flipped a coin when picking which computer to run it on.

Since the computations are the same, the paperclip maximizer doesn't know whether it's 1 kg or 2 kg until I tell it. But before I tell it, I offer the paperclip maximizer a choice between options A and B: A results in 5 paperclips if it's 1 kg and 0 otherwise; B results in 4 paperclips if it's 2 kg and 0 otherwise.

It seems like the paperclip-maximizing strategy is to give equal weight (ha) to being 1 kg and 2 kg, and pick A.
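A quick arithmetic check of this (my own sketch, not part of the original comment; the numbers are those from the setup above, and the mass-proportional weighting in the second case is the alternative the post argues about, not something Manfred proposes):

```python
def expected_paperclips(option, p_1kg):
    """Expected paperclips, given the probability the maximizer assigns to being the 1 kg machine."""
    return p_1kg * 5 if option == "A" else (1 - p_1kg) * 4   # A pays only if 1 kg, B only if 2 kg

# Equal weighting of the two computers (the weighting argued for above):
print(expected_paperclips("A", 0.5), expected_paperclips("B", 0.5))       # 2.5 2.0   -> pick A

# Mass-proportional weighting (1 kg vs 2 kg, so probability 1/3 of being the light machine):
print(expected_paperclips("A", 1 / 3), expected_paperclips("B", 1 / 3))   # ~1.67 ~2.67 -> pick B
```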

Comment author: yttrium 11 February 2014 07:25:28PM * 0 points

If I understand you correctly, your scenario differs from the one I had in mind in that I would have both computers instantiated at the same time (I've clarified that in the post), and then consider the relative probability of experiencing what the 1 kg computer experiences versus experiencing what the 2 kg computer experiences. It seems like one could adapt your scenario by creating a 1 kg and a 2 kg computer at the same time, offering both of them a choice between A and B, and then generating 5 paperclips if the 1 kg computer chooses A and (additionally) 4 paperclips if the 2 kg computer chooses B. Then the right choice for both systems (which still can't distinguish themselves from each other) would still be A, but I don't see how this is related to the relative weight of both maximizers' experiences - after all, how much value to give each of the computers' votes is decided by the operators of the experiment, not the computers. To the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options, I'd still say that the maximizer should choose B.

Weighting the probability of being a mind by the quantity of the matter composing the computer that calculates that mind

0 yttrium 11 February 2014 03:34PM

TL;DR by lavalamp: Treating "computers running minds" as discrete objects might cause a paradox in probability calculations that involve self-location. "The probability of being a certain mind" is probably an extensive physical quantity, i.e. rises proportionally to the size of the physical system doing the associated computations.

There are two computers simulating two minds. At some time, one of the minds is being shown a red light, and the other one is shown a green one (call this "Situation 1"). Conditioned on you being one of the minds, what is the probability you should assign to seeing red?

Naively, the answer seems to be 1/2, which comes from assigning equal probability to being each of the minds. If one had three computers and showed two of them a red light and the third one a green one, the probability would be calculated as 2/3, even if the two red-seeing computers are in exactly the same computational state at all times (call this "Situation 2").

However, I think that taking this point of view leads to paradoxes.

An example: Consider an electrical circuit made of (ideal) wires, resistors, capacitors and transistors (sufficient in principle to build a computer); the supply voltage comes from outside of the circuit considered. Under assumptions about the physical implementation of this circuit that do not restrict the possible circuit diagrams, it is possible to split the matter composing it into two parts that both comprise working circuits, each reproducing the original circuit's behavior independently of the other part, analogously to how the Ebborians' brains are split.* To clarify, what I have in mind is cutting the wires and resistors along their length, orthogonally to their cross-sections - after the splitting, equivalent wires should be at equivalent potentials at the same time, but the currents flowing will be reduced by some factor.

Now imagine the circuit is a computer simulating the mind that is going to see red in Situation 1 (the mind that will see green still exists). If one splits the circuit as described, one suddenly ends up with two circuits simulating the same mind, i.e. Situation 2 (let's imagine that the computers are split before they are turned on for the first time, so that stream-of-consciousness considerations will not influence the calculated probability, like e.g. Deda answering 1/2 to Yu'el's question in the linked article). However, it is not clear how far apart the circuit components need to be before they should be considered "split". I.e., if one fixes a direction in which the circuits are moved apart and defines P(d) as the probability one should assign to seeing red, as a function of the distance d by which the circuits have been moved apart, then P(0) would be 1/2 and P(∞) would be 2/3 in the naive model, but there seems to be no intuitively correct shape for the function in between.

I therefore think it is more plausible that something closer to the correct way of calculating the probability of having one mind's experiences involves weighting this probability by the amount (maybe mass, or electron count) of matter that calculates the mind. If one does this, then after the splitting the matter comprising the two parts adds up exactly to the matter of the original circuit, so P(d) would be constant over all distances.
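A minimal sketch of the two counting rules (my own illustration; the specific masses are assumptions, chosen so that the red computer is split into two equal halves as in the example above):

```python
# Each computer is a (colour shown, mass of matter doing the computation) pair.

def p_red_by_count(computers):
    """Naive rule: every computer counts once, regardless of how much matter it contains."""
    return sum(1 for colour, _ in computers if colour == "red") / len(computers)

def p_red_by_mass(computers):
    """Proposed rule: weight each computer by the amount of matter calculating the mind."""
    total_mass = sum(mass for _, mass in computers)
    return sum(mass for colour, mass in computers if colour == "red") / total_mass

situation_1 = [("red", 1.0), ("green", 1.0)]                  # before the split
situation_2 = [("red", 0.5), ("red", 0.5), ("green", 1.0)]    # red computer split into two halves

print(p_red_by_count(situation_1), p_red_by_count(situation_2))  # 0.5  0.667 - jumps discontinuously
print(p_red_by_mass(situation_1), p_red_by_mass(situation_2))    # 0.5  0.5   - unchanged by the split
```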

What do you think?

*Namely, the resistors could be full cylinders with the wires protruding along the axes - one could then split them by a plane surface that includes the cylinder's axis and would end up with two resistors that each have twice the resistance.

The capacitors could look exactly like in this picture and could then be split up along a plane that includes the wires, so that the capacitance is halved.

The transistors could look exactly like in this picture (being homogeneous in the z-direction), and be split in half across a plane that is parallel to the picture shown.

If one does all of those splittings and splits up the wires so that the parts of each electronic component are connected in the same way as the original circuit was connected, and then operates the resulting circuits with the same supply voltage as one operated the original circuit, the voltages of all wires will always be the same as in the original circuit, and currents will be halved.
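To make that claim concrete, here is a small numerical check (my own sketch; it assumes the simplest possible case, a single resistor charging a single capacitor from a step in the supply voltage, rather than a full computer). Splitting gives two sub-circuits with 2R and C/2 each, so the time constant and hence all node voltages are unchanged, while each half carries half the current.

```python
import math

V_S = 5.0             # supply voltage (V), applied as a step at t = 0 (assumed)
R, C = 1.0e3, 1.0e-6  # original resistor (ohm) and capacitor (farad) (assumed values)

def v_cap(t, r, c):
    """Capacitor voltage of a series RC circuit charging towards V_S."""
    return V_S * (1.0 - math.exp(-t / (r * c)))

def current(t, r, c):
    """Current flowing through that circuit."""
    return (V_S / r) * math.exp(-t / (r * c))

for t in (0.5e-3, 1.0e-3, 2.0e-3):
    v_orig, i_orig = v_cap(t, R, C), current(t, R, C)
    v_half, i_half = v_cap(t, 2 * R, C / 2), current(t, 2 * R, C / 2)   # one of the two halves
    assert math.isclose(v_orig, v_half)        # same potentials as in the original circuit
    assert math.isclose(i_orig, 2 * i_half)    # each half carries half the original current
    print(f"t = {t*1e3:.1f} ms:  V = {v_orig:.3f} V,  I_orig = {i_orig*1e3:.3f} mA,  I_half = {i_half*1e3:.3f} mA")
```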

Consistency of reciprocity?

0 yttrium 16 December 2012 07:08PM

Many people see themselves as members of various groups (the population of their home country, or their social network), and feel justified in caring more about the well-being of people in this group than about that of others. They will argue from reciprocity: "Those people pay taxes in our country, they are entitled to more support from 'us' than others!" My question is: Is this inconsistent with some rationality axioms that seem obvious? What often-adopted or otherwise reasonable axioms are there that would make it inconsistent?

Comment author: APMason 04 June 2012 12:49:34PM 4 points

What happens if you're using this method and you're offered a gamble where you have a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don't take the deal you gain and lose nothing)? Isn't the "typical outcome" here a loss, even though we might really, really want to take the gamble? Or have I misunderstood what you propose?

Comment author: yttrium 05 June 2012 04:38:49PM 0 points

Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right tail of your utility distribution will change the median outcome of your "life": if 10^6 is larger than all the other utility you could ever receive, and you add a 49% chance of receiving it, the 50th-percentile utility afterwards should look like the 98th-percentile utility before.
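A quick numerical check of that percentile shift (my own sketch; the baseline lifetime-utility distribution is an arbitrary assumption, only its shape below the huge payoff matters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

baseline = rng.normal(loc=100.0, scale=15.0, size=n)    # assumed lifetime utility without the gamble
gamble = np.where(rng.random(n) < 0.49, 1.0e6, -5.0)    # APMason's gamble: 49% of 10^6, else lose 5

old_median = np.percentile(baseline, 50)
old_p98    = np.percentile(baseline, 98)
new_median = np.percentile(baseline + gamble, 50)

print(old_median, old_p98, new_median)
# The new median lands near the old 98th percentile: below the huge payoff, only the
# unboosted 51% of outcomes remain, and 0.50 / 0.51 is roughly the 98th percentile of those.
```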

Comment author: CarlShulman 04 June 2012 10:11:46PM * 1 point

A bounded utility function, on which increasing years of happy life (or money, or whatever) give only finite utility in the infinite limit, does not favor taking vanishing probabilities of immense payoffs. It also preserves normal expected utility calculations so that you can think about 90th percentile and 10th percentile, and lets you prefer higher payoffs in probable cases.

Basically, this "median outcome" heuristic looks like just a lossy compression of a bounded utility function's choice outputs, subject to new objections like APMason's. Why not just go with the bounded utility function?
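For illustration only (my own sketch, not CarlShulman's specific proposal; the bound and the utility curve are assumptions), here is how a bounded utility function defuses a mugger-style offer while leaving ordinary comparisons intact:

```python
import math

U_MAX = 100.0   # assumed bound on utility

def bounded_utility(payoff):
    """Bounded between -U_MAX and +U_MAX; roughly linear for payoffs small compared to U_MAX."""
    return U_MAX * math.tanh(payoff / U_MAX)

def expected_utility(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * bounded_utility(x) for p, x in outcomes)

pay_the_mugger = [(1e-20, 3 ** 50), (1 - 1e-20, -5.0)]   # vast promised payoff, near-certain loss of 5
refuse         = [(1.0, 0.0)]

print(expected_utility(pay_the_mugger))   # about -5: the capped payoff cannot rescue the near-certain loss
print(expected_utility(refuse))           # 0.0
```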

Comment author: yttrium 05 June 2012 04:34:23PM 0 points

I want it to be possible to have a very bad outcome: if I can play a lottery that has a cost of 1 utilium, a payoff of 10^7 and a winning chance of 10^-6, and if I can play this lottery enough times, I want to play it.
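A small simulation of that intuition (my own sketch; the number of simulated "lifetimes" is arbitrary): the median lifetime outcome of this lottery is negative while only a few plays are available, but becomes positive once enough plays are allowed, so a median-outcome agent does accept it under repetition.

```python
import numpy as np

rng = np.random.default_rng(0)
COST, PAYOFF, P_WIN = 1.0, 1.0e7, 1.0e-6   # the lottery from the comment above

for n_plays in (10_000, 100_000, 1_000_000, 10_000_000):
    wins = rng.binomial(n_plays, P_WIN, size=100_000)   # number of wins over many simulated lifetimes
    lifetime_totals = wins * PAYOFF - n_plays * COST
    print(n_plays, np.median(lifetime_totals))
# Up to ~100,000 plays the median lifetime contains no win, so the median total is negative;
# from ~1,000,000 plays on the median total is large and positive, matching the positive expectation.
```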

A plan for Pascal's mugging?

1 yttrium 04 June 2012 09:04AM

The idea is to compare not the results of actions, but the results of decision algorithms. The question that the agent should ask itself is thus:

"Suppose everyone1 who runs the same thinking procedure like me uses decision algorithm X. What utility would I get at the 50th percentile (not: what expected utility should I get), after my life is finished?"
Then, he should of course look for the X that maximizes this value.

Now, if you allow an arbitrary Turing-complete "decision algorithm", this heads into an infinite loop. But suppose that "decision algorithm" is defined as a huge lookup table of lots of different possible situations and the appropriate outputs.

Let's see what results such a thing should give:

  • If the agent has the opportunity to play a gamble, the probabilities involved are not small, and he expects to be allowed to play many gambles like this in the future, he should decide exactly as if he were maximizing expected utility: if he has made many decisions like this, he will get a positive utility difference at the 50th percentile if and only if his expected utility from playing the gamble is positive.
  • However, if Pascal's mugger comes along, he will decline: the total probability of living in a universe where people like this mugger ought to be taken seriously is small. In the probability distribution over utility at the end of the agent's lifetime, the possibility of getting tortured will manifest itself only very slightly at the 50th percentile - much less than the possibility of losing 5 dollars.

The reason why humans intuitively decline to give money to the mugger might be similar: they imagine not the expected utility of each decision, but the typical outcome of giving the mugger some money versus declining to.

1 I say this to make agents of the same type cooperate in prisoner-like dilemmas.
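A minimal sketch of the rule described in this post (my own formalisation, under assumptions the post leaves open: an arbitrary baseline distribution for the utility of the rest of the agent's life, independence of the extra gamble, and a Monte Carlo estimate of the 50th percentile):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
baseline = rng.normal(100.0, 15.0, N)   # assumed utility from everything else in the agent's life

def median_lifetime_utility(gamble):
    """50th-percentile lifetime utility if the agent additionally faces the given gamble.
    gamble: list of (probability, payoff) pairs."""
    probs, payoffs = zip(*gamble)
    extra = rng.choice(payoffs, size=N, p=probs)
    return np.percentile(baseline + extra, 50)

# An ordinary gamble with non-small probabilities: the median agrees with expected utility.
print(median_lifetime_utility([(0.5, +10.0), (0.5, -4.0)]),    # ~103: take the gamble
      median_lifetime_utility([(1.0, 0.0)]))                    # ~100: doing nothing is worse

# Pascal's mugger: a 10^-20 chance of a huge loss barely moves the 50th percentile,
# while paying the mugger surely costs 5 - so the median-outcome agent declines to pay.
print(median_lifetime_utility([(1.0, -5.0)]),                       # pay the mugger: ~95
      median_lifetime_utility([(1e-20, -1e30), (1 - 1e-20, 0.0)]))  # refuse: ~100
```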

List of underrated risks?

12 yttrium 30 May 2012 08:59PM

As everyone here knows, it would be a stupid idea to switch from airplanes to cars out of safety/terrorism concerns: Cars are a much more risky means of transportation than airplanes. But what other major risks are there that many people systematically undervalue or are not even consciously aware of?

The same question can be asked about underrated opportunities.
