
Comment author: ialdabaoth 18 June 2014 11:52:10PM 1 point [-]

level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.

Comment author: kybernetikos 19 June 2014 12:08:31AM 1 point [-]

And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.

Comment author: JonathanBirch 19 May 2013 08:51:57PM 13 points [-]

Thanks, everyone, for your comments on my paper. It’s great to see that it is generating discussion. I think I ought to take this opportunity to give a brief explanation of the argument I make in the paper, for the benefit of those who haven’t read it.

The basic argument goes like this. In the first section, I point out that the ‘Simulation Argument’ invokes (at different stages) two assumptions that I call Good Evidence (GE) and Impoverished Evidence (IE). GE is the assumption that I possess good evidence regarding the true physical limits of computation. IE is the assumption that my current evidence does not support any empirical claims non-neutral with respect to the hypothesis (SIM) that I am simulated—for example, the empirical claim that I possess two physically real human hands.

Although GE and IE may look in tension with one another, they are not necessarily incompatible. We can generate a genuine incompatibility, however, by introducing a third claim, Parity of Evidence (PE), stating that my epistemic access to the facts about my own physical constitution is at least as good as my epistemic access to the facts about the true physical limits of computation. Since GE, IE and PE are jointly incompatible, at least one of them must be false.
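
Schematically (this is a gloss of the structure, abstracting from the details in the paper):

$$\mathrm{GE} \wedge \mathrm{PE} \;\Rightarrow\; \text{good epistemic access to my own physical constitution (e.g. my hands)} \;\Rightarrow\; \neg\,\mathrm{IE},$$

so GE, PE and IE cannot all be true together.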

My own view (and a common view, I imagine) is that IE is false, while GE and PE are true. But rejecting IE would fatally compromise the Simulation Argument. So I spend most of the paper considering the two alternatives open to Bostrom: i.e., rejecting GE or rejecting PE. I argue that, if Bostrom rejects GE, the Simulation Argument still fails. I then argue that, if he rejects PE, the Simulation Argument succeeds, but it’s pretty hard to see how PE could be false. So neither of these alternatives is particularly promising.

One common response I’ve encountered focusses on GE, and asks: why does Bostrom actually need GE? Surely all he really needs is the conditional assumption that, if my evidence is veridical, then GE is true. This conditional assumption allows him to say that, if my evidence is veridical, then the Simulation Argument goes through in its original form; whereas if my evidence is not veridical because I’m simulated, then I’m simulated—so we just end up at the same conclusion by a different route.

This is roughly the line of response pressed here by Benja and Eliezer Yudkowsky. It’s a very reasonable response to my argument, but I don’t think it works. The quick explanation is that it’s just not true that, conditional on my evidence being veridical, the Simulation Argument goes through in its original form. This is essentially because conditionalizing on my evidence being veridical makes SIM a lot less likely than it otherwise would be, and this vitiates the indifference-based reasoning on which the Simulation Argument is based. But Benja is right to press me on the formal details here, so I’ll reply to his objection in a separate comment.
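
In the meantime, here is a toy way to see the direction of the effect (a rough sketch only, not the formal treatment in the paper). Write $V$ for 'my evidence is veridical'. Then

$$P(\mathrm{SIM}) = P(\mathrm{SIM}\mid V)\,P(V) + P(\mathrm{SIM}\mid \neg V)\,P(\neg V).$$

If, in this context, non-veridical evidence is overwhelmingly likely to be the product of simulation, then $P(\mathrm{SIM}\mid \neg V)$ is close to 1, and the identity forces $P(\mathrm{SIM}\mid V)$ to be lower than the unconditional $P(\mathrm{SIM})$ (assuming $P(\mathrm{SIM})$ is not already 1 and $\neg V$ has positive probability). So conditionalizing on veridicality is not a free move: it changes the probabilities that the indifference reasoning relies on.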

Comment author: kybernetikos 19 June 2014 12:02:28AM *  0 points [-]

It seems as if your argument rests on the assertion that my access to facts about my own physical constitution is at least as good as my access to facts about the limits of computation/simulation. You say 'physical limits', but I'm not sure why 'physical' in my universe is particularly relevant - what we care about is whether it's reasonable to expect many simulations of someone like me over time or not.

I don't think this assertion is correct. I can make a statement about the limits of computation/simulation - namely, that there is at least enough simulation power in the universe to simulate me and everything I am aware of - and that statement is true whether I am in a simulation or in a top-level universe, or even whether I believe in matter and physics at all.

I believe that this assertion - that the top-level universe contains at least enough simulation power to simulate someone like myself and everything of which they are aware - is something that I have better evidence for than the assertion that I have physical hands.

Have I misunderstood the argument, or do you disagree that I have better evidence for a minimum bound to simulation power than for any specific physical attribute?

Comment author: Ruby 18 June 2014 03:23:42AM *  22 points [-]

If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).

To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.

So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.

How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.

Comment author: kybernetikos 18 June 2014 11:12:47PM 4 points [-]

That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?

Comment author: rickc 17 August 2012 10:06:16PM 0 points [-]

First-time commenter.

Perhaps I missed this in a comment to a previous part, but I don't see why we have to assume the super-happies are honoring the original plan. If their negotiations with the baby-eaters failed, the SH owe the BE nothing. They have no reason not to forcibly modify the BE, and, consequently, no reason to alter themselves or the humans to eat babies. (They could have also simply wiped out the BE, but genocide seems like a worse solution than "fixing" the BEs.)

Comment author: kybernetikos 16 July 2013 09:51:37PM *  0 points [-]

The point is that they are the kind of species to deal with situations like this in a more or less fair-minded way. That will stand them in good stead in future difficult negotiations with other aliens.

Comment author: shminux 06 May 2013 04:14:54PM 1 point [-]

The prior probability of us being in a position to impact a googolplex people is on the order of one over googolplex, so your equations must be wrong

That's not at all how validity of physical theories is evaluated. Not even a little bit.

By that logic, you would have to reject most current theories. For example, Relativity restricted the maximum speed of travel, thus revealing that countless future generations will not be able to reach the stars. Archimedes's discovery of the buoyancy laws enabled future naval battles and ocean-faring, impacting billions so far (which is not a googolplex, but the day is still young). The discovery of fission and fusion still has the potential to destroy all those potential future lives. Same with computer research.

The only thing that matters in physics is the old mundane "fits current data, makes valid predictions". Or at least has the potential to make testable predictions some time down the road. The only time you might want to bleed (mis)anthropic considerations into physics is when you have no way of evaluating the predictive power of various models and need to decide which one is worth pursuing. But that is not physics, it's decision theory.

Once you have a testable working theory, your anthropic considerations are irrelevant for evaluating its validity.

Comment author: kybernetikos 08 May 2013 12:17:14PM 1 point [-]

It's likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty also requires considering how unique your opportunity to have that impact is. Archimedes had a massive impact, but there have also been a massive number of people through history who would have had the chance to come up with the same theories had they not already been discovered, so you have to offset Archimedes' leverage penalty by the fact that he wasn't uniquely capable of having that leverage.
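
To put rough numbers on that offset (a back-of-the-envelope gloss of the comment above, not anything stated in the original post): if the leverage penalty scales the prior by roughly $1/N$ for the claim of being the one person whose actions affect $N$ future people, but something like $M$ people through history were each in a position to make the same discovery, then the relevant prior is closer to

$$P(\text{someone like me has that leverage}) \approx \frac{M}{N} \quad\text{rather than}\quad \frac{1}{N},$$

which is much less extreme once $M$ is large.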

Comment author: BrassLion 01 October 2012 02:56:06PM 0 points [-]

Voted Lean: Death, but want to change my answer to "have no bloody clue". For the record, when I first thought about this my answer was Accept: Death.

Comment author: kybernetikos 02 May 2013 03:24:22PM 1 point [-]

I tend to think death, but then I'm not sure that we genuinely survive from one second to another.

I don't have a good way to meaningfully define the kind of continuity that most people intuitively think we have, and so I conclude that it could easily just be an illusion.

Comment author: kybernetikos 21 November 2012 10:42:03PM *  3 points [-]

Thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.

If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right, given what they know; however, the person who had been in the room originally and selected a door knows more, and can therefore assign different probabilities - i.e. 1/3 for the door they'd selected and 2/3 for the other door.

If you thought that the probabilities were 'out there' rather than descriptions of the state of knowledge of the individuals, you'd be very confused about how the probability of choosing correctly could be 2/3 and 1/2 at the same time.

Considering the Monty Hall problem as a way for part of the information in the host's head to be communicated to the contestant becomes the most natural way of thinking about it.
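
A quick simulation makes the three numbers concrete (a minimal sketch in Python; the door numbering and the host's tie-breaking rule are arbitrary choices of mine):

```python
import random

def play_once():
    prize = random.randrange(3)   # door hiding the good prize
    choice = random.randrange(3)  # contestant's initial pick
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    other = next(d for d in range(3) if d != choice and d != opened)
    return prize, choice, other

trials = 100_000
stay = switch = newcomer = 0
for _ in range(trials):
    prize, choice, other = play_once()
    stay += (choice == prize)        # contestant keeps the original door
    switch += (other == prize)       # contestant switches to the other door
    newcomer += (random.choice([choice, other]) == prize)  # newcomer picks blindly

print(stay / trials, switch / trials, newcomer / trials)  # roughly 0.33, 0.67, 0.5
```

The same two doors carry 1/3 and 2/3 for the original contestant and 1/2 for the newcomer, purely because the two observers hold different information.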

Comment author: JGWeissman 06 June 2012 04:20:02PM 0 points [-]

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'? That sounds pretty unfair to me. Wouldn't you be saying 'give me my money you cheating scum'?

We were discussing if it is a "fair" test of the decision theory, not if it provides a "fair" experience to any people/agents that are instantiated within the scenario.

And as has been already pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total $1001000 utility for the (supposedly non-simulated) CDT agent.

I am aware that they are different problems. That is why the version of the problem in which simulated agents get utility that the real agent cares about does nothing to address the criticism of TDT that it loses in the version where simulated agents get no utility. Postulating the former in response to the latter was a failure to use the Least Convenient Possible World.

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Comment author: kybernetikos 19 June 2012 07:46:29AM 0 points [-]

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Good point.

That clears up the possibility of summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair (it's what I was trying to get at in the 'very different problems' statement).

Comment author: JGWeissman 06 June 2012 04:34:19PM 0 points [-]

Suppose that Omega doesn't reveal the full source code of the simulated TDT agent, but just reveals enough logical facts about the simulated TDT agent to imply that it uses TDT. Then the "real" TDT Prime agent cannot deduce that it is different.

Comment author: kybernetikos 19 June 2012 07:30:10AM *  0 points [-]

Yes. I think that as long as there is any chance of you being the simulated agent, then you need to one box. So you one box if Omega tells you 'I simulated some agent', and one box if Omega tells you 'I simulated an agent that uses the same decision procedure as you', but two box if Omega tells you 'I simulated an agent that had a different copyright comment in its source code to the comment in your source code'.

This is just a variant of the 'detect if I'm in a simulation' function that others have mentioned: if Omega gives you access to that information in any way, you can two box. Of course, I'm a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?

Comment author: khafra 06 June 2012 12:07:47PM 2 points [-]

Good question. I looked, and--although electric toothbrushes do remove more plaque and tartar--they also remove 3x as much healthy tooth enamel.

Comment author: kybernetikos 06 June 2012 12:49:40PM *  3 points [-]

3x sounds really scary, but I have no knowledge of whether a 4 micron extra loss of sound dentin is something to be concerned about or not.
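
For what it's worth, if the 3x figure and the 4 micron figure describe the same comparison (which I'm assuming, rather than something stated above), the implied absolute numbers are small either way: with a manual brush removing $x$ and an electric one removing $3x$, the extra loss is $3x - x = 2x = 4\,\mu\mathrm{m}$, so roughly 2 microns for manual versus 6 microns for electric.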
