And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.

It seems as if your argument rests on the assertion that my access to facts about my physical condition is at least as good as my access to facts about the limitations of computation/simulation. You say 'physical limitations', but I'm not sure why the 'physical' in my universe is particularly relevant: what we care about is whether or not it's reasonable for there to be many simulations of someone like me over time.

I don't think this assertion is correct. I can make a statement about the limits of computation/simulation that is true whether I am in a simulation or in a top-level universe, or even whether I believe in matter and physics at all: namely, that there is at least enough simulation power in the universe to simulate me and everything I am aware of.

I believe that this assertion, that the top-level universe contains at least enough simulation power to simulate someone like myself and everything of which they are aware, is something that I have better evidence for than the assertion that I have physical hands.

Have I misunderstood the argument, or do you disagree that I have better evidence for a minimum bound on simulation power than for any specific physical attribute?

That's very interesting, but isn't the level-1 thinking closer to deontological ethics than to virtue ethics, since it is based on rules rather than on the character of the moral agent?

The point is that they are the kind of species to deal with situations like this in a more or less fair-minded way. That will stand them in good stead in future difficult negotiations with other aliens.

It's likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty also requires considering how unique your opportunity to have that impact was. Archimedes had a massive impact, but a massive number of people through history would have had the chance to come up with the same theories had they not already been discovered, so Archimedes' leverage penalty has to be offset by the fact that he wasn't uniquely capable of having that leverage.
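To make my reading concrete, here's a toy calculation (the numbers are purely illustrative, mine rather than the article's):

```python
# Toy illustration of offsetting a leverage penalty (all numbers invented).
affected = 10**9                 # people the discovery plausibly influences
leverage_penalty = 1 / affected  # prior penalty for claiming that much leverage
candidates = 10**4               # people positioned to make the same discovery
offset_prior = candidates * leverage_penalty
print(offset_prior)              # 1e-05: still a penalty, but far less extreme
```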

I tend to think death, but then I'm not sure that we genuinely survive from one second to another.

I don't have a good way to meaningfully define the kind of continuity that most people intuitively think we have, and so I conclude that it could easily just be an illusion.

The value of thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.

If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right, given what they know; however, the person who had been in the room originally and selected a door knows more, and can therefore assign different probabilities: 1/3 for the door they'd selected and 2/3 for the other door.

If you thought that the probabilities were 'out there' rather than descriptions of the state of knowledge of the individuals, you'd be very confused about how the probability of choosing correctly could be 2/3 and 1/2 at the same time.

Considering the Monty Hall problem as a way for part of the information in the host's head to be communicated to the contestant then becomes the most natural way of thinking about it.
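A quick simulation makes the asymmetry concrete (a minimal Python sketch of the standard always-stay versus always-switch strategies; the code is my illustration, not part of the original problem statement):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the final pick wins the good prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # ~0.333 vs ~0.667
```

The newcomer, who never saw which door was originally picked, really does face a 50/50 choice between the two closed doors; the 1/3 versus 2/3 split only exists relative to the original contestant's state of knowledge.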

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Good point.

That clears up the possibility of summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair (it's what I was trying to get at in the 'very different problems' statement).

Yes. I think that as long as there is any chance of you being the simulated agent, then you need to one-box. So you one-box if Omega tells you 'I simulated some agent', and one-box if Omega tells you 'I simulated an agent that uses the same decision procedure as you', but two-box if Omega tells you 'I simulated an agent whose source code had a different copyright comment from the one in yours'.

This is just a variant of the 'detect if I'm in a simulation' function that others mention, i.e. if Omega gives you access to that information in any way, you can two-box. Of course, I'm a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?
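The rule I'm describing amounts to something like this sketch (my own shorthand; `can_prove_not_simulated` stands in for any distinguishing information Omega leaks, such as the differing copyright comment):

```python
def choose_boxes(can_prove_not_simulated: bool) -> str:
    # If any of Omega's simulated agents might be you, your decision here is
    # also the simulated decision that filled (or emptied) the opaque box,
    # so one-boxing is the winning move.
    return "two-box" if can_prove_not_simulated else "one-box"
```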

3x sounds really scary, but I have no knowledge of whether an extra 4 microns of lost sound dentin is something to be concerned about or not.
