It seems as if your argument rests on the assertion that my access to facts about my physical condition is at least as good as my access to facts about the limitations of computation/simulation. You mention 'physical limitations', but I'm not sure why 'physical' in my universe is particularly relevant: what we care about is whether it's reasonable for there to be many simulations of someone like me over time or not.
I don't think this assertion is correct. I can make a statement about the limits of computation / simulation - i.e. that there is at least en...
That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?
The point is that they are the kind of species to deal with situations like this in a more or less fair-minded way. That will stand them in good stead in future difficult negotiations with other aliens.
It's likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty also requires considering how unique your opportunity to have that impact would be. Archimedes had a massive impact, but a massive number of people through history would have had the chance to come up with the same theories had they not already been discovered, so you have to offset Archimedes' leverage penalty by the fact that he wasn't uniquely capable of having that leverage.
I tend to think death, but then I'm not sure that we genuinely survive from one second to another.
I don't have a good way to meaningfully define the kind of continuity that most people intuitively think we have, and so I conclude that it could easily just be an illusion.
Thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.
If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right for themselves; however, the person who had been...
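The asymmetry between the original contestant and the newcomer can be checked with a quick simulation. This is a sketch I'm adding for illustration (the door numbering is arbitrary): it estimates the win rate for staying with the first pick, for switching, and for the newcomer's random choice between the two remaining doors.

```python
import random

def monty_hall(n=100_000):
    """Estimate win rates for three strategies after the host opens a
    booby-prize door: stay with the first pick, switch, or (like the
    newcomer) pick one of the two remaining doors at random."""
    stay = switch = newcomer = 0
    for _ in range(n):
        prize = random.randrange(3)   # door hiding the good prize
        pick = random.randrange(3)    # contestant's initial pick
        # Host opens a door that is neither the pick nor the prize.
        host = next(d for d in range(3) if d not in (pick, prize))
        # The one remaining unopened door.
        other = next(d for d in range(3) if d not in (pick, host))
        stay += (pick == prize)
        switch += (other == prize)
        newcomer += (random.choice([pick, other]) == prize)
    return stay / n, switch / n, newcomer / n
```

The simulation comes out near 1/3 for staying, 2/3 for switching, and 1/2 for the newcomer: both parties are assigning probabilities correctly given what they each know.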
The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".
Good point.
That clears up the possibility of summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two de...
Yes. I think that as long as there is any chance of you being the simulated agent, then you need to one box. So you one box if Omega tells you 'I simulated some agent', and one box if Omega tells you 'I simulated an agent that uses the same decision procedure as you', but two box if Omega tells you 'I simulated an agent that had a different copyright comment in its source code from the comment in your source code'.
This is just a variant of the 'detect if I'm in a simulation' function that others mention. That is, if Omega gives you access to that information in any way, you can two box. Of course, I'm a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?
3x sounds really scary, but I have no knowledge of whether a 4-micron extra loss of sound dentin is something to be concerned about or not.
In a prisoner's dilemma Alice and Bob affect each other's outcomes. In the Newcomb problem, Alice affects Bob's outcome, but Bob doesn't affect Alice's outcome. That's why it's OK for Bob to consider himself different in the second case as long as he knows he is definitely not Alice (because otherwise he might actually be in a simulation) but not OK for him to consider himself different in the prisoner's dilemma.
The key thing is the question as to whether it could have been you that has been simulated. If all you know is that you're a TDT agent and what Omega simulated is a TDT agent, then it could have been you. Therefore you have to act as if your decision now may be either real or simulated. If you know you are not what Omega simulated (for any reason), then you know that you only have to worry about the 'real' decision.
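The rule I'm describing can be sketched as a tiny function. This is a toy sketch, and the statement strings are hypothetical encodings I've made up for illustration, not part of the original problem:

```python
def newcomb_decision(omega_statement: str) -> str:
    """Toy sketch of the rule above: one box unless Omega's statement
    definitely rules out that you are the simulated agent."""
    # Statements that leave open 'it could have been me':
    could_be_me = omega_statement in (
        "I simulated some agent",
        "I simulated an agent with the same decision procedure as you",
    )
    # Anything else (e.g. 'the simulated agent had a different
    # copyright comment in its source code') rules you out.
    return "one box" if could_be_me else "two box"
```

The point of the sketch is just that the decision turns entirely on whether you can exclude yourself from the set of agents Omega might have simulated.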
I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".
It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'Ok, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying '...
Omega (who experience has shown is always truthful)
Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.
If we are assuming that Omega is trustworthy, then Omega needs to be assumed to be trustworthy in the simulation too. If they didn't allow the simulated version of the agent to enjoy the fruits of their choice, then they would not be trustworthy.
Actually, I'm not sure this matters. If the simulated agent knows he's not getting a reward, he'd still want to choose so that the nonsimulated version of himself gets the best reward.
So the problem is that the best answer is unavailable to the simulated agent: in the simulation you should one box and in the 'real' problem you'd like to two box, but you have no way of knowing whether you're in the simulation or the real problem.
Agents that Omega didn't simulate don't have the problem of worrying whether they're making the decision in a simulation or not, ...
The success is said to be by a researcher who has previously studied the effect of "geomagnetic pulsations" on ESP, but I could not locate that earlier work online.
Can we have a prejudicial summary of the previous studies of the 6 researchers who failed to replicate the effect too?
I noticed that if I'm apathetic about doing a task, then I also tend to be apathetic about thinking about doing the task, whereas tasks that I get done I tend to be so enthusiastic about that I have planned them and done them in my head long before I do them in reality. My conclusion: apathy starts in the mind, and the cure for it starts in the mind too.
...But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to outpu
Set up questions that require you to assume something odd in the preamble, and then conclude with something unpalatable (and quite possibly false). This tests to see if people can apply rationality even when it goes against their emotional involvement and current beliefs. As well as checking that they reach the conclusion the logic demands, give them an opportunity as part of a later question to flag up the premise that they feel caused the odd conclusion.
Something bayesian - like the medical test questions where the incidence in the general population ...
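A minimal version of that calculation, with made-up numbers (the prevalence and test rates below are illustrative assumptions, not from the comment):

```python
def posterior_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# Illustrative numbers (assumptions): a 1-in-1000 condition, a test
# with 99% sensitivity and a 5% false-positive rate.
print(posterior_positive(0.001, 0.99, 0.05))  # roughly 0.019, i.e. ~2%
```

Most people guess something near 99% rather than ~2%, which is exactly the base-rate neglect such a question would test for.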
I agree. In particular I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the 'reality' of things when in fact they're arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher level systems are implemented on.) I don't think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts, not the parts themselves....
And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.