Comment author: AlexMennen 04 June 2012 11:39:19PM 0 points [-]

But a couple of difficulties arise. The first is that if TDT variants can logically separate from each other (i.e. can prove that their decisions aren't linked) then they won't co-operate with each other in Prisoner's Dilemma. We could end up with a bunch of CliqueBots that only co-operate with their exact clones, which is not ideal.

I think this is avoidable. Let's say that there are two TDT programs called Alice and Bob, which are exactly identical except that Alice's source code contains a comment identifying it as Alice, whereas Bob's source code contains a comment identifying it as Bob. Each of them can read their own source code. Suppose that in problem 1, Omega reveals that the source code it used to run the simulation was Alice's. Alice has to one-box. But Bob faces a different situation than Alice does, because he can find a difference between his own source code and the one Omega simulated, whereas Alice could not. So Bob can two-box without affecting what Alice would do.

However, if Alice and Bob play the prisoner's dilemma against each other, the situation is much closer to symmetric. Alice faces a player identical to itself except with the "Alice" comment replaced with "Bob", and Bob faces a player identical to itself except with the "Bob" comment replaced with "Alice". Hopefully, their algorithm would compress this information down to "The other player is identical to me, but has a comment difference in its source code", at which point each player would be in an identical situation.
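To make the "compress the comment difference away" step concrete, here is a minimal sketch, assuming the agents are Python programs and that "logically equivalent" is approximated by comparing parsed syntax trees (which discard comments). The function name and the toy sources are illustrative, not from any actual TDT implementation:

```python
import ast

def logically_equivalent(src_a: str, src_b: str) -> bool:
    """Compare two program sources while ignoring comments.

    Python's ast module discards comments during parsing, so two
    sources that differ only in comments yield identical trees.
    """
    return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))

alice = "# I am Alice\ndef decide(): return 'cooperate'\n"
bob = "# I am Bob\ndef decide(): return 'cooperate'\n"

assert logically_equivalent(alice, bob)  # comment difference compressed away
```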

Comment author: kybernetikos 06 June 2012 12:12:51PM *  0 points [-]

In a prisoner's dilemma, Alice and Bob affect each other's outcomes. In the Newcomb problem, Alice affects Bob's outcome, but Bob doesn't affect Alice's outcome. That's why it's OK for Bob to consider himself different in the second case as long as he knows he is definitely not Alice (because otherwise he might actually be in a simulation), but not OK for him to consider himself different in the prisoner's dilemma.

Comment author: cousin_it 23 May 2012 05:54:33PM *  11 points [-]

I'm not sure the part about comparing source code is correct. TDT isn't supposed to search for exact copies of itself, it's supposed to search for parts of the world that are logically equivalent to itself.

Comment author: kybernetikos 06 June 2012 12:05:55PM 0 points [-]

The key thing is the question of whether it could have been you that was simulated. If all you know is that you're a TDT agent and that what Omega simulated is a TDT agent, then it could have been you. Therefore you have to act as if your decision now may be either real or simulated. If you know you are not what Omega simulated (for any reason), then you know that you only have to worry about the 'real' decision.
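A toy payoff calculation makes the asymmetry explicit. This sketch assumes the conventional Newcomb amounts ($1,000,000 in the opaque box if the simulated agent one-boxed, $1,000 always in the transparent box); those numbers are an assumption, not stated above:

```python
BIG, SMALL = 1_000_000, 1_000  # conventional Newcomb amounts (assumed)

def real_payoff(sim_one_boxed: bool, you_one_box: bool) -> int:
    """Payoff of the real (non-simulated) decision."""
    opaque = BIG if sim_one_boxed else 0
    return opaque if you_one_box else opaque + SMALL

# If it could have been you that was simulated, your choice and the
# simulation's choice are linked, so only two cases are possible:
linked = {one_box: real_payoff(one_box, one_box) for one_box in (True, False)}
print(linked)  # {True: 1000000, False: 1000} -> one-boxing wins

# If you know you were NOT simulated, the box contents are fixed
# independently of your choice, and two-boxing dominates either way:
for sim in (True, False):
    assert real_payoff(sim, you_one_box=False) >= real_payoff(sim, you_one_box=True)
```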

Comment author: JGWeissman 24 May 2012 03:18:00AM 0 points [-]

Of course, this doesn't work if the simulated TDT agent is not aware that it won't receive a reward.

The simulated TDT agent is not aware that it won't receive a reward, and therefore it does not work.

This strays pretty close to "Omega is all-powerful and out to make sure you lose"-type problems.

Yeah, it doesn't seem right to me that the decision theory being tested is used in the setup of the problem. But I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

Comment author: kybernetikos 06 June 2012 11:54:52AM *  0 points [-]

I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said, 'OK, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'? That sounds pretty unfair to me. Wouldn't you be saying 'give me my money, you cheating scum'?

And as has already been pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives unbounded utility for the (simulated) TDT agents and a total $1,001,000 utility for the (supposedly non-simulated) CDT agent.
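For what it's worth, here is the arithmetic behind that claim, under the (assumed) conventional Newcomb amounts and the reading that every rewarded simulation counts toward the total:

```python
BIG, SMALL = 1_000_000, 1_000  # conventional Newcomb amounts (assumed)

def total_tdt_utility(n_simulations: int) -> int:
    # Each simulated TDT agent one-boxes; if Omega is trustworthy
    # inside the simulations too, each collects the large prize.
    return n_simulations * BIG  # unbounded as n_simulations grows

def total_cdt_utility() -> int:
    # The single non-simulated CDT agent two-boxes against a box
    # that the simulated TDT agent caused to be filled.
    return BIG + SMALL  # $1,001,000

print(total_tdt_utility(10), total_cdt_utility())
```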

Comment author: Khoth 23 May 2012 11:15:39AM 8 points [-]

Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.

Comment author: kybernetikos 06 June 2012 11:51:02AM 0 points [-]

Omega (who experience has shown is always truthful)

Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.

If we are assuming that Omega is trustworthy, then Omega needs to be assumed to be trustworthy in the simulation too. If they didn't allow the simulated version of the agent to enjoy the fruits of their choice, then they would not be trustworthy.

Comment author: kybernetikos 01 June 2012 10:01:33PM *  0 points [-]

Actually, I'm not sure this matters. If the simulated agent knows he's not getting a reward, he'd still want to choose so that the non-simulated version of himself gets the best reward.

So the problem is that the best answer is unavailable to the simulated agent: in the simulation you should one-box, and in the 'real' problem you'd like to two-box, but you have no way of knowing whether you're in the simulation or the real problem.

Agents that Omega didn't simulate don't have the problem of worrying whether they're making the decision in a simulation or not, so two-boxing is the correct answer for them.

An agent that has to make the decision twice, where the first decision affects the payoff of the second, faces a very different decision from an agent that only makes it once. So I think that in reality the problem perhaps does collapse into an 'unfair' one, because the TDT agent is presented with an essentially different problem to a non-TDT agent.
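One way to see the "essentially different problem" point is that the TDT agent's single policy is evaluated twice, with the first evaluation setting the payoff of the second, whereas a non-TDT agent is only ever the second evaluation. A minimal sketch, again assuming the conventional amounts:

```python
BIG, SMALL = 1_000_000, 1_000  # conventional Newcomb amounts (assumed)

def play(policy) -> int:
    sim_choice = policy()   # first evaluation: inside Omega's simulation
    opaque = BIG if sim_choice == "one-box" else 0
    real_choice = policy()  # second evaluation: the "real" decision
    return opaque if real_choice == "one-box" else opaque + SMALL

print(play(lambda: "one-box"))  # 1000000
print(play(lambda: "two-box"))  # 1000

# An agent Omega didn't simulate is only the second call; with the
# opaque box's contents already fixed, two-boxing dominates for it.
```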

Comment author: kybernetikos 14 March 2012 09:26:02AM 0 points [-]

The success is said to be by a researcher who has previously studied the effect of "geomagnetic pulsations" on ESP, but I could not locate it online.

Can we have a prejudicial summary of the previous studies of the 6 researchers who failed to replicate the effect too?

Comment author: kybernetikos 10 February 2012 05:46:48PM *  4 points [-]

I noticed that if I'm apathetic about doing a task, then I also tend to be apathetic about thinking about doing the task, whereas the tasks I get done tend to be ones I'm so enthusiastic about that I have planned and done them in my head long before I do them physically. My conclusion: apathy starts in the mind, and the cure for it starts in the mind too.

Comment author: kybernetikos 02 September 2011 08:46:01AM *  1 point [-]

But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.

This doesn't follow the spirit of the "keeping it secret" part of the setup. If we know the exact mechanism the doctor uses to make decisions, then we would be able to deduce that she probably saved those five patients with the organs from the missing traveler, so it's no longer secret. To fairly accept the thought experiment, the doctor has to be certain that nobody will be able to deduce what she's done.

It seems to me that you haven't really denied the central point, which is that under consequentialism the doctor should harvest the organs if she is certain that nobody will be able to deduce what she has done.

Comment author: AnnaSalamon 29 March 2009 06:24:30PM 1 point [-]

If you want to test for rationality, ask questions that require rationality to get the right answer.

Any suggestions? That's basically the idea with section D (the heuristics-and-biases-type questions, which have correct answers) and (with more interpretive ambiguity, because it is less obvious which beliefs are correct) with section E (questions about current beliefs).

Comment author: kybernetikos 04 August 2011 07:25:11PM 1 point [-]

Set up questions that require you to assume something odd in the preamble, and then conclude with something unpalatable (and quite possibly false). This tests whether people can apply rationality even when it goes against their emotional involvement and current beliefs. As well as checking that they reach the conclusion demanded (logic), also give them an opportunity, as part of a later question, to flag the premise that they feel caused the odd conclusion.

Something Bayesian, like the medical test questions where the incidence in the general population is really low, though that specific example has been used so much that loads of people know it. Maybe take some stats from newspaper reports and see if appropriate conclusions can be drawn.
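For reference, the calculation behind that classic question; the numbers here (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions:

```python
def posterior(prior: float, sensitivity: float, false_pos: float) -> float:
    """P(disease | positive test) by Bayes' theorem."""
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

# A positive result still leaves the disease unlikely, which is the
# conclusion most people miss when they neglect the low base rate:
print(posterior(prior=0.01, sensitivity=0.90, false_pos=0.09))  # ~0.092
```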

"When was the last time you changed your mind about something you believed?" tests peoples ability to apply their rationality.

Comment author: Logos01 21 July 2011 07:27:19PM 8 points [-]

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory.

I have often stated that, as a physicalist, the mere fact that something does not independently exist -- that is, it has no physically discrete existence -- does not mean it isn't real. The number three is real -- but does not exist. It cannot be touched, sensed, or measured; yet if there are three rocks there really are three rocks. I define "real" as "a pattern that proscriptively constrains that which exists". A human mind is real; but there is no single part of your physical body you can point to and say, "this is your mind". You are the pattern that your physical components conform to.

It seems very often that objections to reductionism are founded in a problem of scale: the inability to recognize that things which are real from one perspective remain real at that perspective even if we consider a different scale.

It would seem, to me, that "eliminativism" is essentially a redux of this quandary but in terms of patterns of thought rather than discrete material. It's still a case of missing the forest for the trees.

Comment author: kybernetikos 22 July 2011 09:14:51AM *  0 points [-]

I agree. In particular, I often find these discussions very frustrating, because people arguing for eliminativism seem to think they are arguing about the 'reality' of things when in fact they're arguing about the scale of things (and sometimes about the specificity of the underlying structures that the higher-level systems are implemented on). I don't think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts, not the parts themselves. (Also, why would we expect any biological system to do one thing and one thing only?)

I regard almost all these questions as very similar to the demarcation problem. A higher-level abstraction is real if it provides predictions that often turn out to be true. It's acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.

This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.
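To be concrete about what "show me the model" would look like: score each candidate model by predictive accuracy on observed choices, and adopt whichever predicts better. Everything in this sketch (the situations, the preference table) is hypothetical:

```python
from typing import Callable, Sequence

def accuracy(model: Callable[[str], str],
             observed: Sequence[tuple[str, str]]) -> float:
    """Fraction of (situation, actual_choice) pairs the model predicts."""
    hits = sum(model(situation) == choice for situation, choice in observed)
    return hits / len(observed)

# Folk-psychology model: predict whatever the person prefers.
preferences = {"tea or coffee": "coffee", "walk or drive": "walk"}
folk_model = lambda situation: preferences.get(situation, "unknown")

data = [("tea or coffee", "coffee"), ("walk or drive", "walk")]
print(accuracy(folk_model, data))  # 1.0 -- an eliminativist model
# would have to beat this score to justify dropping talk of preferences
```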
