Will_Sawin comments on Bayesians vs. Barbarians - Less Wrong

51 Post author: Eliezer_Yudkowsky 14 April 2009 11:45PM


Comment author: Will_Sawin 20 June 2010 01:35:18AM -1 points

I know this post is long, long dead but:

if they have common knowledge of each other's source code.

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

Alternatively, I'm considering all the strategies I could use, based on looking at my opponent's strategy, and one of them is "Cooperate only if the opponent, when playing against himself, would defect."

"Common knowledge of each other's rationality" doesn't seem to help. Knowing I use TDT doesn't give someone the ability to make the same computation I do, and so engage TDT. They have to actually look into my brain, which means they need a bigger brain, which means I can't look into theirs. If I meet one of your perfectly rational agents who cooperates on a true Prisoner's Dilemma, I'm going to defect. And win. Rationalists should win.
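The strategy proposed above ("Cooperate only if the opponent, when playing against himself, would defect") is itself self-undermining: played against a copy of itself it has no consistent output. A minimal Python sketch (the function name is invented for illustration):

```python
def anti_mirror(opponent):
    # "Cooperate only if the opponent, playing against himself, would defect."
    # To decide, it must run the opponent on itself first.
    return "C" if opponent(opponent) == "D" else "D"

# anti_mirror(anti_mirror) recurses without ever returning a move
# (in Python this eventually raises RecursionError): a diagonalization,
# not a winning strategy.
```

This is the same diagonal move that breaks "I'll just inspect your strategy and do one better": the inspection relation can't be total in both directions.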

Comment author: wedrifid 20 June 2010 02:55:31AM 5 points

Knowing I use TDT doesn't give someone the ability to make the same computation I do, and so engage TDT.

It is possible to predict the output of a system without emulating the system. We can use the idea of 'emulating their behavior' as an intuition pump if it helps, but assuming that emulation is required is a mistake.
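A toy sketch of prediction without emulation (all names here are invented for illustration): for a sufficiently restricted agent, a purely syntactic check on its source fixes its output, and nothing is ever run.

```python
# Source text of a hypothetical unconditional cooperator.
COOPERATE_BOT_SRC = "def move(opponent): return 'C'"

def predict_move(source):
    # Purely textual analysis: if the source has no branches, its single
    # return statement determines the output without executing anything.
    if "if" not in source and "return 'C'" in source:
        return "C"
    if "if" not in source and "return 'D'" in source:
        return "D"
    return "unknown"
```

Real predictors would need far more than string matching, but the point stands: prediction is a weaker requirement than emulation.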

If I meet one of your perfectly rational agents who cooperates on a true Prisoner's Dilemma, I'm going to defect. And win.

Why on earth would I cooperate with you? You just told me you were going to defect!

(But I do respect your grappling with the problem. It is NOT trivial. Well, I should say it is trivial but it is hard to get your head around it, particularly with our existing intuitions.)

Comment author: DanielLC 03 May 2013 06:21:15AM 3 points

A = "Preceded by its own quotation with A's and B's swapped is B's source code" preceded by its own quotation with A's and B's swapped is B's source code.

B = "Preceded by its own quotation with B's and A's swapped is A's source code" preceded by its own quotation with B's and A's swapped is A's source code.

A and B each now contain the other's source code.

Edit: I used "followed" when it should have been "preceded".
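DanielLC's construction can be sketched in Python, with `repr` playing the role of "quotation" (the names are illustrative, not part of the original comment):

```python
def swapped(s):
    # Exchange every 'A' and 'B' character.
    return s.translate(str.maketrans("AB", "BA"))

def make_source(quote):
    # "Preceded by its own quotation": the quotation first, then the text.
    return repr(quote) + " " + quote

quote_A = "preceded by its own quotation with A's and B's swapped is B's source code"
A = make_source(quote_A)            # A's full source
B = make_source(swapped(quote_A))   # B's full source, the mirror image

# Each source determines the other by a pure text transformation, so each
# "contains" the other's source without any infinite verbatim nesting.
```

The circularity dissolves because each agent stores a finite recipe for the other's source, not a copy of a copy of a copy.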

Comment author: wedrifid 20 June 2010 02:49:28AM 3 points

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

No. If you know all the relevant data yourself, you don't have to store it again just because B knows it. Nesting verbatim copies is just a naive, inefficient way to implement the 'source code'. Call the code 'DRY', for example. Or consider it an instruction to do a 'shallow copy' and a 'memory free' after getting a positive result on a 'deep compare'.
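A minimal sketch of the 'deep compare' idea (assuming, for illustration, that each agent can read both source texts as strings): knowing the opponent's source reduces to a comparison against text the agent already carries, with no nested copy required.

```python
def decide(my_source, opponent_source):
    # 'Deep compare' then act: cooperate iff the opponent's code is
    # identical to mine. My knowledge of B is one comparison, not a
    # copy-of-a-copy regress.
    return "C" if opponent_source == my_source else "D"
```

When both agents run the same code, each trivially "contains" the other's source by containing its own.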

Comment author: Qiaochu_Yuan 03 May 2013 06:45:51AM 2 points

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

The idea is that A and B are passed each other's source code as input (and know their own source code thanks to Kleene's recursion theorem, which guarantees that Turing machines have access to their own source code WLOG; DanielLC's comment above essentially proves a special case). There's no reason you can't do this, although you won't be able to deduce whether your opponent halts and so forth.

Alternatively, I'm considering all the strategies I could use, based on looking at my opponent's strategy, and one of them is "Cooperate only if the opponent, when playing against himself, would defect."

Your opponent might not halt when given himself as input.
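The non-halting worry can be handled with a step budget. A sketch, assuming (purely for illustration) that strategies are written as Python generators so their execution can be metered; all names are invented:

```python
def run_with_budget(strategy, arg, budget=1000):
    # Step the strategy at most `budget` times; if it finishes, return
    # its move, otherwise report failure instead of hanging forever.
    gen = strategy(arg)
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value  # the strategy's return value
    return None  # did not halt within budget: treat the move as unknown

def loopy(opponent):
    while True:
        yield  # never halts

def cooperate_bot(opponent):
    return "C"
    yield  # unreachable; makes this function a generator
```

Defecting against any opponent whose simulation exceeds the budget is one standard way to make "run my opponent on himself" safe.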

Comment author: MinibearRex 27 March 2011 02:16:48PM 1 point

If I meet one of your perfectly rational agents who cooperates on a true Prisoner's Dilemma, I'm going to defect. And win. Rationalists should win.

The problem with your plan is that TDT agents don't always cooperate. I will only cooperate if I have reason to believe that you and I are similar enough that we will decide to do the same thing for the same reasons. I hate to burst your bubble, but you are not the first person in all of recorded history to think of this. Other people are allowed to be smart too. If you come up with a clever reason to defect when playing against me, it is very possible (perhaps even likely, although I don't know you all that well) that I will think of it too.
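A toy version of this reply (identifiers invented for the sketch): a conditional cooperator only cooperates with a recognizable twin, so a committed defector faces mutual defection rather than a free win.

```python
# Stand-in for the conditional cooperator's own source text.
CC_SRC = "cooperate iff opponent_source == CC_SRC"

def conditional_cooperator(opponent_source):
    # Crude similarity test: exact source match stands in for "similar
    # enough that we decide the same thing for the same reasons".
    return "C" if opponent_source == CC_SRC else "D"

def defect_bot(opponent_source):
    # "I'm going to defect. And win."
    return "D"
```

Against its twin the conditional cooperator gets (C, C); against the defector both sides play D, so the defector never exploits anyone here.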