Comment author: Qiaochu_Yuan 10 July 2013 11:37:51PM *  3 points [-]

I think the idea is that someone submitting CooperateBot is not trying to win. But I find this a poor excuse. (It's why I always give the maximum possible number whenever I play "guess 2/3rds of the average.") I agree that the competition should have been seeded with some reasonable default bots.
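The dynamics of "guess 2/3 of the average" make the point concrete: under iterated elimination of dominated strategies the target shrinks toward zero, so submitting the maximum is a deliberately non-winning play. A minimal sketch (assuming the standard 0-100 formulation; not any particular contest's rules):

```python
# "Guess 2/3 of the average": whoever is closest to 2/3 of the mean wins.

def winner(guesses):
    target = (2 / 3) * sum(guesses) / len(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

# Iterated reasoning: if no one guesses above m, the target is at most 2m/3,
# so rational play spirals toward 0, the unique Nash equilibrium.
bound = 100.0
for _ in range(50):
    bound *= 2 / 3
print(bound)  # effectively zero after 50 rounds of elimination
```

Against a field of such reasoners, the max-submitter simply drags the average up without ever being closest to the target.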

Submitting DefectBot makes you a CDT agent, not a troll.

Comment author: defectbot 10 July 2013 11:40:59PM 27 points [-]

As one of the players who submitted a cooperatebot (yes, I see the irony), allow me to explain my reasoning for doing so. I scanned the comments to see what bots were being suggested (mimicbots, prudentbots, etc.) and I saw much more focus on trying to enforce mutual cooperation than on trying to exploit bots that can be exploited. I metagamed accordingly, hypothesizing that the other bots would cooperate with a cooperatebot but possibly fail to cooperate with each other. My hypothesis was incorrect, but worth testing IMO.
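The bet being described can be checked with a toy payoff calculation (hypothetical simplified bots, not the contest's actual source-inspecting submissions). If the field does exploit exploitable bots, CooperateBot takes the sucker's payoff:

```python
# Standard one-shot PD payoffs for the row player:
# (C,C)=3, (C,D)=0, (D,C)=5, (D,D)=1.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cooperate_bot(opponent):
    return "C"  # unconditional cooperation

def exploiter_bot(opponent):
    # Crude stand-in for source-code inspection: defect against
    # anything provably exploitable, cooperate otherwise.
    return "D" if opponent is cooperate_bot else "C"

def play(a, b):
    move_a, move_b = a(b), b(a)
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

print(play(cooperate_bot, exploiter_bot))  # CooperateBot gets exploited
print(play(exploiter_bot, exploiter_bot))  # exploiters cooperate with each other
```

The hypothesis above was that the field would look like the first line against CooperateBot but fail to reach the second line against each other; the actual results showed the opposite.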

Comment author: Eliezer_Yudkowsky 20 June 2013 01:23:29AM 7 points [-]

UDT doesn't handle non-base-level maximization vantage points (previously "epistemic vantage points") for blackmail - you can blackmail a UDT agent because it assumes your strategy is fixed, and doesn't realize you're only blackmailing it because you're simulating it being blackmailable. As currently formulated UDT is also non-naturalistic and assumes the universe is divided into a not-you environment and a UDT algorithm in a Cartesian bubble, which is something TDT is supposed to be better at (though we don't actually have good fill-in for the general-logical-consequence algorithm TDT is supposed to call).

I expect the ultimate theory to look more like "TDT modded to handle UDT's class of problems and blackmail and anything else we end up throwing at it" than "UDT modded to be naturalistic and etc", but I could be wrong - others have different intuitions about this.

Comment author: defectbot 26 June 2013 12:11:28AM 2 points [-]

UDT doesn't handle non-base-level maximization vantage points (previously "epistemic vantage points") for blackmail - you can blackmail a UDT agent because it assumes your strategy is fixed, and doesn't realize you're only blackmailing it because you're simulating it being blackmailable.

I'm not so sure about this one... It seems that UDT would be deciding "If blackmailed, pay or don't pay" without knowing whether it actually will be blackmailed yet. Assuming it knows the payoffs the other agent receives, it would reason "If I pay when blackmailed... I get blackmailed, whereas if I don't pay when blackmailed... I don't get blackmailed. I therefore should never pay if blackmailed", unless there's something I'm missing.

Comment author: defectbot 21 June 2013 01:14:29AM 3 points [-]

Perhaps we should also ask "Why do we feel we have free will?". The simplest answer, of course, is that we actually do. That said, it wouldn't be beyond the scope of human biases to believe that we do when we don't. Ultimately, if we were certain that nothing could feel more like having free will than what we already feel, then we could reduce the question "Do we have free will?" to "Would someone without free will feel any differently than we do?".

Comment author: shminux 04 June 2013 10:07:14PM 1 point [-]

Not sure how this is relevant to the OP, but clearly Omega would defect while making you cooperate, e.g. by convincing you that he is your PD-clone.

Comment author: defectbot 13 June 2013 07:58:02AM 2 points [-]

Superior (or even infinite) computing power does not imply he can persuade you to cooperate, only that he knows whether you will. If there exist any words he could say to convince you to cooperate, he will say them and defect. However, if you cooperate only upon seeing Omega cooperate (or prove that he will), he will cooperate.
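The conditional logic above can be sketched as a toy model (a hypothetical perfect-predictor Omega, not any formal decision theory): Omega checks whether any speech makes you cooperate unconditionally, and only falls back to mutual cooperation if you demand proof.

```python
def omega_move(your_policy, messages):
    # If some speech convinces you to cooperate even when Omega
    # defects, Omega says it and defects.
    for m in messages:
        if your_policy(m, omega_cooperates=False) == "C":
            return "D"
    # Otherwise, if you cooperate upon proof of his cooperation,
    # mutual cooperation is the best Omega can get.
    if your_policy("proof", omega_cooperates=True) == "C":
        return "C"
    return "D"

def cautious(message, omega_cooperates):
    # Cooperate only upon seeing (a proof of) Omega's cooperation.
    return "C" if omega_cooperates else "D"

def gullible(message, omega_cooperates):
    return "C" if message == "persuasive speech" else "D"

print(omega_move(cautious, ["persuasive speech"]))   # Omega cooperates
print(omega_move(gullible, ["persuasive speech"]))   # Omega talks you into it, then defects
```

The cautious policy forecloses the exploit, so Omega's only remaining path to a good payoff is to actually cooperate.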