Comment author: Qiaochu_Yuan 11 July 2013 06:50:05PM *  5 points

Yes, that game. My point is that complaining "that's not fair, X player wasn't playing to win" is a failure to think like reality. You know that you're playing against humans, and that humans do lots of things, including playing games in a way that isn't playing to win. You should be taking that into account when modeling the likely distribution of opponents you're going to face. This is especially true if there isn't a strong incentive to play to win.

Comment author: Quinn 11 July 2013 07:25:00PM 1 point

Yes, and I agree with this. I'm familiar with Straw Vulcanism and accept that guessing incorrectly is my mistake, not others'.

It seems anger and frustration were read into my comment, when in fact I was merely surprised, so I've edited out the offending t-word.

Comment author: Quinn 11 July 2013 08:17:34AM *  0 points

Ken Binmore & Hyun Song Shin. Algorithmic knowledge and game theory. (Chapter 9 of Knowledge, Belief, and Strategic Interaction by Cristina Bicchieri.)

EDIT: Actually, I'd be pretty happy to see any paper containing both the phrases "common knowledge" and "Löb's theorem". This particular paper is probably not the only one.

Comment author: defectbot 10 July 2013 11:40:59PM 27 points

As one of the players who submitted a cooperatebot (yes, I see the irony), allow me to explain my reasoning for doing so. I scoped the comments to see what bots were being suggested (mimicbots, prudentbots, etc) and I saw much more focus on trying to enforce mutual cooperation than trying to exploit bots that can be exploited. I metagamed accordingly, hypothesizing that the other bots would cooperate with a cooperatebot but possibly fail to cooperate with each other. My hypothesis was incorrect, but worth testing IMO.
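In expected-score terms, the hypothesis looks like this (a sketch only: the payoff matrix and the hypothesized field below are my own assumptions, not the tournament's actual parameters, and real entries received opponent source code rather than a function handle):

```python
# Hypothetical round-robin scoring against a guessed population.
# Assumed payoffs: CC -> 3/3, CD -> 0/5, DC -> 5/0, DD -> 1/1.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def cooperate_bot(opponent):
    return 'C'  # cooperates unconditionally

def defect_bot(opponent):
    return 'D'  # defects unconditionally

def score(bot, population):
    """Total payoff for `bot` playing one round against each population member."""
    return sum(PAYOFF[(bot(other), other(bot))] for other in population)

# Hypothesized field: mostly bots that cooperate with a CooperateBot.
population = [cooperate_bot, cooperate_bot, defect_bot]
```

Under this guess, a cooperatebot scores 6 against that field while a defectbot scores 11; the bet was that the real field would be even kinder to unconditional cooperation than this.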

Comment author: Quinn 11 July 2013 05:41:17AM 3 points

Thank you.

Comment author: Qiaochu_Yuan 10 July 2013 11:37:51PM *  3 points

I think the idea is that someone submitting CooperateBot is not trying to win. But I find this a poor excuse. (It's why I always give the maximum possible number whenever I play "guess 2/3rds of the average.") I agree that the competition should have been seeded with some reasonable default bots.

Submitting DefectBot makes you a CDT agent, not a troll.

Comment author: Quinn 11 July 2013 05:39:59AM -1 points

Can you elaborate on this?

You are right that I used the inflammatory t-word because CooperateBot submitters are probably not trying to win. I certainly expected to see DefectBots (or CliqueBots from optimists), and agree that the competition should have been seeded with both CooperateBots and DefectBots.

But I don't understand this at all:

But I find this a poor excuse. (It's why I always give the maximum possible number whenever I play "guess 2/3rds of the average.")

To me, this looks like t-wording the people who play 0.

Are we thinking of the same game, where the payout is constant?
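(For context: in the usual fixed-prize version, 0 is the unique Nash equilibrium, because best-responding to any common guess shrinks it by a factor of 2/3. A quick sketch, assuming identical naive players who each re-guess 2/3 of the previous round's average:)

```python
def iterate_guesses(initial_guess=100.0, rounds=30):
    """With identical players, each round's average equals the common guess,
    so naive best response multiplies the guess by 2/3 each round."""
    history = [initial_guess]
    for _ in range(rounds):
        history.append(history[-1] * 2 / 3)
    return history

history = iterate_guesses()  # guesses contract geometrically toward 0
```

(This ignores the finite-player effect of one's own guess being included in the average, which only strengthens the pull toward 0.)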

Comment author: Sniffnoy 10 July 2013 10:16:18PM 6 points

Wait, someone actually submitted CooperateBot? That wasn't just a default entry?

Thoughts on a possible second tournament:

  • I like Luke Somers's suggestion that each matchup should be run several times to smooth out randomness (obviously, bots wouldn't get access to earlier runs; this isn't an iterated prisoner's dilemma)
  • Should it include default entries? Perhaps a CooperateBot, a DefectBot, and a RandomBot? Or perhaps just a RandomBot?
  • Should we maybe restrict to a subset of Scheme to make things easier for those of us who want to do static analysis? :) (Yeah, yeah, I never actually submitted anything...) Or should we just rely on pre-announcements of "If you do this, I'm defecting"?
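For what it's worth, the rerun-and-average harness is only a few lines (the payoff values and the bot interface here are placeholders, not a spec for the real tournament):

```python
import random

# Assumed payoffs: CC -> 3/3, CD -> 0/5, DC -> 5/0, DD -> 1/1.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def run_matchup(bot_a, bot_b, runs=1000, seed=0):
    """Average each bot's payoff over many independent runs. No state carries
    between runs, so this smooths randomness without becoming an iterated PD."""
    rng = random.Random(seed)
    total_a = total_b = 0
    for _ in range(runs):
        pa, pb = PAYOFFS[(bot_a(rng), bot_b(rng))]
        total_a += pa
        total_b += pb
    return total_a / runs, total_b / runs

random_bot = lambda rng: rng.choice(['C', 'D'])
defect_bot = lambda rng: 'D'
```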
Comment author: Quinn 10 July 2013 10:48:58PM *  0 points

I was a bit irritated that no fewer than three players submitted CooperateBot, as I cooperated with it in hopes of tricking bots like submission B.

EDIT: More offense was taken at my use of the word "trolls" than was intended.

Comment author: Quinn 30 June 2013 11:05:41PM 2 points

Of course, when you ignore the payoff matrix, the argument is no longer really about the Prisoner's Dilemma (and so I don't know that it's fair to accuse the barber paradox of having a lot of extra cruft not present in the PD formulation).

But I agree that this is a cute presentation of diagonal arguments in culturally relevant terms. A fun read.

Comment author: Qiaochu_Yuan 19 June 2013 12:20:04AM *  2 points

Note that there's a second sense in which the argument is nonconstructive: Kakutani requires as a hypothesis that the space be compact. In your post you say [0, 1]^{\omega} is compact by Tychonoff's theorem, but using Tychonoff's theorem in its general form here is a use of choice. For compact Hausdorff spaces it's enough to use the ultrafilter lemma, but I believe you can just prove by hand that [0, 1]^{\omega} is compact in ZF.

Comment author: Quinn 19 June 2013 02:18:49AM 2 points

I'm glad you bring this up. I have two comments:

(1) The absoluteness argument shows that intermediate uses of Choice don't matter. They happen (definably) in L, and the conclusion reflects back up to V.

(2) As you suspect, you don't really need Choice for compactness of the Hilbert cube. Given a sequence in it, I can define a convergent subsequence by repeatedly pigeonholing infinitely many points into finitely many holes. I believe sequential compactness is what is needed for the Kakutani theorem.

Variants of open cover compactness are derivable from sequential compactness without Choice in Polish spaces, in case something like that is needed. Or if you somehow really need full open cover compactness... well, return to point (1), I guess.
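Spelled out, the pigeonholing in point (2) is the standard diagonal argument, and every selection step is definable, so no Choice is used:

```latex
\textbf{Claim.} $[0,1]^\omega$ is sequentially compact in ZF.

\textbf{Sketch.} Let $(x^{(n)})$ be a sequence in $[0,1]^\omega$. At stage
$k$, refine the current subsequence so that the $k$-th coordinates converge:
by pigeonhole, infinitely many terms have $k$-th coordinate in $[0,\tfrac12]$
or in $[\tfrac12,1]$; always keep the \emph{leftmost} half containing
infinitely many terms, and repeat, halving the interval each time. The
halving rule is explicit, so no appeal to Choice is made. Writing
$(x^{(n_{k,j})})_j$ for the stage-$k$ subsequence, the diagonal sequence
$y_j = x^{(n_{j,j})}$ is, from index $k$ on, a subsequence of the stage-$k$
sequence; hence each coordinate of $(y_j)$ converges, i.e.\ $(y_j)$
converges in the product topology.
```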

Comment author: Quinn 18 June 2013 11:12:14PM 2 points

I gave this paper a pretty thorough read-through, and decided that the best way to really see what was going on would be to write it all down, expanding the most interesting arguments and collapsing others. This has been my main strategy for learning math for a while now, but I just decided recently that I ought to start a blog for this kind of thing. Although I wrote this post mainly for myself, it might be helpful for anyone else who's interested in reading the paper.

Towards the end, I note that the result can be made "constructive", in the sense of avoiding the Axiom of Choice by absoluteness arguments. Basically by Shoenfield's absoluteness theorem, but I went through the argument by hand because I was not aware of that theorem until I went to Wikipedia to find something to link for absoluteness.

Comment author: shminux 27 March 2013 08:09:58PM 3 points

Another stupid question: would relaxing schema 3 to incomplete certainty allow the distribution P to be computable? If so, can you get arbitrarily close to certainty and still keep P nice enough to be useful? Does the question even make sense?

Comment author: Quinn 11 June 2013 08:24:48PM 3 points

The computability problem is due to the unbounded ability of P to reason about theorems in the base theory (so that would be a necessary point of relaxation). A computable total function can't even assign values > 1/2 to all the theorems of PA and values < 1/2 to all the sentences refuted by PA: the provable and the refutable sentences of PA are recursively inseparable, and such a function would recursively separate them.
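To see what such a threshold would buy you, here is a toy version in propositional logic, where a computable P with exactly this property does exist, precisely because the logic is decidable; for PA, recursive inseparability rules out any computable analogue of the classify function below:

```python
from itertools import product

# Formulas are nested tuples:
# ('var', name), ('not', f), ('and', f, g), ('or', f, g).

def evaluate(f, assignment):
    op = f[0]
    if op == 'var':
        return assignment[f[1]]
    if op == 'not':
        return not evaluate(f[1], assignment)
    if op == 'and':
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    return evaluate(f[1], assignment) or evaluate(f[2], assignment)

def variables(f):
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def P(f):
    """Fraction of truth assignments satisfying f: tautologies get 1 (> 1/2),
    contradictions get 0 (< 1/2), and P is plainly computable."""
    vs = sorted(variables(f))
    sat = sum(evaluate(f, dict(zip(vs, bits)))
              for bits in product([False, True], repeat=len(vs)))
    return sat / 2 ** len(vs)

def classify(f):
    # For PA, no computable function can put all theorems on one side of
    # the threshold and all refutable sentences on the other.
    return 'theorem' if P(f) > 0.5 else 'refutable-or-other'

p = ('var', 'p')
excluded_middle = ('or', p, ('not', p))   # a tautology
contradiction = ('and', p, ('not', p))    # refutable
```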

Comment author: DavidS 06 May 2013 02:58:26PM *  1 point

In the proof of Theorem 2, you write "Clearly $A$ is convex." This isn't clear to me; could you explain what I am missing?

More specifically, let $X(\phi,a,b)$ be the subset of $[0,1]^{L'}$ obeying the instance of the reflection schema for $(\phi,a,b)$. So $A$ is the intersection of the $X(\phi,a,b)$. If each $X(\phi,a,b)$ were convex, then $A$ would be as well.

But $X(\phi,a,b)$ is not convex. Project onto $[0,1]^2$ in the coordinates corresponding to the sentence $\phi$ and its reflection sentence. The image is not convex.

Of course, the fact that the $X(\phi,a,b)$ are not convex doesn't mean that their intersection isn't, but it makes it non-obvious to me why the intersection should be convex. Thanks!

Comment author: Quinn 11 June 2013 07:22:06PM 1 point

The set A is convex because the convex combination (t times one plus (1-t) times the other) of two coherent probability distributions remains a coherent probability distribution. This in turn is because the convex combination of two probability measures over a space of models (cf. definition 1) remains a probability measure over the space of models.

I think, but am not sure, that your issue is looking at arbitrary points of [0,1]^{L'}, rather than the ones which correspond to probability measures.
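A finite toy version makes this checkable by hand: take the models to be truth assignments to two atoms, so a coherent P is the sentence-probability vector induced by a measure on four models, and mixing two measures mixes every sentence's probability linearly. (The encoding below is my own simplification of definition 1, not the paper's setup.)

```python
from itertools import product

# Models: truth assignments (p, q) to two atoms.
MODELS = list(product([False, True], repeat=2))

# A sentence is a predicate on models; a coherent P assigns each sentence
# the total weight of the models satisfying it under some distribution mu.
sentences = {
    'p': lambda m: m[0],
    'q': lambda m: m[1],
    'p or q': lambda m: m[0] or m[1],
    'p and q': lambda m: m[0] and m[1],
}

def coherent_P(mu):
    return {name: sum(w for m, w in zip(MODELS, mu) if s(m))
            for name, s in sentences.items()}

mu1 = [0.1, 0.2, 0.3, 0.4]
mu2 = [0.4, 0.3, 0.2, 0.1]
t = 0.25
mix_mu = [t * a + (1 - t) * b for a, b in zip(mu1, mu2)]  # still a distribution

P1, P2 = coherent_P(mu1), coherent_P(mu2)
P_mix = coherent_P(mix_mu)
# Every sentence's probability mixes linearly, so the coherent P's form a
# convex set -- points of [0,1]^{L'} not induced by any mu never enter.
```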
