That's some excellent steelmanning. I would also add that creating animals for food with lives barely worth living is better than not creating them at all, from a utilitarian (if repugnant) point of view. And it's not clear whether a farm chicken's life is below that threshold.
Robin Hanson has advocated this point of view.
I find the argument quite unconvincing; Hanson seems to be making the mistake of conflating "life worth living" with "not committing suicide" that is well addressed in MTGandP's reply (and grandchildren).
The 52-karma top comment of the Virtual Employment thread has been deleted. I gather that it said something about copywriting, with online skill tests for prospective applicants.
Can anyone provide a bit more information about this apparently quite valuable comment?
Awesome, thanks!
Quinn, thank you for doing this! I just looked through the first post and it's very nice and clear. Maybe Patrick and Paul can comment on the other two.
Thanks for the nice comment. I listed the PD post first, as it is probably the most readable of the three, written more like an article than like notes.
Sorry, can't get it. There's a Google Books version you might be able to use, but the UWash access is only to a physical copy.
As for your edit, well,
- http://www.google.com/search?q=%22common+knowledge%22+AND+%28%22L%C3%B6b%27s+theorem%22+OR+%22Loeb%27s+theorem%22+OR+%22Lob%27s+theorem%22%29
- http://scholar.google.com/scholar?q=%22common+knowledge%22+AND+%28%22L%C3%B6b%27s+theorem%22+OR+%22Loeb%27s+theorem%22+OR+%22Lob%27s+theorem%22%29
turn up some things that might be useful.
Thanks for looking! I'll try to get my hands on a physical copy, as the Google Books version has highly distracting page omissions.
Quinn appears to have submitted something similar. As far as I can tell, against CooperateBot it cooperates; otherwise, it waits 9 seconds before defecting. (The weird timing conditional in there should never return true if things are working properly, and, checking the results, it did indeed defect against everything but the three CooperateBots.)
Unless I'm going insane, LukeASomers' comment below is incorrect. If you defect and your opponent times out, you still get 1 point and they get 0, which is marginally better than both getting 1 point in the case of mutual defection.
That was the purpose of my (sleep 9). I figured anyone who was going to eval me twice against anything other than CooperateBot was going to figure out that I'm a jerk and defect against me, so I'd only be getting one point from them anyway. Might as well lower their score by 1.
My assumption might not have been correct, though. In the original scoreboard, T (who times out against me) actually does cooperate with K, who defected against everybody!
The bizarre-looking always-false conditional was a long shot at detecting simulation. I heard an idea at a LW meetup (maybe from So8res?) that players might remove sleeps from each other's code before calling eval. I figured I might as well fake such players out if there were any (there were not).
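The strategy described in this thread can be sketched roughly as follows. This is a hypothetical Python illustration, not the actual submission (which was Scheme source that opponents could eval); the helper `looks_like_cooperate_bot` and the exact CooperateBot source string are assumptions for the sake of the example:

```python
import time

COOPERATE = "C"
DEFECT = "D"

def looks_like_cooperate_bot(source):
    # Hypothetical detector: CooperateBot is a constant-cooperate
    # program, so a trivial textual check suffices in this sketch.
    return source.strip() == "(lambda (opponent) 'C)"

def my_bot(opponent_source):
    """Sketch of the strategy described above:

    - Cooperate with CooperateBot.
    - Otherwise, stall 9 seconds (so simulators time out), then defect.
    - The normally-false timing conditional is the long-shot check for
      players who strip sleep calls before simulating us.
    """
    if looks_like_cooperate_bot(opponent_source):
        return COOPERATE
    start = time.monotonic()
    time.sleep(9)
    # In a normal run, ~9 seconds have elapsed here and this branch is
    # never taken. If a simulator removed the sleep above, almost no
    # time has elapsed, so we "cooperate" inside their simulation...
    if time.monotonic() - start < 8:
        return COOPERATE  # fake out sleep-stripping simulators
    # ...while defecting in actual play.
    return DEFECT
```

The payoff to the fake-out is that a sleep-stripping simulator would predict cooperation and respond accordingly, while the real game sees a defection.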
EDIT: More offense was taken at my use of the word "trolls" than was intended.
Be sure to distinguish the "think you're wrong" and the "find it offensive" components of the responses.
Yes, the "here's why Quinn is wrong about CooperateBot being a troll submission" comments were valuable, so I don't regret provoking them. Presumably if my comment had said "players" instead of "trolls" from the outset, it would have been ignored for being inoffensive and content-free.
But provoking a few good comments was a happy accident from my perspective. I will avoid casual use of the word "troll" on this site, unless I have a specific purpose in mind.
Yes, that game. My point is that complaining "that's not fair, X player wasn't playing to win" is a failure to think like reality. You know that you're playing against humans, and that humans do lots of things, including playing games in a way that isn't playing to win. You should be taking that into account when modeling the likely distribution of opponents you're going to face. This is especially true if there isn't a strong incentive to play to win.
Ken Binmore & Hyun Song Shin. Algorithmic knowledge and game theory. (Chapter 9 of Knowledge, Belief, and Strategic Interaction by Cristina Bicchieri.)
EDIT: Actually, I'd be pretty happy to see any paper containing both the phrases "common knowledge" and "Löb's theorem". This particular paper is probably not the only one.
Martin Gardner's Mathematical Games column from Scientific American Volume 242, Number 6, June, 1980. Paywalled here.
EDIT: escape characters