Giles comments on 2012 Less Wrong Census/Survey - Less Wrong

65 Post author: Yvain 03 November 2012 11:00PM


Comment author: Giles 04 November 2012 05:05:50AM 25 points

Minor points on survey phrasing...

P(Global catastrophic risk) should be P(Not Global catastrophic risk)

You say in Part 7 that research is allowed, but Part 8 (calibration year) doesn't say that research is disallowed.

The true prisoner's dilemma article doesn't appear to give any information about the cognitive algorithms the opponent is running. For that reason I answered noncommittally, and I'm not sure how useful the question is for distinguishing people with CDT-ish versus TDT-ish intuitions.

Similarly in torture versus dust specks I answered not sure, not so much due to moral uncertainty but because the problem is underspecified. What's the baseline? Is everybody's life perfect except for the torture or dust specks specified, or is the distribution more like today's world with a broad range of experiences ranging from basically OK to torture?

I might have given an inflated answer for "Hours on the Internet": I'm on the computer and the computer is connected to the Internet, but that doesn't mean I'm actively using the Internet the whole time.

Comment author: [deleted] 04 November 2012 03:15:00PM 2 points

The true prisoner's dilemma article doesn't appear to give any information about the cognitive algorithms the opponent is running. For that reason I answered noncommittally, and I'm not sure how useful the question is for distinguishing people with CDT-ish versus TDT-ish intuitions.

So did I. Also, in that particular scenario, I'd rather call for a referendum than decide for humanity by myself. I've thought about replacing saving 0/1/2/3 billion lives with receiving 3/2/1/0 kicks in the groin, but that would trigger near-mode thinking in me. Being given 0/500/1000/1500 dollars? Then I would definitely cooperate if I were convinced my opponent's cognitive algorithms aren't too different from mine.

Comment author: Larks 04 November 2012 01:57:03PM 1 point

I assumed it wouldn't be a true prisoner's dilemma if the only reachable outcomes in the payoff matrix are (C,C) and (D,D), and therefore that my opponent was running some arbitrary non-UDT decision theory.
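The point above can be sketched numerically. The payoff values below are hypothetical, chosen only to satisfy the standard prisoner's dilemma ordering (the survey's scenario used lives saved, not these numbers): if the players' moves are independent, a causal best response defects regardless; but if only the correlated outcomes (C,C) and (D,D) are reachable, the dilemma disappears and cooperation is trivially best.

```python
# Hypothetical payoff table for illustration; not the survey's actual stakes.
# (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """Causal best response: pick my move while holding theirs fixed."""
    return max("CD", key=lambda me: PAYOFFS[(me, their_move)][0])

# With independent moves, defection dominates: D beats C against either move.
assert best_response("C") == "D"
assert best_response("D") == "D"

# If the opponent provably mirrors my move, only (C,C) and (D,D) are
# reachable, and cooperating is simply the higher-payoff choice.
best_mirrored = max("CD", key=lambda me: PAYOFFS[(me, me)][0])
assert best_mirrored == "C"
```

This is why a guaranteed (C,C)/(D,D) structure no longer distinguishes decision theories: every theory picks C there, whereas the full four-outcome matrix is what separates CDT-style from TDT/UDT-style reasoning.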

Comment author: Giles 04 November 2012 04:21:14PM 2 points

I interpreted the word "True" simply to mean that the utility payoffs in the table are correct, and that the scenario is presented in such a way as to prevent people's empathy instinct from causing one player's utility to leak across to the other player.