Comment author: Eliezer_Yudkowsky 05 September 2008 12:23:51AM 6 points [-]

Eliezer: the rationality of defection in these finitely repeated games has come under some fire, and there's a HUGE literature on it. Reading some of the more prominent examples may help you sort out your position on it.

My position is already sorted, I assure you. I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.

As Paul says, this is very well trodden ground. Since it hasn't been assumed that we are sure we know how the other party reasons, we might want to invest some early rounds in probing to see how the party thinks.

As someone who rejects defection as the inevitable rational solution to both the one-shot PD and the iterated PD, I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.

True, the iteration does present the possibility of "exploiting" an "irrational" opponent whose "irrationality" you can probe and detect, if there's any doubt about it in your mind. But that doesn't resolve the fundamental issue of rationality; it's like saying that you'll one-box on Newcomb's Problem if you think there's even a slight chance that Omega is hanging around and will secretly manipulate box B after you make your choice. What if neither party to the IPD thinks there's a realistic chance that the other party is stupid - if they're both superintelligences, say? Do they automatically defect against each other for 100 rounds?

And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"...

Comment author: MathieuRoy 05 February 2014 12:07:27PM *  1 point [-]

Do you mean "I cooperate with the Paperclipper if AND ONLY IF I think it will one-box on Newcomb's Problem with myself as Omega AND I think it thinks I'm Omega AND I think it thinks I think it thinks I'm Omega, etc." ? This seems to require an infinite amount of knowledge, no?

Edit: and you said "We have never interacted with the paperclip maximizer before", so do you think it would one-box?

Comment author: MathieuRoy 04 February 2014 05:20:16AM 1 point [-]

Would a (hypothetically) pure altruist have children (in our current situation)?

In response to comment by dclayh on Closet survey #1
Comment author: woodside 13 January 2013 05:35:15PM *  8 points [-]

I'm curious about your personal experiences with physical pain. What is the most painful thing you've experienced and what was the duration?

I'm sympathetic to your preference in the abstract, I just think you might be surprised at how little pain you're actually willing to endure once it's happening (not a slight against you, I think people in general overestimate what degree of physical pain they can handle as a function of the stakes involved, based largely on anecdotal and second-hand experience from my time in the military).

At the risk of being overly morbid, I have high confidence (>95%) that I could have you begging for death inside of an hour if that were my goal (don't worry, it's certainly not). An unfriendly AI capable of keeping you alive for eternity just to torture you would be capable of making you experience worse pain than anyone ever has in the history of our species so far. I believe you that you might sign a piece of paper to pre-commit to an eternity of torture rather than simple death. I just think you'd be very very upset about that decision. Probably less than 5 minutes into it.

In response to comment by woodside on Closet survey #1
Comment author: MathieuRoy 03 February 2014 12:34:19PM 0 points [-]

I would definitely pre-commit to immortality.

Comment author: MathieuRoy 03 February 2014 11:23:49AM 1 point [-]

So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.

Shouldn't it be more like $15 + ($100 − $15)/2 = $57.50, so that both gain (about) the same amount of utility? Otherwise, the one who was ready to pay $100 saved ("won") $85, and the other won nothing (s/he was indifferent between paying $15 and doing it him/herself).
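The proposed split can be sketched as follows (the function name and dollar amounts are illustrative, taken from the example above):

```python
def equal_surplus_price(low_bid, high_bid):
    """Price that splits the surplus equally between the two parties.

    low_bid:  amount at which one party is indifferent between
              paying and doing the task themselves ($15 here).
    high_bid: the most the other party would pay ($100 here).
    """
    return low_bid + (high_bid - low_bid) / 2

price = equal_surplus_price(15, 100)
# The payer gains high_bid - price; the doer gains price - low_bid.
assert 100 - price == price - 15  # both gain $42.50
```

With this price, neither side captures the whole $85 surplus; it is divided evenly.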

Nice post by the way. Such techniques seem useful if you trust that the other person's bid really represents the amount s/he's ready to pay.

Comment author: MathieuRoy 02 February 2014 07:20:31AM *  0 points [-]

If doubting is accepted in our current society, and we wanted Archimedes to doubt his own beliefs, would we have to doubt the value of doubting, or be certain of the value of doubting?

It's a joke. As Eliezer said "to get nonobvious output, you need nonobvious input", so obviously, we'd just have to find something nonobvious. :-)

I wonder if we will ever come up with something that is as nonobvious to us right now as Bayesian thinking was to Archimedes.

Comment author: Peter_de_Blanc 24 March 2007 01:21:27AM 20 points [-]

You should ask the greatest mathematician of the ancient world to work on FAI theory. If he solves the analogous problem, then when he explains his solution to you over the Chronophone, it'll come out on your end as a design for an AI.

Comment author: MathieuRoy 02 February 2014 06:40:00AM 4 points [-]

Nice :-) but in fact, the chronophone will transmit a problem just as hard for Archimedes as FAI is for us. So he'll probably solve it in about the same amount of time as us (so it won't help us). I wonder what that problem would be. Is going to the Moon as hard for Archimedes as building an FAI is for us?

In response to Beautiful Math
Comment author: MathieuRoy 28 January 2014 05:19:47AM 1 point [-]

Eliezer sometimes asks something that I would now like to ask him: what would the world look like if mathematics didn't precede mathematicians? And what if it did?

Comment author: Eugine_Nier 26 January 2014 09:08:02PM 0 points [-]

But why would your distribution be two deltas at 10^-4 and 10^-16 and not more continuous?

Because it's a toy example and it's easier to work out the math this way. You can get similar results with more continuous distributions, the math is simply more complicated.
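The toy example can be worked through numerically. This is a minimal sketch, assuming the two-delta prior is over the per-galaxy probability of life and that the observable universe holds roughly 10^11 galaxies (the galaxy count and the 50/50 mixture weights are assumptions, not from the original comment):

```python
import math

N_GALAXIES = 1e11  # rough count of galaxies (assumption)

def p_any_life(p_per_galaxy, n=N_GALAXIES):
    # P(life arises in at least one galaxy) ~ 1 - exp(-p * n)
    return 1 - math.exp(-p_per_galaxy * n)

# Two-delta prior over the per-galaxy probability, weighted 50/50.
posterior = 0.5 * p_any_life(1e-4) + 0.5 * p_any_life(1e-16)
# The 1e-4 branch contributes essentially all the mass (~0.5);
# the 1e-16 branch contributes almost nothing (~5e-6).
```

The point survives the toy setup: the posterior on "aliens exist somewhere" lands near an extreme mixture of near-certainty and near-impossibility rather than at intermediate values.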

Comment author: MathieuRoy 26 January 2014 09:13:25PM *  1 point [-]

Ok right. I agree.

Comment author: Wes_W 26 January 2014 05:11:28PM *  4 points [-]

I don't think I'm rationalizing an answer; I'm not even presenting an answer. I meant only to present a (very simplified) example of how such a conclusion might arise.

I'm totally willing to chalk the survey results up to scale insensitivity, but such results aren't necessarily nonsensical. It could just mean somebody started with "what credence do I assign that aliens exist and the Fermi Paradox is/isn't an illusion" and worked backwards from there, rather than pulling a number out of thin air for "chance of life developing in a single galaxy" and then exponentiating.

Since the latter method gives sharply differing results depending on whether you make up a probability a few orders of magnitude above or below 10^-11, I'm not sure working backwards is even a worse idea. At least working backwards won't give one 99.99999% credence in something merely because their brain is bad at intuitively telling apart 10^-8 and 10^-14.

Edit: I think some degree of dichotomy is plausible here. A lot of intermediate estimates are ruled out by us not seeing aliens everywhere.

In response to comment by Wes_W on 2013 Survey Results
Comment author: MathieuRoy 26 January 2014 09:09:23PM 2 points [-]

Sorry, I misunderstood (oops). I agree (see my edits in the previous comment). A justified dichotomy is more probable than I initially thought, and probably fewer people fell prey to scale insensitivity than I initially thought.

Comment author: Vaniver 22 January 2014 06:43:21PM 3 points [-]

Would some people be interested in answering 10 such questions and giving their confidence in their answers every month? That would provide better statistics and a way to see if we're improving.

There's both PredictionBook and the Good Judgment Project as venues for this sort of thing.

Comment author: MathieuRoy 26 January 2014 05:06:45PM *  2 points [-]

Thank you.

EDIT: I just made my first (meta)prediction, which is that I'm 50% sure that "I will make good predictions in 2014 (i.e., 40 to 60% of my predictions with a stated probability between 40 and 60% will come true)."
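Scoring that meta-prediction at year's end could look like the following sketch (the function name and the sample record are hypothetical):

```python
def calibration_in_band(predictions, lo=0.40, hi=0.60):
    """Fraction of predictions with stated probability in [lo, hi]
    that actually came true.

    predictions: list of (stated_probability, came_true) pairs.
    """
    band = [(p, t) for p, t in predictions if lo <= p <= hi]
    if not band:
        return None
    return sum(t for _, t in band) / len(band)

# Hypothetical year-end record of (stated probability, outcome).
record = [(0.50, True), (0.55, False), (0.45, True),
          (0.60, False), (0.50, True)]
rate = calibration_in_band(record)
# The meta-prediction succeeds if 0.40 <= rate <= 0.60.
```

Sites like PredictionBook automate exactly this kind of bucketed calibration tracking.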
