Rain comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 points | Post author: HoldenKarnofsky | 18 August 2011 11:34PM


Comment author: Rain 19 August 2011 12:25:37PM *  7 points

I think the creation of smarter-than-human intelligence has a (very) large probability of an (extremely) large impact, and that most of the probability mass there is concentrated into AI

That's the probability statement in his post. He didn't mention the probability of SIAI's success, and he hasn't done so when I've emailed him or asked in public forums, nor at any other point that I've heard. Shortly after I asked, he posted When (Not) To Use Probabilities.

Comment author: Wei_Dai 19 August 2011 04:37:19PM 7 points

Yes, I had read that, and perhaps even more apropos (from Shut up and do the impossible!):

You might even be justified in refusing to use probabilities at this point. In all honesty, I really don't know how to estimate the probability of solving an impossible problem that I have gone forth with intent to solve; in a case where I've previously solved some impossible problems, but the particular impossible problem is more difficult than anything I've yet solved, but I plan to work on it longer, etcetera.

People ask me how likely it is that humankind will survive, or how likely it is that anyone can build a Friendly AI, or how likely it is that I can build one. I really don't know how to answer. I'm not being evasive; I don't know how to put a probability estimate on my, or someone else, successfully shutting up and doing the impossible. Is it probability zero because it's impossible? Obviously not. But how likely is it that this problem, like previous ones, will give up its unyielding blankness when I understand it better? It's not truly impossible, I can see that much. But humanly impossible? Impossible to me in particular? I don't know how to guess. I can't even translate my intuitive feeling into a number, because the only intuitive feeling I have is that the "chance" depends heavily on my choices and unknown unknowns: a wildly unstable probability estimate.

But it's not clear whether Eliezer means that he can't even translate his intuitive feeling into a word like "small" or "medium". I thought the comment I was replying to was saying that SIAI had a "medium" chance of success, given:

If you can't argue for a medium probability of a large impact, you shouldn't bother.

and

I don't consider myself to be multiplying small probabilities by large utility intervals at any point in my strategy

But perhaps I misinterpreted? In any case, there's still the question of what is rational for those of us who do think SIAI's chance of success is "small".

Comment author: Rain 19 August 2011 06:02:00PM 2 points

I thought he was taking the "don't bother" approach by not giving a probability estimate or arguing about probabilities.

In any case, there's still the question of what is rational for those of us who do think SIAI's chance of success is "small".

I propose that the rational act is to investigate approaches to greater-than-human intelligence that would succeed.

Comment author: Jordan 21 August 2011 04:44:28AM 2 points

I propose that the rational act is to investigate approaches to greater-than-human intelligence that would succeed.

This. I'm flabbergasted this isn't pursued further.

Comment author: Will_Newsome 19 August 2011 06:21:50PM *  2 points

Sufficiently-Friendly AI can be hard for SIAI-now but easy or medium for non-SIAI-now (someone else now, someone else in the future, or SIAI in the future). I personally believe this, since SIAI-now is fucked up (and SIAI-future very well will be too). (I won't substantiate that claim here.) Eliezer didn't talk about SIAI specifically. (He probably thinks SIAI will be at least as likely to succeed as anyone else because he thinks he's super awesome, but I don't think it can be assumed he'd assert that with confidence.)

Comment author: Alicorn 19 August 2011 06:38:35PM 20 points

SIAI-now is fucked up (and SIAI-future very well will be too). (I won't substantiate that claim here.)

Will you substantiate it elsewhere?

Comment author: handoflixue 19 August 2011 10:25:06PM *  8 points

Second that interest in hearing it substantiated elsewhere.

Comment author: Louie 28 December 2011 08:35:59PM 4 points

Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.

Comment author: Will_Newsome 28 December 2011 08:38:28PM *  2 points

SingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August, everything was in a state of disarray, and it was unclear whether you'd manage to pull through.