Rain comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That's the probability statement in his post. He didn't mention the probability of SIAI's success, and he has never given one — not when I've emailed him, not when I've asked in public forums, not at any point that I've heard of. Shortly after I asked, he posted When (Not) To Use Probabilities.
Yes, I had read that, and perhaps even more apropos (from Shut up and do the impossible!):
But it's not clear whether Eliezer means that he can't even translate his intuitive feeling into a word like "small" or "medium". I thought the comment I was replying to was saying that SIAI had a "medium" chance of success, given:
and
But perhaps I misinterpreted? In any case, there's still the question of what is rational for those of us who do think SIAI's chance of success is "small".
Sufficiently-Friendly AI can be hard for SIAI-now but easy or medium for non-SIAI-now (someone else now, someone else in the future, or SIAI in the future). I personally believe this, since SIAI-now is fucked up (and SIAI-future may very well be too). (I won't substantiate that claim here.) Eliezer didn't talk about SIAI specifically. (He probably thinks SIAI is at least as likely to succeed as anyone else because he thinks he's super awesome, but I don't think it can be assumed he'd assert that with confidence.)
Will you substantiate it elsewhere?
Second that interest in hearing it substantiated elsewhere.
Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.
SingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August everything was in a state of disarray and it was unclear if you'd manage to pull through.
I thought he was taking the "don't bother" approach by not giving a probability estimate or arguing about probabilities.
I propose that the rational act is to investigate approaches to greater-than-human intelligence that would succeed.
This. I'm flabbergasted this isn't pursued further.