Will_Newsome comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM


Comment author: Will_Newsome 19 August 2011 06:21:50PM  2 points

Sufficiently-Friendly AI can be hard for SIAI-now but easy or medium for non-SIAI-now (someone else now, someone else future, SIAI future). I personally believe this, since SIAI-now is fucked up (and SIAI-future very well will be too). (I won't substantiate that claim here.) Eliezer didn't talk about SIAI specifically. (He probably thinks SIAI will be at least as likely to succeed as anyone else because he thinks he's super awesome, but it can't be assumed he'd assert that with confidence, I think.)

Comment author: Alicorn 19 August 2011 06:38:35PM 20 points

SIAI-now is fucked up (and SIAI-future very well will be too). (I won't substantiate that claim here.)

Will you substantiate it elsewhere?

Comment author: handoflixue 19 August 2011 10:25:06PM  8 points

Seconding that interest in hearing it substantiated elsewhere.

Comment author: Louie 28 December 2011 08:35:59PM 4 points

Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.

Comment author: Will_Newsome 28 December 2011 08:38:28PM  2 points

SingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August everything was in a state of disarray, and it was unclear whether you'd manage to pull through.