DanielLC comments on Steelmanning MIRI critics - Less Wrong

Post author: fowlertm 19 August 2014 03:14AM




Comment author: DanielLC 19 August 2014 11:06:13PM 1 point

> Typo?

Fixed.

> Again, I think "provably friendly thing" mischaracterizes what MIRI thinks will be possible.

From what I can gather, there's still supposed to be some kind of proof, even if it's only the mathematical kind, where you're not entirely certain because there might be an error in it. The intent is to have a program that maximizes a utility function U, and then to explicitly write that utility function as something along the lines of "do what I mean".
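The idea above separates into two pieces: a generic optimizer, and an indirectly specified utility function it maximizes. A minimal sketch of that separation, with all names and scores purely illustrative (this is not MIRI's design, just a toy):

```python
# Toy sketch: an optimizer that is agnostic about its utility function,
# paired with a placeholder utility standing in for an indirectly
# specified objective like "do what I mean".

def maximize(utility, options):
    """Return the option with the highest utility. The 'proof' part of
    the proposal would be showing that the agent really does optimize
    the utility function it was given."""
    return max(options, key=utility)

def do_what_i_mean_utility(action):
    """Placeholder: in the actual proposal this would point at human
    intentions rather than a hand-coded score table."""
    scores = {"ask_for_clarification": 2, "literal_interpretation": 1}
    return scores.get(action, 0)

best = maximize(do_what_i_mean_utility,
                ["literal_interpretation", "ask_for_clarification"])
print(best)  # -> ask_for_clarification
```

The point of the separation is that the hard part isn't the optimizer; it's writing down U so that maximizing it actually does what you mean.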

> Have you read the section on indirect normativity in Superintelligence? I'd start there.

I'm not sure what you're referring to. Can you give me a link?

Comment author: Adele_L 20 August 2014 01:45:24AM 4 points

Superintelligence is a recent book by Nick Bostrom.