lukeprog comments on Steelmanning MIRI critics - Less Wrong Discussion

Post author: fowlertm 19 August 2014 03:14AM 6 points

Comment author: lukeprog 19 August 2014 04:35:45PM 3 points

> You admit that friendliness is guaranteed.

Typo?

> In order to get that provably friendly thing to work

Again, I think "provably friendly thing" mischaracterizes what MIRI thinks will be possible.

I'm not sure exactly what you're saying in the rest of your comment. Have you read the section on indirect normativity in Superintelligence? I'd start there.

Comment author: shminux 19 August 2014 06:47:05PM 9 points

Given the apparent misconceptions about MIRI's work even among LWers, it seems like you need to write a Main post clarifying what MIRI does and does not claim, and does and does not work on.

Comment author: DanielLC 19 August 2014 11:06:13PM 1 point

> Typo?

Fixed.

> Again, I think "provably friendly thing" mischaracterizes what MIRI thinks will be possible.

From what I can gather, there's still supposed to be some kind of proof, even if it's just the mathematical kind where you're not really certain because there might be an error in it. The intent is to have some sort of program that maximizes a utility function U, and then to write that utility function explicitly as something along the lines of "do what I mean".
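
For concreteness, a minimal toy sketch of the shape being described might look like the following. All names here are hypothetical illustrations, not MIRI's actual design: the maximizing machinery is written out explicitly, while the utility function U is a slot meant to be filled by an indirectly specified objective.

```python
# Toy sketch only (hypothetical names, not MIRI's design): an agent that
# explicitly maximizes a utility function U, where U is meant to be
# supplied indirectly ("do what I mean") rather than hand-coded.

def choose_action(actions, predict_outcome, utility):
    """Return the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Stand-in world model and stand-in "do what I mean" utility, both toy:
predict = lambda action: action * 10       # toy outcome model
U = lambda outcome: -abs(outcome - 10)     # toy proxy for intended value

print(choose_action([0, 1, 2], predict, U))  # prints 1
```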

> Have you read the section on indirect normativity in Superintelligence? I'd start there.

I'm not sure what you're referring to. Can you give me a link?

Comment author: Adele_L 20 August 2014 01:45:24AM 4 points

Superintelligence is a recent book by Nick Bostrom.