shminux comments on FAI PR tracking well [link] - Less Wrong Discussion

7 points · Post author: Dr_Manhattan · 15 August 2014 09:23PM




Comment author: shminux 18 August 2014 06:36:56AM 3 points

It's much more likely that I misunderstand something basic about what MIRI does.

Comment author: lukeprog 18 August 2014 06:44:51AM 5 points

Okay, fair enough. To explain briefly:

I disagree with (3) because the Löbian obstacle is just an obstacle to a certain kind of stable self-modification in a particular toy model, and so can't say anything about what kinds of safety guarantees you can have for superintelligences in general.

I disagree with (4) because MIRI hasn't shown that there are ways to make a superintelligence 90% or more likely (in a subjective Bayesian sense) to be stably friendly, and I don't expect us to have shown that in another 20 years, and plausibly not ever.

Comment author: shminux 18 August 2014 07:01:50AM 2 points

Thanks! I guess I was unduly optimistic. That comes with being a hopeful but ultimately clueless bystander.