Stuart_Armstrong comments on An overall schema for the friendly AI problems: self-referential convergence criteria - Less Wrong

17 Post author: Stuart_Armstrong 13 July 2015 03:34PM


Comment author: Stuart_Armstrong 28 July 2015 10:23:55AM 1 point

Can we assume that, since I've been working all this time on AI safety, I'm not an idiot? When presenting a scenario ("assume the AI is contained and truthful"), I'm investigating whether we have safety within the terms of that scenario. Here we don't, so we can reject attempts aimed at that scenario without looking further. If and when we find a safe way to do it within the scenario, then we can investigate whether that scenario is achievable in the first place.

Comment author: [deleted] 30 July 2015 03:56:35PM 0 points

Ah. Then here's the difference in assumptions: I don't believe a contained, truthful UFAI is safe in the first place. I just have an incredibly low prior on that. So low, in fact, that I didn't think anyone would take it seriously enough to imagine scenarios which prove it's unsafe, because it's just so bloody obvious that you do not build UFAI for any reason, because it will go wrong in some way you didn't plan for.

Comment author: Stuart_Armstrong 31 July 2015 08:34:53AM 0 points

See the point on Paul Christiano's design. The problem I discussed applies not only to UFAIs but also to other designs that try to get around it yet rely on potentially unrestricted search.