
gwern comments on Computation Hazards - Less Wrong Discussion

Post author: Alex_Altair | 13 June 2012 09:49PM | 14 points




Comment author: gwern, 14 June 2012 08:50:47PM, 1 point

Simulated humans are not arbitrary Turing machines.

Arbitrary Turing machines include arbitrary simulated humans. If you want to cut the knot with a 'human' predicate, deciding that predicate is just as undecidable.
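The undecidability claim here is essentially Rice's theorem: no decider can classify programs by a nontrivial behavioral property such as "simulates a human". A minimal, loose sketch of the standard diagonal construction, in Python; all names (`is_human`-style deciders, the behavior strings) are illustrative assumptions, not any real API:

```python
def make_contrary(decider):
    """Given any claimed decider for a behavioral property, build a
    program (a zero-arg function) that does the opposite of whatever
    the decider predicts about it -- the diagonal construction."""
    def contrary():
        if decider(contrary):
            return "behaves unlike a human"
        return "behaves like a human"
    return contrary

def claims_nothing_is_human(prog):
    """One concrete candidate decider: it labels every program non-human."""
    return False

# The candidate decider is defeated: it says "not human", so the
# constructed program exhibits exactly the human-like behavior.
prog = make_contrary(claims_nothing_is_human)
assert claims_nothing_is_human(prog) is False
assert prog() == "behaves like a human"
```

The same construction defeats any concrete decider you substitute, which is why no total, correct 'human' predicate over arbitrary programs can exist.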

Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.

There we have more strategies available. For example, 'prevent any current human from suffering, or from creating another human which might then suffer'.

Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential "suffering" without feeling it.

Is there a way to do this perfectly without running into undecidability? Even if you had the method, how would you know when to apply it?