JQuinton comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion

Post author: Mitchell_Porter, 08 August 2012 01:16PM


Comment author: JQuinton, 09 August 2012 10:19:47PM, 1 point

One question I had reading this is: what does it matter if our model of human consciousness is wrong? If we create an FAI that has all of the outward functionality of consciousness, I would still consider that a win. Not all eyes that have evolved are human eyes; the same could be true of consciousness. If we manufactured some mechanical "eye" that didn't exactly replicate the internal workings of a human eye but could still do what eyes do, shouldn't we still consider it an eye? It would seem nonsensical to me to question whether this mechanical eye "really" sees on the grounds that seeing is a subjective experience that can't be truly modeled, or to deny that it's "real" seeing because it's computational seeing.