Risto_Saarelma comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion

18 Post author: Mitchell_Porter 08 August 2012 01:16PM

Comment author: Risto_Saarelma 09 August 2012 07:48:00AM 2 points

So if you accidentally cut the top of your head open while shaving and discovered that someone had gone and replaced your brain with a high-end classical computing CPU sometime while you were sleeping, you couldn't accept that you actually are an upload, since the causal structure that produces the thoughts that you are having qualia is still there? (I suppose you might object to the assumed-to-be-zombie upload you being referred to as 'you' as well.)

The reason I'm asking is that I'm a bit confused about exactly where the problems from just the philosophical part would come in with the outsourcing-to-uploaded-researchers scenario. Some kind of more concrete prediction, like that a neuromorphic AI architecturally isomorphic to a real human central nervous system just plain won't ever run as intended until you build a quantum octonion monad CPU to house the qualia bit, would be a much less confusing stance, but I don't think I've seen you take that.