
fubarobfusco comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion

Post author: Mitchell_Porter 08 August 2012 01:16PM 18 points




Comment author: fubarobfusco 08 August 2012 07:32:57PM 2 points

I'm hearing an invocation of the Anti-Zombie Principle here, i.e.: "If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do, namely, that they actually have consciousness to talk about" ...

Comment author: CarlShulman 08 August 2012 08:18:30PM 1 point

I'm hearing an invocation of the Anti-Zombie Principle here, i.e.: "If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do,

Yes.

namely, that they actually have consciousness to talk about" ...

Not necessarily, in the mystical sense.

Comment author: fubarobfusco 08 August 2012 08:46:30PM 3 points

Okay, to clarify: If 'consciousness' refers to anything, it refers to something possessed both by human philosophers and accurate simulations of human philosophers. So one of the following must be true: ① human philosophers can't be accurately simulated, ② simulated human philosophers have consciousness, or ③ 'consciousness' doesn't refer to anything.

Comment author: CarlShulman 08 August 2012 09:02:01PM 1 point

Dualists needn't grant your first sentence; they can claim that consciousness is epiphenomenal. I am talking about whether mystical mind features would screw up the ability of an AI to carry out our aims, not arguing for physicalism (here).