infotropism comments on The Strangest Thing An AI Could Tell You - Less Wrong

Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM


Comments (574)


Comment author: Liron 15 July 2009 05:22:17AM 41 points

How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.

I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.

Comment author: infotropism 15 July 2009 07:39:09AM 6 points

Funnily enough, you realize this is quite similar to what you'd need to make Chalmers right, and p-zombies possible, right?

Comment author: wuwei 16 July 2009 12:44:57AM 3 points

I thought Chalmers was an analytic functionalist about cognition, reserving his brand of dualism for qualia.