Liron comments on The Strangest Thing An AI Could Tell You - Less Wrong

Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM

Comment author: Liron 15 July 2009 05:22:17AM 41 points

How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.

I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, the walk felt different, because I was suspicious that every experience was a mass self-deception.

Comment author: huono_ekonomi 15 July 2009 08:14:29AM 13 points

Or, rather, the causal relationship is reversed: action causes conscious thought (rationalization).

Once you start looking for it, you can see evidence for this in many places. Quite a few neuroscientists have adopted this view.

Comment author: infotropism 15 July 2009 07:39:09AM 6 points

Funnily enough, you realize this is quite similar to what you'd need for Chalmers to be right, and p-zombies to be possible, right?

Comment author: wuwei 16 July 2009 12:44:57AM 3 points

I thought Chalmers was an analytic functionalist about cognition who reserves his brand of dualism only for qualia.

Comment author: DanielLC 18 October 2010 05:01:18AM 1 point

This summarizes my view on qualia. I find that far more disturbing than what you said.

Comment author: patrissimo 29 September 2010 12:58:08PM 1 point

I don't think this is 100% true, but I think it's...oh, at least 20% true, perhaps much more. I think my mechanism for predicting the future impact of my present conscious thoughts is flawed (e.g. Stumbling on Happiness, or any consistent mis-prediction about oneself, like the number of drinks consumed, junk food eaten, time it will take to complete a project, etc.). But I don't think it's pure rationalization, and the further an activity is from primal drives (sex, food), the more likely I am to successfully predict it, and so the more I think my conscious thought really matters.

One thing that really strengthens my belief in conscious action is that rationalizations are not perfect: with time and practice, you can catch yourself in them. And they are not evenly distributed by action type. Sure, I might have some hidden rationalizations (around death, my own abilities, other things that make me very uncomfortable), but there's just no way that all of the types of action I engage in have hidden rationalizations, such that my conscious model/predict/observe/revise process is flawed about everything.