endoself comments on Causal Reference - Less Wrong

Post author: Eliezer_Yudkowsky 20 October 2012 10:12PM




Comment author: Kaj_Sotala 21 October 2012 01:41:00PM *  3 points

How could qualia cause those memories to become encoded if they were epiphenomenal to brain states?

You have it the wrong way around. In epiphenomenalism, brain states cause qualia; qualia don't cause brain states. When my brain was in a particular past state, the computation of that state produced qualia and also recorded the information of having been in that state; and recalling that memory, by emulating the past state, sensibly also produces qualia which are similar to those of the past state. I can't know for sure that the memory of the experience I have now accurately matches the experience I actually had, of course... but that problem is hardly unique to epiphenomenalist theories, or even particularly implied by epiphenomenalism.

In general, most of the questions in your comment are valid, but they're general arguments for solipsism or extreme skepticism, not arguments against epiphenomenalism in particular. (And the answer to them is that consistency is a simpler explanation than some people being p-zombies and some not, or people being p-zombies at certain points in time and not at others.)

Comment author: endoself 21 October 2012 02:39:22PM 1 point

You have it the wrong way around. In epiphenomenalism, brain states cause qualia, qualia don't cause brain states.

If qualia don't cause brain states, what caused the brain state that caused your hands to type this sentence? In order for the actual material brain to represent beliefs about qualia, there has to be an arrow from the qualia to the brain.

Comment author: Kaj_Sotala 21 October 2012 04:45:40PM *  1 point

See my original comment. It's relatively easy (well, at least it is if you accept that we could build conscious AIs in the first place) to construct an explanation of why an information-processing system would behave as if it had qualia, and why it would even represent qualia internally. But that only explains why it behaves as if it had qualia, not why it actually has them.

Comment author: endoself 21 October 2012 05:15:31PM *  2 points

I did read that before commenting, but I misinterpreted it, and now I still find myself unable to understand it. The way I read it, it seems to equivocate between knowing something as in representing it in your physical brain and knowing something as in representing it in the 'shadow brain'. You know which one is intended where, but I can't figure it out.

Comment author: Kaj_Sotala 23 October 2012 09:50:37AM 1 point
Comment author: khafra 24 October 2012 01:11:25PM 2 points

Can you describe the qualia associated with going from epiphenomenalism to functionalism/physicalism/wherever you went?

Comment author: Kaj_Sotala 26 October 2012 10:46:00AM -1 points

Not entirely sure what you're asking, but nothing too radical. I just thought about it and realized that my model was indeed incoherent about whether or not it presumed the existence of some causal arrows. My philosophy of mind was already functionalist, so I just dropped the epiphenomenalist component from it.

A bigger impact is that I'll need to rethink some parts of my model of personal identity, but I haven't gotten around to that yet.