lmm comments on Can science come to understand consciousness? A problem of philosophical zombies (Yes, I know, P-zombies again.) - Less Wrong Discussion
If you can't express the question then you can't be confident other people understand you either. Remember that some people just don't have e.g. visual imagination, and don't realise there's anything unusual about them.
Now I'm wondering whether I'm conscious, in your sense. I mean, I feel emotion, but it seems to adequately correspond to your "automaton" version. I experience what I assume is consciousness, but it seems to me that that's just how a sufficiently advanced self-monitoring system would feel from the inside.
Yes. I'm wondering if these disputes simply resolve to our having different subjective experiences of what it means to be alive. In fact, maybe the mistake is assuming that p-zombies don't exist. Maybe some humans are p-zombies!
However,
seems like almost a contradiction in terms. Can a self-monitoring system become sufficiently advanced without feeling anything (just as my computer computes but, I suppose, doesn't feel)?
I think not. But I think that makes it entirely unsurprising, obvious even, that a more advanced computer would feel.
If so, I want to know why.
Because it seems like the most plausible explanation for the fact that I feel, to the extent that I do. (It also explains the otherwise quite confusing finding that our decision-making processes activate after we've acted, for many kinds of actions, even though we feel as though our decision determined the action.)
I don't know what that second thing has to do with consciousness.