
John_Maxwell_IV comments on A Basic Problem of Ethics: Panpsychism? - Less Wrong Discussion

-4 Post author: capybaralet 27 January 2015 06:27AM


Comment author: John_Maxwell_IV 27 January 2015 08:13:45AM *  5 points [-]

Panpsychism seems like a plausible theory of consciousness.

What's the best argument you've seen for it? I don't find it plausible myself.

What do you think of the reductionism sequence? E.g. this post might be relevant.

My interpretation of the "expanding circle" might be something like: it'd be a good thing if things with preferences increasingly found themselves preferring that the preferences of other things with preferences were also achieved. If something doesn't have preferences, I'm not that concerned about it.

Comment author: skeptical_lurker 27 January 2015 02:53:24PM 3 points [-]

If some things are conscious and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. The ideas that everything is conscious or that nothing is conscious are strictly simpler, and thus preferable by the information-theoretic version of Occam's razor, and I can rule out the second possibility because I know that I am conscious.
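To make the complexity-penalty point concrete (this is my illustration, not the commenter's, and string length is only a crude stand-in for description length), compare how much rule you need to state each theory; the predicate names are hypothetical:

```python
# Toy rendering of the info-theoretic argument: a theory that assigns
# consciousness uniformly needs no dividing rule, so its description
# is strictly shorter.
uniform = "conscious(x) = True"    # panpsychism: everything is conscious
nihilist = "conscious(x) = False"  # nothing is conscious
dividing = ("conscious(x) = True if has_cortex(x) "
            "and integrates_information(x) else False")

# The uniform theories are shorter descriptions, so an information-
# theoretic Occam's razor assigns them higher prior probability.
assert len(uniform) < len(dividing)
assert len(nihilist) < len(dividing)
```

Any real dividing rule would of course be far longer than this one-liner, which only strengthens the asymmetry the argument relies on.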

If something doesn't have preferences, I'm not that concerned about it.

Suppose a Taoist claims not to have any preferences. Should you be concerned about them?

Comment author: John_Maxwell_IV 28 January 2015 03:30:57AM 1 point [-]

If some things are conscious and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty.

If some things are computers and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. Thus by Occam's razor everything is likely a computer. Does this argument work?

My position: consciousness is in the map, not the territory.

Suppose a Taoist claims not to have any preferences. Should you be concerned about them?

Maybe, because humans often lie or have inaccurate self-knowledge.

Comment author: skeptical_lurker 28 January 2015 09:31:08AM 0 points [-]

If some things are computers and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. Thus by Occam's razor everything is likely a computer. Does this argument work?

Well, you can turn some unusual things into computers - billiard tables, model trains and the Game of Life are all Turing-complete.
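The Game of Life's rules, at least, are simple to state. Here is a minimal sketch of one generation step (my illustration, not from the thread); actually computing with it requires elaborate patterns such as glider guns and logic gates built from them:

```python
# One generation of Conway's Game of Life over a sparse set of live cells.
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one generation."""
    # Count how many live neighbours each cell (live or dead) has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 neighbours,
    # or 2 neighbours and was already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between horizontal and vertical with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker)) == blinker
```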

But I do have a good understanding of what a computer is. Perhaps my Occam's-razor-based prior is that everything is a computer, but then I observe that I can't use most objects to compute anything, so I update and conclude that most things aren't computers.

Similarly, I can observe that most things do not have emotions, or preferences, or agency, or self-awareness. I can put a mirror in front of an animal and, from its reaction, conclude that most animals aren't self-aware.

But is there any test I can perform to determine whether something experiences qualia?

Maybe, because humans often lie or have inaccurate self-knowledge.

Ok, suppose neuroscientists find a person who has no preferences for solid scientific reasons - perhaps their prefrontal cortex has been lesioned, or maybe they have no dopamine in their brain. Should you care about this person?

Comment author: capybaralet 28 January 2015 03:18:57AM *  0 points [-]

What's the best argument you've seen for it?

See skeptical_lurker's comment below.

What do you think of the reductionism sequence?

While I don't have much experience to back this up, I think it probably covers a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in. Can you please give brief summaries of the things you link to, and how they are relevant? I skimmed that article, and it doesn't seem relevant.

If something doesn't have preferences, I'm not that concerned about it.

My point is: how do you evaluate whether something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these. If I write and run the following computer program, I don't think you will be upset if I stop it:

while True:
    print("I prefer not to be interrupted")
Comment author: John_Maxwell_IV 28 January 2015 03:41:47AM *  1 point [-]

While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in.

Yes, I don't particularly like the way the sequences are written either :/ But I think the kind of thing you're talking about in this post is the sort of topic they address. LW Wiki pages are often better, e.g. see this one:

if a p-zombie is atom-by-atom identical to a human being in our universe, then our speech can be explained by the same mechanisms as the zombie's, and yet it would seem awfully peculiar that our words and actions would have one entirely materialistic explanation, but also, furthermore, our universe happens to contain exactly the right bridging law such that our experiences are meaningful and our consciousness syncs up with what our merely physical bodies do. It's too much of a stretch: Occam's razor dictates that we favor a monistic universe with one uniform set of laws.

I see this as compatible with my reply to skeptical_lurker above.

My point is: how do you evaluate whether something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these.

Agreed. I don't have any easy answer to this question. It's kind of like asking the question "if someone is ill or injured, how do you fix them?" It's an important question worthy of extensive study (at least insofar as it's relevant to whatever ethical question you're currently being presented with).

And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs that which doesn't. Occam's Razor only applies to the territory, not the map, so there's no penalty for us drawing our boundaries in as complicated & intricate a way as we like (kind of like the human-drawn country boundaries on real maps).

Comment author: capybaralet 28 January 2015 04:54:05AM *  0 points [-]

I know all about philosophical zombies.

Agreed. I don't have any easy answer to this question.

Do you have any answer at all? Or anything to say on the matter? Would you at least agree that it is of critical ethical importance, and hence worthy of discussion?

And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs that which doesn't.

Of course, but I assume you agree with me about the program I wrote?

In any case, I think it would be nice to try and forge some agreement and/or understanding on this matter (as opposed to ignoring it on the basis of our disagreement).

Comment author: John_Maxwell_IV 28 January 2015 05:44:58AM *  1 point [-]

Do you have any answer at all? Or anything to say on the matter?

Regarding modern video game NPCs, I don't think they matter in most cases--I'm moderately less concerned about them than Brian Tomasik is, although I'm also pretty uncertain (and would want to study the way NPCs are typically programmed before making any kind of final judgement).

Of course, but I assume you agree with me about the program I wrote?

Yes, that was what I meant to communicate by "Agreed". :)

Having thought about this further, I think I'm more concerned with things that look like qualia than with apparent revealed preferences. My current guess is that it wouldn't be unethical to smash a Roomba or otherwise prevent it from achieving its revealed preference of cleaning someone's house. I find it more plausible that a reinforcement-learning NPC has quasi-qualia worth nonzero moral concern. (BTW, in practice I might act as though things whose modal value estimate is zero still have some value, in order to hedge my bets.)
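For concreteness, here is the kind of reinforcement-learning NPC at issue, as a minimal tabular Q-learning sketch (the state and action names are illustrative, not from the thread). Whether updating these reward-driven value estimates involves anything like qualia is exactly the open question:

```python
# Hedged sketch of a Q-learning NPC: it learns from negative reward to
# prefer retreating over advancing. Nothing here obviously constitutes
# an experience, which is the philosophical puzzle.
import random

random.seed(0)
q = {}  # (state, action) -> estimated value
actions = ["advance", "retreat"]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update toward reward + discounted best next value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Advancing near the player is punished; the NPC learns to retreat.
for _ in range(100):
    a = choose("near_player")
    update("near_player", a, -1.0 if a == "advance" else 1.0, "safe")

assert q[("near_player", "retreat")] > q.get(("near_player", "advance"), 0.0)
```

Whether one calls the learned value table a "preference" or merely behaviour that resembles one is the map-vs-territory distinction discussed above.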