
John_Maxwell_IV comments on A Basic Problem of Ethics: Panpsychism?

Post author: capybaralet, 27 January 2015 06:27AM




Comment author: John_Maxwell_IV, 28 January 2015 03:41:47AM

> While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in.

Yes, I don't particularly like the way the sequences are written either :/ But I think the kind of thing you're talking about in this post is the sort of topic they address. LW Wiki pages are often better, e.g. see this one:

> if a p-zombie is atom-by-atom identical to a human being in our universe, then our speech can be explained by the same mechanisms as the zombie's, and yet it would seem awfully peculiar that our words and actions would have one entirely materialistic explanation, but also, furthermore, our universe happens to contain exactly the right bridging law such that our experiences are meaningful and our consciousness syncs up with what our merely physical bodies do. It's too much of a stretch: Occam's razor dictates that we favor a monistic universe with one uniform set of laws.

I see this as compatible with my reply to skeptical_lurker above.

> My point is: how do you evaluate if something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these.

Agreed. I don't have any easy answer to this question. It's kind of like asking the question "if someone is ill or injured, how do you fix them?" It's an important question worthy of extensive study (at least insofar as it's relevant to whatever ethical question you're currently being presented with).
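For concreteness, here's a minimal sketch (hypothetical names and behavior, purely an illustration) of how a bare verbal claim and a revealed preference can come apart:

    # Hypothetical illustration: two toy "agents" that relate differently
    # to the statement "I prefer clean rooms".

    class Talker:
        """Emits a preference claim but never acts on it."""
        def speak(self):
            return "I prefer clean rooms."

        def act(self, room_is_dirty):
            return "do nothing"

    class Cleaner:
        """Claims nothing, but its behavior systematically tracks a goal."""
        def act(self, room_is_dirty):
            return "clean" if room_is_dirty else "do nothing"

    print(Talker().speak())    # "I prefer clean rooms."
    print(Talker().act(True))  # "do nothing" (the claim is just a string)
    print(Cleaner().act(True)) # "clean" (the preference shows up in behavior)

Of course, a toy example like this only shows that the two can come apart; it doesn't answer the hard question of which behavioral regularities count as genuine preferences.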

And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't. Occam's Razor only applies to the territory, not the map, so there's no penalty for us drawing our boundaries in as complicated & intricate a way as we like (kind of like the human-drawn country boundaries on real maps).

Comment author: capybaralet, 28 January 2015 04:54:05AM

I know all about philosophical zombies.

> Agreed. I don't have any easy answer to this question.

Do you have any answer at all? Or anything to say on the matter? Would you at least agree that it is of critical ethical importance, and hence worthy of discussion?

> And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't.

Of course, but I assume you agree with me about the program I wrote?

In any case, I think it would be nice to try and forge some agreement and/or understanding on this matter (as opposed to ignoring it on the basis of our disagreement).

Comment author: John_Maxwell_IV, 28 January 2015 05:44:58AM

> Do you have any answer at all? Or anything to say on the matter?

Regarding modern video game NPCs, I don't think they matter in most cases--I'm moderately less concerned about them than Brian Tomasik is, although I'm also pretty uncertain (and would want to study the way NPCs are typically programmed before making any kind of final judgement).
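For what it's worth, the typical NPC I have in mind is a hand-scripted state machine along these lines (a hypothetical sketch of a common pattern, not any particular game's code):

    # Hypothetical sketch of a common scripted-NPC pattern: a small
    # finite-state machine with hand-written transitions. There is no
    # learning and no reward signal anywhere in the loop.

    def npc_update(state, player_distance):
        """Return the NPC's next state given the player's distance."""
        if state == "idle" and player_distance < 10:
            return "chase"
        if state == "chase" and player_distance < 2:
            return "attack"
        if state in ("chase", "attack") and player_distance >= 10:
            return "idle"
        return state

    state = "idle"
    for distance in [8, 1, 20]:
        state = npc_update(state, distance)
        print(distance, "->", state)  # 8 -> chase, 1 -> attack, 20 -> idle

Nothing in that loop looks like a candidate for moral concern to me, which is part of why I'd want to check how far typical NPC code departs from this pattern.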

> Of course, but I assume you agree with me about the program I wrote?

Yes, that was what I meant to communicate by "Agreed". :)

Having thought about this further, I think I'm more concerned with things that look like qualia than with apparent revealed preferences. My current guess is that it wouldn't be unethical to smash a Roomba or otherwise prevent it from achieving its revealed preference of cleaning someone's house. I find it more plausible that a reinforcement-learning NPC has quasi-qualia that are worth nonzero moral concern. (BTW, in practice I might act as though things whose modal estimated value is zero still have some value, in order to hedge my bets.)
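To make "reinforcement-learning NPC" concrete, here's a minimal tabular Q-learning sketch (hypothetical; real game AI varies widely). The reward signal is the part that seems functionally closest to pleasure and pain:

    # Minimal tabular Q-learning loop, as a sketch of what a
    # reinforcement-learning NPC might look like. Hypothetical toy
    # environment: advancing when "safe" is rewarded, advancing into
    # "danger" is punished.

    import random

    actions = ["advance", "retreat"]
    q = {}  # (state, action) -> estimated value

    def choose(state, epsilon=0.1):
        """Pick an action, mostly greedily, sometimes at random."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q.get((state, a), 0.0))

    def learn(state, action, reward, next_state, alpha=0.5, gamma=0.9):
        """Standard Q-learning update toward reward plus discounted future value."""
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    for _ in range(200):
        state = random.choice(["safe", "danger"])
        action = choose(state)
        good = (state, action) in [("safe", "advance"), ("danger", "retreat")]
        learn(state, action, 1.0 if good else -1.0, state)

    print(choose("safe", epsilon=0.0))    # learned policy: "advance"
    print(choose("danger", epsilon=0.0))  # learned policy: "retreat"

Whether an internal scalar that shapes behavior like this amounts to quasi-qualia is exactly the open question; the sketch just shows why an RL agent feels like a different case from the scripted state machine above.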