John_Maxwell_IV comments on A Basic Problem of Ethics: Panpsychism? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, I don't particularly like the way the sequences are written either :/ But I think the kind of thing you're talking about in this post is the sort of topic they address. LW Wiki pages are often better, e.g. see this one:
I see this as compatible with my reply to skeptical_lurker above.
Agreed. I don't have any easy answer to this question. It's kind of like asking the question "if someone is ill or injured, how do you fix them?" It's an important question worthy of extensive study (at least insofar as it's relevant to whatever ethical question you're currently being presented with).
And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't. Occam's Razor only applies to the territory, not the map, so there's no penalty for us drawing our boundaries in as complicated & intricate a way as we like (kind of like the human-drawn country boundaries on real maps).
I know all about philosophical zombies.
Do you have any answer at all? Or anything to say on the matter? Would you at least agree that it is of critical ethical importance, and hence worthy of discussion?
Of course, but I assume you agree with me about the program I wrote?
In any case, I think it would be nice to try to forge some agreement and/or understanding on this matter (as opposed to ignoring it on the basis of our disagreement).
Regarding modern video game NPCs, I don't think they matter in most cases--I'm moderately less concerned about them than Brian Tomasik is, although I'm also pretty uncertain (and would want to study the way NPCs are typically programmed before making any kind of final judgement).
Yes, that was what I meant to communicate by "Agreed". :)
Having thought about this further, I think I'm more concerned with things that look like qualia than with apparent revealed preferences. I don't currently guess it'd be unethical to smash a Roomba or otherwise prevent it from achieving its revealed preference of cleaning someone's house. I find it more plausible that a reinforcement-learning NPC has quasi-qualia worth nonzero moral concern. (BTW, in practice I might act as though things whose modal estimated value is 0 have some value, in order to hedge my bets.)
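The hedging move in that last parenthetical is just an expected-value calculation: even when the single most likely estimate of something's moral value is 0, a small probability of nonzero value makes the expectation positive. A minimal sketch, with purely illustrative numbers (nothing here comes from the comment itself):

```python
def expected_moral_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# Hypothetical credence distribution for a reinforcement-learning NPC:
# 90% chance it has no moral value (the modal estimate), 10% chance
# it has modest value. The mode is 0, but the expectation is not.
npc = [(0.9, 0.0), (0.1, 1.0)]
print(expected_moral_value(npc))  # 0.1 -> nonzero, so hedging makes sense
```

The point of the sketch is just that acting on the modal estimate (0) and acting on the expectation (0.1) come apart whenever the distribution has a long tail of nonzero-value possibilities.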