Have you read A Human's Guide to Words?
If you have: taboo "consciousness," because the definition itself is what you are uncertain about. What is it, exactly, that you care whether atoms have?
In my opinion, it looks like there is no physical property at stake here - you are instead worried about whether atoms have you-care-about-them substance. Which is of course not an actual substance in the atoms - your question ultimately points back to your own ideas and preferences.
This is your own prerogative, but I think that if you let go of the question of "consciousness" for now, until you have a better idea of what physical properties correspond to it, and just ask yourself whether you care about atoms for their own sake, you will probably find that you do not, and can stop worrying about it.
This assumes that I can't care about something without first giving it a physical definition. But this is not true. I care about myself, and I care about myself even if I do not yet know the physical properties that define me. So I can also say "I care about atoms for their own sake if they are sufficiently similar to me", even if I do not yet know what it would take for them to be sufficiently similar to me.
It's not that one has to have their one true definition all laid out before any progress can be made.
But if one cannot make any physical predictions that would differ depending on whether all atoms are "fleem" or not fleem, it's usually better to take a break from worrying about whether atoms are fleem. You might just be getting sidetracked by the ol' blegg/rube problem.
Unless I am mistaken, the best theory of how to make FAI ultimately points back to my (and all y'all's) ideas and preferences.
So I guess we should taboo FAI.
I'd argue that you have no better idea of what physical properties correspond to consciousness than I do; you've simply chosen to ignore the question, because you believe you can rely on your own intuitive consciousness-detector.
I am worried about bias. Shouldn't we all be?
Panpsychism seems like a plausible theory of consciousness.
What's the best argument you've seen for it? I don't find it plausible myself.
What do you think of the reductionism sequence? E.g. this post might be relevant.
My interpretation of the "expanding circle" might be something like: it'd be a good thing if things with preferences increasingly found themselves preferring that the preferences of other things with preferences were also achieved. If something doesn't have preferences, I'm not that concerned about it.
If some things are conscious and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. The ideas that everything is conscious or that nothing is conscious are strictly simpler, and thus preferable by the information-theoretic version of Occam's razor, and I can rule out the second possibility because I know that I am conscious.
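To illustrate the intuition with a toy sketch of my own (not a real complexity measure - the example rule and the bits-per-character proxy are made up purely for illustration):
# Toy illustration of the complexity penalty: any hypothesis that divides
# conscious from non-conscious things must pay for stating its dividing rule,
# while "everything is conscious" and "nothing is conscious" state no rule at all.
def rule_cost_in_bits(dividing_rule):
    # crude proxy for description length: 8 bits per character of the rule
    return 8 * len(dividing_rule)

print(rule_cost_in_bits(""))  # no dividing rule: 0 extra bits
print(rule_cost_in_bits("conscious iff it has a recurrent self-model"))  # a hypothetical rule pays for its own statement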
If something doesn't have preferences, I'm not that concerned about it.
Suppose a taoist claims not to have any preferences. Should you be concerned about them?
If some things are conscious and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty.
If some things are computers and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. Thus by Occam's razor everything is likely a computer. Does this argument work?
My position: consciousness is in the map, not the territory.
Suppose a taoist claims not to have any preferences. Should you be concerned about them?
Maybe, because humans often lie or have inaccurate self-knowledge.
If some things are computers and some aren't, then there has to be some rule differentiating the two, which gives a complexity penalty. Thus by Occam's razor everything is likely a computer. Does this argument work?
Well, you can turn some unusual things into computers - billiard tables, model trains, and the Game of Life are all Turing-complete.
But I do have a good understanding of what a computer is. Perhaps my Occam's-razor-based prior is that everything is a computer, but then I observe that I can't use most objects to compute anything, so I update and conclude that most things aren't computers.
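A toy version of that update (the numbers here are made up by me purely to show the direction of the shift):
# made-up numbers, just to illustrate the update
prior = 0.9                # Occam-ish prior that a given object is a computer
p_fail_if_computer = 0.05  # chance I fail to compute anything with a genuine computer
p_fail_if_not = 0.999      # chance I fail to compute anything with a non-computer
posterior = (p_fail_if_computer * prior) / (
    p_fail_if_computer * prior + p_fail_if_not * (1 - prior))
print(posterior)           # roughly 0.31 - the failed observation pushes "computer" way down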
Similarly, I can observe that most things do not have emotions, or preferences, or agency, or self-awareness. I can put mirrors in front of animals to see whether they recognise themselves, and conclude that most animals aren't self-aware.
But is there any test I can perform to determine whether something experiences qualia?
Maybe, because humans often lie or have inaccurate self-knowledge.
OK, suppose neuroscientists find a person who has no preferences for solid scientific reasons - perhaps the prefrontal cortex has been lesioned, or maybe they have no dopamine in their brain. Should you care about this person?
What's the best argument you've seen for it?
See skeptical_lurker's comment, below.
What do you think of the reductionism sequence?
While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in. Can you please give brief summaries of the things you link to, and how they are relevant? I skimmed that article, and it doesn't seem relevant.
If something doesn't have preferences, I'm not that concerned about it.
My point is: how do you evaluate if something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these. If I write and run the following computer program, I don't think you will be upset if I stop it:
while True:
    print("I prefer not to be interrupted")
While I don't have too much experience to back this up, I think it is probably a lot of things I'm familiar with, elaborated at length, with perhaps a few insights sprinkled in.
Yes, I don't particularly like the way the sequences are written either :/ But I think the kind of thing you're talking about in this post is the sort of topic they address. LW Wiki pages are often better, e.g. see this one:
if a p-zombie is atom-by-atom identical to a human being in our universe, then our speech can be explained by the same mechanisms as the zombie's, and yet it would seem awfully peculiar that our words and actions would have one entirely materialistic explanation, but also, furthermore, our universe happens to contain exactly the right bridging law such that our experiences are meaningful and our consciousness syncs up with what our merely physical bodies do. It's too much of a stretch: Occam's razor dictates that we favor a monistic universe with one uniform set of laws.
I see this as compatible with my reply to skeptical_lurker above.
My point is: how do you evaluate if something has preferences? How do you disambiguate preferences from statements like "I prefer __"? Clearly we DO distinguish between these.
Agreed. I don't have any easy answer to this question. It's kind of like asking the question "if someone is ill or injured, how do you fix them?" It's an important question worthy of extensive study (at least insofar as it's relevant to whatever ethical question you're currently being presented with).
And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't. Occam's Razor only applies to the territory, not the map, so there's no penalty for us drawing our boundaries in as complicated & intricate a way as we like (kind of like the human-drawn country boundaries on real maps).
I know all about philosophical zombies.
Agreed. I don't have any easy answer to this question.
Do you have any answer at all? Or anything to say on the matter? Would you at least agree that it is of critical ethical importance, and hence worthy of discussion?
And it's possible that you and I would disagree on how to carve reality into that which has preferences we consider meaningful vs. that which doesn't.
Of course, but I assume you agree with me about the program I wrote?
In any case, I think it would be nice to try to forge some agreement and/or understanding on this matter (as opposed to ignoring it on the basis of our disagreement).
Do you have any answer at all? Or anything to say on the matter?
Regarding modern video game NPCs, I don't think they matter in most cases--I'm moderately less concerned about them than Brian Tomasik is, although I'm also pretty uncertain (and would want to study the way NPCs are typically programmed before making any kind of final judgement).
Of course, but I assume you agree with me about the program I wrote?
Yes, that was what I meant to communicate by "Agreed". :)
Having thought about this further, I think I'm more concerned with things that look like qualia than with apparent revealed preferences. I don't currently guess it'd be unethical to smash a Roomba or otherwise prevent it from achieving its revealed preference of cleaning someone's house. I find it more plausible that a reinforcement-learning NPC has quasi-qualia that are worth nonzero moral concern. (BTW, in practice I might act as though things for which my modal estimate of value is zero still have some value, in order to hedge my bets.)
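To be concrete about what I have in mind by a reinforcement-learning NPC (a toy sketch of my own; real game NPCs are presumably not implemented this way): something whose future behaviour is shaped by a reward signal rather than scripted, unlike the print loop above.
# toy sketch, not how any actual game does it
import random

values = {"flee": 0.0, "attack": 0.0}  # learned action values
def reward(action):                     # hypothetical reward signal
    return 1.0 if action == "flee" else -1.0

for _ in range(100):
    # mostly pick the currently best-valued action, occasionally explore
    action = max(values, key=values.get) if random.random() > 0.1 else random.choice(list(values))
    values[action] += 0.1 * (reward(action) - values[action])  # nudge the value toward the reward
print(values)  # whatever "preference" it has now lives in these learned values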
Suppose all information processing is inextricably linked to qualia. Now, I suppose there is information processing of a sort in rocks, in the equations of thermodynamics, motion, etc. that govern a rock's behaviour. But qualia do not imply self-awareness (1), and there's no way you can communicate with the rock. Qualia also do not imply emotions (2), and if there is neither self-awareness nor emotions then I don't see why there need be any moral considerations.
As to determining the truth of panpsychism and categorising which things have emotions, self-awareness, etc., I shall defer this problem to future superintelligences. Additionally, a CEV AI should devote a lot of resources to humans regardless of whether panpsychism is true, because most people don't believe in Peter Singer-style altruism.
1 because (a) people who meditate for many years or take a large dose of dissociative drugs can experience ego-death, where they stop conceptualising a self, but they still experience qualia; (b) most animals are not self-aware, yet intuition and Occam's razor tell me that they still experience qualia.
2 because some people experience emotional blunting, and while the world may seem emotionally grey to them, they still experience qualia. Additionally, squid do not have emotions, and again I believe they still have qualia.
EDIT: as well as lacking self-awareness and emotions, rocks also lack agency. The question of what to do with a human who, due to various incurable diseases, lacks self-awareness, emotions, and agency is left as an exercise for the reader.
How do you know what a CEV AI should do?
How do you know that squids don't have emotions?
Define agency.
You could have at least stepped up to the challenge you left to the reader.
Panpsychism seems like a plausible theory of consciousness. It raises extreme challenges for establishing reasonable ethical criteria.
It seems to suggest that our ethics is very subjective: the "expanding circle" of Peter Singer would eventually (ideally) stretch to encompass all matter. But how are we to communicate with, e.g. rocks? Our ability to communicate with one another and our presumed ability to detect falsehood and empathize in a meaningful way allow us to ignore this challenge wrt other people.
One way to argue that this is not such a problem is to suggest that humans are simply very limited in our capacity as ethical beings: our perception of ethical truth only lets us draw conclusions with any meaningful degree of certainty about other humans or animals (or maybe even life-forms, if you are optimistic).
But this is not very satisfying if we consider transhumanism. Are we to rely on AI to extrapolate our intuitions to the rest of matter? How do we know that our intuitions are correct (or do we even care? I do, personally...)? How can we tell if an AI is correctly extrapolating?