I'm not going to be able to adequately answer comments this long in the future, especially because I disagree with the bulk of their content. You're making a large number of underlying assumptions that you don't state explicitly, and you don't appear to be aware of them yourself.
However, if you really are one who doesn't experience hot empathy, then you aren't really allowed to be offended by the fact that I feel reduced hot empathy towards you, because that's just tit for tat. ;)
I think you're committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it's the only one. Some of your thinking seems to be motivated by this.
If you're confused by what I say, see my comments elsewhere in this thread, so I don't have to repeat myself. You said in a later paragraph that you care about my preferences, and I bet our preferences are pretty similar, despite our emotional lives probably being quite different.
(self-diagnoses of sociopathy are insufficient evidence that someone actually doesn't care about other people).
Psychopathy and sociopathy are much broader concepts than non-empathy, and even these broader concepts don't imply sadism. Be careful not to confuse them, as that has the potential to insult a lot of people.
It seems like humans typically only extend altruism towards things which reciprocate altruism in return.
Could be. Do you find this principle morally sound? Do you propose being altruistic only towards people who can reciprocate? Can that still be called altruism?
I have a dream, that one day agents will be judged not by the substrate of their code, but by the behavioral output of whatever algorithm they run.
That's fine if we have no more direct methods. But if you knew what kind of computation suffering is, and you could directly find out whether someone suffers by scanning their brain, why on earth would you not use that instead?
First of all, not doing so violates the anti-zombie principle: if suffering is a physical computation, it can in principle be detected directly, not only through behaviour. Second, insisting on visible behavioural output means you don't care about paralyzed people. I think this insistence on visible output is what confuses your thinking the most.
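To make the paralysis point concrete, here's a toy sketch (Python; the agent model and all names are hypothetical, invented for illustration): an agent whose internal state encodes suffering but which produces no behaviour at all. A purely behavioural test misses it; direct inspection of the state does not.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # Hypothetical internal state: whether the suffering-computation is running.
    is_suffering: bool
    # Whether the agent can produce any visible output (False ~ fully paralyzed).
    can_act: bool

    def observable_behaviour(self) -> list[str]:
        # Suffering only shows up in behaviour if the agent can act at all.
        return ["wince", "cry out"] if (self.is_suffering and self.can_act) else []

def behavioural_test(agent: Agent) -> bool:
    # Judge only by visible output, as the quoted principle demands.
    return len(agent.observable_behaviour()) > 0

def direct_scan(agent: Agent) -> bool:
    # Judge by inspecting the computation itself (the "brain scan").
    return agent.is_suffering

paralyzed = Agent(is_suffering=True, can_act=False)
print(behavioural_test(paralyzed))  # False: the behavioural test misses them
print(direct_scan(paralyzed))       # True: the direct method does not
```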
So...if you want to define "suffering" to be referring to specific algorithms, I'm comfortable with that...but this discussion really isn't about suffering, is it? It's about morality.
You need terminal values to talk about morality, and as far as I'm concerned, the terminal values of human beings are in many situations (though not all) determined by their affects, such as suffering.
Bleh...yeah. It is bizarre. How about we don't call it "suffering", and just focus on "bad thing that we want to avoid" for now.
Because the bad thing most people want to avoid is suffering, and you're butchering the concept.
I'm more trying to get at what is morally relevant about suffering, not defining suffering itself. Language is filled with fuzzy categories that dissolve under the application of rigor.
I've got no problem with your goal, but I'm sorry, you don't seem to be applying that rigor. From my POV you're taking suffering, throwing everything that's important about it in the trash can, and inventing your own concept that has nothing to do with what people mean when they use the word. Why should I care about a concept you produced from thin air?
"I care about the preferences of all agents X who have this statement embedded in their algorithm".
All I can say about this is that whether some computation is a person doesn't affect my altruism towards them whatsoever.
I do care about whether a snake or a bee has the computational equivalent of suffering happening in their brains, because I know from personal experience that suffering sucks, and I want less of it in this universe. I might care about what a paper clipper feels, but that would be dwarfed in importance by everything else that it does.
Affects like suffering are not the only factor when I'm deciding where to extend my altruism either, since my resources are limited.
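For contrast, the criterion quoted above reads, taken literally, like a syntactic check on an agent's source code. A deliberately crude sketch of that literal reading (Python assumed; all names hypothetical; run as a script file, since `inspect.getsource` needs source on disk):

```python
import inspect

# The quoted care-statement, kept on a single line so the substring
# test below can find it verbatim.
CARE_STATEMENT = "I care about the preferences of all agents X who have this statement embedded in their algorithm"

def reciprocating_agent():
    # This agent's source literally contains the care-statement.
    motto = "I care about the preferences of all agents X who have this statement embedded in their algorithm"
    return motto

def paperclipper():
    return "maximize paperclips"

def deserves_altruism(agent) -> bool:
    # Purely syntactic: is the statement embedded in the agent's source?
    return CARE_STATEMENT in inspect.getsource(agent)

print(deserves_altruism(reciprocating_agent))  # True
print(deserves_altruism(paperclipper))         # False
```

Note that nothing in that check touches what actually matters to me: whether the suffering-computation is running in the agent.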
I think you're committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it's the only one. Some of your thinking seems to be motivated by this.
The mind projection fallacy is when you confuse the map with the territory, and preferences with facts. What I'm doing is assuming other humans are like me, a heuristic which does in fact generally work.
But even so, I did mention:
...I don't actually care about Hot Empathy either. What I care about are your preferences...
I thought this draft paper by Anders Sandberg was a well-thought-out essay on the morality of experiments on brain emulations. Is there anything in it you disagree with, or anything you think he should handle differently?
http://www.aleph.se/papers/Ethics%20of%20brain%20emulations%20draft.pdf