Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Ethics of Brain Emulation

1 Post author: summerstay 04 December 2013 07:19PM

I felt like this draft paper by Anders Sandberg was a well-thought-out essay on the morality of experiments on brain emulations. Is there anything you disagree with here, or think he should handle differently?

http://www.aleph.se/papers/Ethics%20of%20brain%20emulations%20draft.pdf

Comments (31)

Comment author: DanielLC 06 December 2013 04:23:43AM 3 points [-]

I haven't read much of it, but the beginning seems to be saying that the animal research part is unethical. I'm not saying it isn't, but that's not what we should be worried about. We're using animals as food! We are raising them, in rather unsavory conditions, by the billions. If you let that stand, but object to a little animal research, your priorities are way out of whack.

Comment author: hyporational 07 December 2013 05:41:45AM *  1 point [-]

Also, saying something is evil doesn't mean it's not a necessary evil.

I think animal research has more potential to make the animals suffer than growing them for food, if both try to minimize suffering and other things are equal. Of course, the sheer difference in scope means more suffering will happen in the food industry through incompetence than in research by intention.

Comment author: DanielLC 07 December 2013 05:46:42AM 2 points [-]

The problem is that food scales. If you do animal research, you're causing distress to the animals, but it's constant. It doesn't matter if there's one person or billions. You only have to do the research once. Food isn't like that. If you want to feed a billion people, it requires a billion times more animal cruelty than feeding one person.
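The fixed-versus-marginal distinction here can be put in a toy model (all numbers are made up purely for illustration):

```python
# Toy model of the scaling argument: research imposes a roughly
# fixed amount of animal distress no matter how many people benefit,
# while food production scales linearly with the number of people fed.
# All numbers are purely illustrative.

def research_distress(people_served, fixed_cost=1000):
    # One experiment serves everyone; the cost does not grow.
    return fixed_cost

def food_distress(people_served, cost_per_person=10):
    # Each additional person fed means additional animals raised.
    return cost_per_person * people_served

for n in (1, 1_000, 1_000_000_000):
    print(n, research_distress(n), food_distress(n))
```

Under these made-up numbers, research "costs" the same whether it benefits one person or a billion, while food cost grows a billionfold.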

Comment author: hyporational 07 December 2013 05:52:09AM *  0 points [-]

I guess what I tried to say is that cruelty isn't necessary for growing animals for food, but it is necessary for certain kinds of research.

Comment author: hyporational 07 December 2013 05:50:16AM *  0 points [-]

I edited the comment before you answered. I don't think we really disagree here. Just wanted to point out why the paper might focus more on animal research than food industry.

Research scales too, just not as much.

Comment author: DanArmak 07 December 2013 12:11:45PM 0 points [-]

The vast majority of animals being raised for food aren't in environments that try to minimize suffering even slightly.

Comment author: hyporational 07 December 2013 12:17:06PM 0 points [-]

See my other two comments.

Comment author: ThrustVectoring 05 December 2013 03:03:31AM 3 points [-]

I think a one-sentence summary, a question, and a link to a draft paper is something that belongs in an open thread and not as its own post in discussion.

Comment author: somervta 05 December 2013 08:20:45AM 5 points [-]

I think the traditional [Link] tag should be added.

Comment author: joaolkf 05 December 2013 03:15:35AM *  5 points [-]

It would be a weird criterion indeed to hold about this post, when people constantly post links to videos/papers while solely copy-pasting the summary. I don't understand why you would say this. I definitely wouldn't have found this paper if it had been posted in an open thread, which I never read since I do not expect to find anything relevant to my research there, other than some social interaction. I do plan on commenting on it further later.

Comment author: shminux 04 December 2013 10:15:31PM -1 points [-]

It's an interesting review of the subject matter, but I have trouble taking seriously a paper discussing suffering of software without ever defining the term suffering and how this definition applies to software.

Comment author: Ishaan 06 December 2013 01:21:01AM *  4 points [-]

From the eliminative materialist perspective we should hence be cautious about ascribing or not ascribing suffering to software, since we do not (yet) have a good understanding of what suffering is (or rather, what the actual underlying component that is morally relevant, is).

They acknowledged it.

Comment author: hyporational 05 December 2013 06:15:41AM *  1 point [-]

What kind of a definition would satisfy you?

IASP defines pain as "An unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage."

I'm not sure that definition can be understood without having experienced pain, or other unpleasant sensations. So if we can't even have an objective scientific definition of pain, why wouldn't we be satisfied with "everyone who has experienced suffering knows what it is, and that's as good a definition as we can get with modern science"?

Comment author: Ishaan 06 December 2013 01:20:21AM *  1 point [-]

Because then the dualists will win! /s

I guess I take it as a general principle of epistemology that things which cannot be defined rigorously in some language, without contradiction, don't exist?

In any case, I don't think coming up with a definition of "suffering" is that hard. I rather like my definition. I came up with it while trying to settle ethical questions concerning non-human animals.


Taboo suffering. What's the bad thing that we want to avoid?

I don't like it when other humans experience things which are extremely contrary to their preferences (I have altruism).

Humans are not the only class of things for which I experience altruism. Let's define a class of things towards which I experience altruism as "person-like beings".

A Being is 1) a type of object 2) which manipulates its surroundings in a pattern 3) which suggests that it has certain goals. The defining property of a Being is intelligence. A Paperclipper is a Being, but not a person...I feel no altruism towards the paperclipper, because while it's intelligent it is not a Person to me.

When I probe my moral intuition, the Personish-ness of a Being seems to be most strongly related to its degree of preference for all other person-like beings to have their preferences fulfilled. I think this is the only necessary condition for me to feel altruism towards an object, but I'm uncertain. So the defining quality of a Person, then, is Empathy for others, with an intelligence multiplier.

In any case, the Bad Thing we want to prevent is the existence of Person-Beings who are not having their preferences fulfilled.

In other words, "suffering" is when an intelligent and empathetic object does not get its preferences fulfilled.

(Also, yes, I bite the bullet - humans with less empathy are less person-like within the semantic framework constructed above. I think most humans are roughly in the same spectrum of intelligence for moral purposes, but in edge cases I have to bite that bullet as well - though I'm a bit less comfortable with that. The "empathy" weighting seems much more important than the "intelligence" weighting.)


...Admittedly this is very rough and I'm sure you can poke holes in it (for example, the fact that the definition of person-like is self-referential could be exploited), but as per my own moral intuitions it seems roughly accurate. I think that if unstructured idle thought produces something that seems close to correct, then with sufficient thinking and modification we could come up with something that is correct.

Comment author: hyporational 06 December 2013 06:59:30AM *  2 points [-]

Upvoted you back to zero. Let me try to poke a few holes, in good will.

What's the bad thing that we want to avoid?

If you just look at the behavioural instead of the experiential aspect of suffering, this already eliminates anything that could be normally understood by the word.

I don't like it when other humans experience things which are extremely contrary to their preferences

Preferences do not depend on the existence of suffering, although suffering seems to depend on the existence of preferences.

I feel no altruism towards the paperclipper, because while it's intelligent it is not a Person to me.

Taboo person.

In other words, "suffering" is when an intelligent and empathetic object does not get its preferences fulfilled.

Are you saying that nonempathetic human beings can't suffer? I find that claim bizarre.

humans with less empathy are less person-like within the semantic framework constructed above.

I'm a human being with less empathy, and I'm ready to protect my preferences, so be careful ;)

I think that if unstructured idle thought produces something that seems close to correct, then with sufficient thinking and modification we could come up with something that is correct.

With the current state of science, I personally don't think we need to define suffering any more than we need to define colours. Once we know what happens in the brain when a person reports they are suffering, we know what suffering is and how to measure it.

Comment author: Ishaan 06 December 2013 08:18:59PM *  -1 points [-]

I'm a human being with less empathy, and I'm ready to protect my preferences, so be careful ;)

Just to clarify, I'm talking about the "I don't care about the suffering of other humans" / sociopathy sort of no-empathy, not the "I have trouble interpreting facial expressions" / autism sort of no-empathy. It's unfortunate that we use the same word for those. Some psychologists use "Hot empathy (feeling)" and "Cold empathy (perception)" to differentiate.

And, since I can't look at your brain directly, I wouldn't actually feel reduced altruism towards you unless you actually did an action which demonstrated callous disregard for other people (self-diagnoses of sociopathy are insufficient evidence that someone actually doesn't care about other people).

However, if you really are one who doesn't experience hot empathy, then you aren't really allowed to be offended by the fact that I feel reduced hot empathy towards you, because that's just tit for tat. ;)

Although, I don't actually care about Hot Empathy either. What I care about are your preferences - do you care about others as a (non-instrumental) value? Hot Empathy is where most humans derive their altruistic preferences from, but if you derive altruistic preferences via some other route then that works for me.

Are you saying that nonempathetic human beings can't suffer? I find that claim bizarre.

Bleh...yeah. It is bizarre. How about we don't call it "suffering", and just focus on "bad thing that we want to avoid" for now.

It seems like humans typically only extend altruism towards things which reciprocate altruism in return. Why are humans more bothered by the suffering of dogs than they are by the suffering of pigs, though the two animals are of comparable intellect? Other than mere familiarity, it's because the former reciprocates altruism. It's harder to slaughter something that shows affection towards you.

If you just look at the behavioural instead of the experiential aspect of suffering, this already eliminates anything that could be normally understood by the word.

I have a dream, that one day agents will be judged not by the substrate of their code, but by the behavioral output of whatever algorithm they run.

First of all, not doing so violates the anti-zombie principle, and second of all, if we interact with aliens or AI, I want us to be friends, and I want AI we design to consider them as friends too. So...if you want to define "suffering" to be referring to specific algorithms, I'm comfortable with that...but this discussion really isn't about suffering, is it? It's about morality. And morality shouldn't care what sort of substrate your algorithm runs on, nor should it really care what specific algorithm you use except with regard to its output. (Though I can think of some fun edge cases here if you want to talk about that)

Like the paper said,

we do not (yet) have a good understanding of what suffering is (or rather, what the actual underlying component that is morally relevant, is)

I'm more trying to get at what is morally relevant about suffering, not defining suffering itself. Language is filled with fuzzy categories that dissolve under the application of rigor.

Taboo person.

Okay, so in this context, being classed as a Person means that in addition to intelligence, the following thing is approximately true:

"I care about the preferences of all agents X who have this statement embedded in their algorithm".

(Yes, this is uncomfortable for me too. I haven't worked out a non-self-referential version.)

So, if snakes don't even help each other, they aren't people at all save for their tiny little spark of intelligence. If bees help each other sometimes, but never other species, they aren't people at all because the altruism is only directed towards the colony.

When mice altruistically free other rodents from cages, it is a spark of personhood...but it's limited because the mouse will only do this for species who display affective cues which it can understand. It doesn't have the cognitive capacity to understand emotions in the abstract and apply that knowledge to, say, altruism towards a human or bird. So it's not very person-ish...but we certainly wouldn't torture it.

Dolphins, dogs, elephants, apes, etc...show cross species altruism and a high degree of intelligence. They are very person-ish. We should be really nice to them, in proportion to how person-ish they are.

A Paperclipper is sort of like a hyper-intelligent bee or snake. We don't really care how it feels. A Friendly SAI, on the other hand, is even more person-ish than a human. We'd never want to violate the preferences of a Friendly SAI. (this is rather tautological, of course)

Part of the problem is that, in order to explain my idea, I took certain words and re-defined them away from their common usage to suit my purposes. I don't know how to say this using the words we have now though. And the other problem is that it's sloppy. I haven't thought through this nearly enough.

But the general direction feels both intuitively comfy when I apply it to animals (that's the closest thing to a truly alien mind that we have right now) and comes with the bonus of being somewhat pragmatic with its tit-for-tat attitude towards what sorts of beings we should be friendly towards.

Comment author: hyporational 07 December 2013 04:21:15AM *  0 points [-]

I'm not going to be able to adequately answer comments this long in the future, especially because I disagree with the bulk of their content. You're making a huge number of underlying assumptions you don't seem to be explicitly stating, and it seems you're not aware of those assumptions either.

However, if you really are one who doesn't experience hot empathy, then you aren't really allowed to be offended by the fact that I feel reduced hot empathy towards you, because that's just tit for tat. ;)

I think you're committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it's the only one. Some of your thinking seems to be motivated by this.

See my comments in this thread if you're confused by what I say so that I don't have to reiterate myself. You said in a later paragraph you care about my preferences and I bet our preferences are pretty similar, despite our emotional life probably being quite different.

(self-diagnoses of sociopathy are insufficient evidence that someone actually doesn't care about other people).

Psychopathy and sociopathy are much wider concepts than nonempathy. Even these wider concepts don't imply sadism either. Be careful not to confuse them, as that has the potential to insult a lot of people.

It seems like humans typically only extend altruism towards things which reciprocate altruism in return.

Could be. Do you find this principle morally sound? Do you propose being altruistic only towards people who can reciprocate it to you? Can that be called altruism?

I have a dream, that one day agents will be judged not by the substrate of their code, but by the behavioral output of whatever algorithm they run.

That's fine if we have no methods that are more direct. If you knew what kind of computation suffering is, and you can directly find out if someone suffers by scanning their brain, why on earth would you not rather use that?

First of all, not doing so violates the anti-zombie principle

Insisting on visible behavioural output means you don't care about paralyzed people. I think insisting on visible output is the part that confuses your thinking the most.

So...if you want to define "suffering" to be referring to specific algorithms, I'm comfortable with that...but this discussion really isn't about suffering, is it? It's about morality.

You need to have terminal values to talk about morality, and as far as I'm concerned terminal values in human beings are in many situations, not all, determined by their affects, like suffering.

Bleh...yeah. It is bizarre. How about we don't call it "suffering", and just focus on "bad thing that we want to avoid" for now.

Because the bad thing most people want to avoid is suffering, and you're butchering the concept.

I'm more trying to get at what is morally relevant about suffering, not defining suffering itself. Language is filled with fuzzy categories that dissolve under the application of rigor.

I've got no problem with your goal, but I'm sorry, you don't seem to be applying the rigor. From my POV you're taking suffering, taking everything that's important about it, throwing it in the trash can and inventing your own concept that has nothing to do with what people mean when they use the word. Why should I care about this concept you produced from thin air?

"I care about the preferences of all agents X who have this statement embedded in their algorithm".

All I can say about this is that whether some computation is a person doesn't affect my altruism towards them whatsoever.

I do care about whether a snake or a bee has the computational equivalent of suffering happening in their brains, because I know from personal experience that suffering sucks, and I want less of it in this universe. I might care about what a paper clipper feels, but that would be dwarfed in importance by everything else that it does.

Affects like suffering are not the only factor when I'm deciding where to extend my altruism either, since my resources are limited.

Comment author: Ishaan 07 December 2013 08:10:12PM *  -2 points [-]

I think you're committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it's the only one. Some of your thinking seems to be motivated by this.

Mind projection fallacy is when you confuse map with territory and preferences with facts. What I'm doing is assuming other humans are like me - a heuristic which does in fact generally work.

But even so, I did mention:

I don't actually care about Hot Empathy either. What I care about are your preferences - do you care about others as a (non-instrumental) value? Hot Empathy is where most humans derive their altruistic preferences from, but if you derive altruistic preferences via some other route then that works for me.

Does that ameliorate the criticism?

Even these wider concepts don't imply sadism either. Be careful not confuse them, as that has potential to insult a lot of people.

Does that mean you are offended? My apologies if so, I should have been more precise with language. However, I'm not sure why you think I confused sociopathy (lack of guilt, sympathetic pain) with sadism (pleasure via pain of others). Those two are almost opposites.

Insisting on visible behavioural output means you don't care about paralyzed people.

Of course not. You still have to use the computation, but morally speaking you're interested in the outputs of the computation. In the case of the paralyzed person, you look at their brain, see what their outputs would be if they were in a different situation, and act accordingly.

The reason we can't just define suffering as a specific computation present in the brain is because when we are faced with other minds who use different computations to arrive at roughly the same output per input, we won't recognize them as suffering...unless we define suffering in relation to input-output in the first place.

For example, most humans compute altruism via interactions between the amygdala and the vmPFC. Now, if someone doesn't compute altruism that way, but still exhibits altruistic behavior...then isn't it exactly the same thing? Weren't you disturbed, earlier in this conversation, when you thought that I was presuming to judge a person based on their internal states rather than their behavior?

We obviously still look at the computation, but the reason we are looking is to figure out what it wishes to output in response to various inputs. That's what a computation is...a bridge between inputs into outputs.

I'm not sure if I'm explaining this correctly...a computation can't be intrinsically suffering or intrinsically pleasure, and claiming that it is commits some sort of essentialism which doesn't have a name yet...computational essentialism? You could take the exact same computation that represents suffering in one creature and repurpose it entirely by changing the other computations with which it interacts. You can't just point to some computation and say, "this is Suffering, no matter what the surrounding context is".
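The point that the same input-output behavior can be realized by structurally different computations can be illustrated with a deliberately trivial (and non-moral) sketch: two different algorithms that are behaviorally indistinguishable on every input.

```python
# Two different internal computations realizing the same
# input-output function. On the view above, an evaluator who only
# cares about behavior has no grounds to distinguish them, even
# though the algorithms are structurally very different.

def sum_iterative(xs):
    # Loop with an accumulator.
    total = 0
    for x in xs:
        total += x
    return total

def sum_recursive(xs):
    # No loop, no accumulator: recursion on the tail of the list.
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])

# Behaviorally indistinguishable on these inputs:
for xs in ([], [5], [1, 2, 3], list(range(100))):
    assert sum_iterative(xs) == sum_recursive(xs)
```

The analogy is loose, of course: the claim in the comment is about moral evaluation of minds, not about summing lists, but the underlying point about equivalent input-output mappings is the same.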

I'm sorry, you don't seem to be applying the rigor. From my POV you're taking suffering, taking everything that's important about it, throwing it in the trash can and inventing your own concept that has nothing to do with what people mean when they use the word. Why should I care about this concept you produced from thin air?

Acknowledged. Like I said:

Part of the problem is that, in order to explain my idea, I took certain words and re-defined them away from their common usage to suit my purposes. I don't know how to say this using the words we have now though. And the other problem is that it's sloppy. I haven't thought through this nearly enough.

But your experiential definition of suffering is, by definition, inaccessible. If you define suffering that way, then the word will dissolve later on, much like words like "free will" tend to either dissolve or change definition so drastically that it scarcely seems like the same thing. The definition needs to change because the original definition doesn't make sense. Qualia only applies to you, not to others.

I know from personal experience that suffering sucks, and I want less of it in this universe.

(by the way, this is pretty much the definition of the amygdala-vmPFC brand of "empathy" so I'm not sure why you refer to yourself as "low empathy". Or did you think that by "empathy" I was referring to mere mirroring the affective states of those around you, like how people cry at movies or something?)

comments this long

Can't be helped I'm afraid - this is one of those situations where brevity would take more effort. Not to worry, I don't feel offended if people don't reply to my comments, if that's why you felt the need to mention that you might not be able to reply!

Comment author: hyporational 08 December 2013 04:07:28AM *  2 points [-]

I guess I'll just be brief myself then.

Mind projection fallacy

Typical mind fallacy.

I'm not sure why you think i confused sociopathy (lack of guilt, sympathetic pain) with sadism (pleasure via pain of others).

I was more concerned about the nonempathy-psychopathy confusion. I'm not offended, but other people will be.

most humans compute altruism via interactions between the amygdala and the vmPFC

You don't know that, but more importantly naming brain regions doesn't explain anything. It's not necessary to bring real brains to the discussion.

referring to mere mirroring the affective states

Perhaps not mere, but that's how people use the word.

Qualia only applies to you, not to others.

Only if you're a solipsist. When people claim to have qualia, this is evidence they have qualia, because I have qualia, and they have brains similar to mine.

If we can make a high resolution record of what happens in their brain when they report qualia, we can look at what kind of computation those qualia are, and therefore determine if other agents have them too.

Comment author: Ishaan 08 December 2013 07:23:03AM *  0 points [-]

If we can make a high resolution record of what happens in their brain when they report qualia, we can look at what kind of computation those qualia are, and therefore determine if other agents have them too.

I'm confused...you seem to be suggesting that we use behavioral output to determine which parts of the brain are responsible for qualia, which you say should define morality... didn't you just tell me that I shouldn't use behavioral output to define my morality?

If we did it the way you said, and looked at the brain to see what happened when people reported perceiving things, we'd find out some cool things about human perception. However, there's no guarantee that other minds will use the same computation. That's why I'm emphasizing that it's important to focus on the input-output functions of the algorithm, rather than the content of the algorithm itself. (Again, this does not mean we ignore the algorithm altogether - it means that we look at the algorithm with respect to what it would output for a given input - so we still care about paralyzed people, brains in vats, etc...since we can make guesses as to what they would output given minor changes to the situation.)

(Not to mention, there is a cascade of things happening from the moment your eyes perceive red to the moment your mouth outputs "Yeah, that's red" and looking at an actual brain will tell you nothing about which part of the computation gets the "qualia" designation. At best, you'll find some central hubs which handle information from many parts. Qualia, like free will, is a philosophical question - all the neuroscience knowledge in the world won't help answer it. Neuroscience might help eliminate some obviously wrong hypotheses, as it did with free will, but fundamentally this is a question that can and should be settled without neuroscience. )

Comment author: hyporational 08 December 2013 08:10:05AM *  0 points [-]

didn't you just tell me that I shouldn't use behavioral output to define my morality?

There's probably a lot of misunderstanding going on between us. I thought you meant you always need the output. In my interpretation you only need the output once for a particular quale, in the optimal situation. After that, you can just start scanning brains or programs for similar computations. How much output we need, if any, depends on what stage of understanding we are at.

However, there's no guarantee that other minds will use the same computation.

True. However, if the reporting of qualia corresponds to certain patterns of brain activity, and that brain activity can be expressed mathematically, then we have a computation and we can think about other ways the computation could be performed. We might even be able to test different forms of the computation on EMs, and see what they report.

"Yeah, that's red" and looking at an actual brain will tell you nothing about which part of the computation gets the "qualia" designation.

This is incorrect, because there are temporal differences in brain activity. Light on your retina doesn't instantly transfer information to all parts of your brain responsible for visual processing. Also, there's no theoretical limitation on temporarily disabling certain brain areas or even single neurons, and examining how that corresponds to reporting of qualia.

Qualia, like free will, is a philosophical question - all the neuroscience knowledge in the world won't help answer it.

You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?

Comment author: army1987 08 December 2013 09:40:31AM 1 point [-]

Mind projection fallacy is when you confuse map with territory and preferences with facts. What I'm doing is assuming other humans are like me - a heuristic which does in fact generally work.

He said “typical mind fallacy”, not “mind projection fallacy”.

Comment author: Ishaan 08 December 2013 11:32:29PM *  -1 points [-]

oops. thanks!

Comment author: shminux 05 December 2013 08:14:47AM *  0 points [-]

There is a big difference between pain and suffering, though there is certainly some overlap. Suffering is the important one to define.

why wouldn't we be satisfied with "everyone who has experienced suffering knows what it is, and that's as good a definition we can get with modern science"

Because this "definition" does not help us figure out whether low-complexity WBEs suffer the same way humans do.

Comment author: hyporational 05 December 2013 09:23:00AM *  0 points [-]

You didn't answer my question. My point was that pain is simpler than suffering, and even scientists who study it can't objectively define it.

Because this "definition" does not help us figure out whether low-complexity WBEs suffer the same way humans do.

Are you suggesting we shouldn't even talk about their potential suffering then? On the same grounds we shouldn't talk about animal suffering either. That human beings suffer is evidence for low-complexity WBEs and animals being capable of that too.

By the time we can make low-complexity WBEs we'll probably have some understanding of what suffering computationally is, but it might be too late to start philosophizing about it then.

Comment author: shminux 05 December 2013 04:04:32PM 0 points [-]

You didn't answer my question. My point was that pain is simpler than suffering, and even scientists who study it can't objectively define it.

First, the "objective" part of pain is known as nociception and can likely be studied in real or simulated organisms. The subjective part of pain need not be figured out separately from other qualia, like perception of color red.

Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.

Are you suggesting we shouldn't even talk about their potential suffering then?

I think we have to work on formalizing qualia in general before we can make progress in understanding "computational suffering" specifically.

it might be too late to start philosophizing about it then

I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.

Comment author: hyporational 05 December 2013 04:34:22PM *  0 points [-]

You were so busy refuting me you still didn't answer this question: what kind of a definition of suffering would satisfy you? So that people could talk about it without it being a waste of time, y'know.

First, the "objective" part of pain is known as nociception and can likely be studied in real or simulated organisms

In the future? Yes. Right now? No. We have no idea what kind of computation happens in the brain when someone experiences pain. Just because it has a name doesn't mean we have a clue.

Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.

I agree. Do you agree that pain is simpler than suffering and therefore the easier problem and more likely to be solved first?

I think we have to work on formalizing qualia in general before we can make progress in understanding "computational suffering" specifically.

I know I can suffer. If a simple WBE is made from my brain it inherits similarities to my brain and this is evidence it can suffer, the same way a complex mammalian brain has similarities to my brain and this is evidence it can suffer. Do you find these ideas objectionable? What do you mean by formalizing qualia?

I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.

Could be so. I'm not defending the paper, and I suggest you shouldn't assume that everyone who reads your comment about it has read it.

Comment author: shminux 05 December 2013 05:52:45PM -3 points [-]

This exchange does not seem to be going anywhere, so I'll just leave my final comments before disengaging, feel free to do likewise.

  • The paper draft is an interesting and comprehensive survey of views on em suffering and related (meta)ethics

  • It does not do a good job defining its subject matter and thus does not advance the field of em ethics

One potential avenue of progress in em ethics and "em rights" is to define suffering in an externally measurable way for various levels of em complexity and architecture.

Comment author: joaolkf 05 December 2013 03:04:57AM 1 point [-]

Never came across this draft. Is it new? (Though he has been working on it for quite some time...) I will take a look at it. But beforehand, my general view on simulations/emulations is that even purely non-agency statistical simulations of an agent's behaviour, if precise enough, would contain what matters for suffering/pleasure. Memories, feelings, thoughts and so on would all be shattered across many, many variables, but the correlations which would have to hold between all of these might still guarantee there would be a (perhaps sentient) agent there.

Comment author: summerstay 05 December 2013 02:27:33PM 1 point [-]

I found the draft via this post from the end of June 2013.