
AdeleneDawner comments on The Danger of Stories - LessWrong

9 Post author: Matt_Simpson 08 November 2009 02:53AM




Comment author: AdeleneDawner 09 November 2009 05:01:19PM 3 points

PTSD or other long-term psychological (or physical) impairment, then - which may be sub-clinical or considered normal. An example: Punishment causes a psychological change that reduces the person's ability to do the thing that they were punished for. We don't (to the best of my knowledge) have a name for that change, but it observably happens, and when it does, the punishment has caused harm. (It may also be helpful, for example if the punished action would have reduced the person's ability to function in other ways. The two aren't always mutually exclusive. Compare it to charging someone money for a class - teaching is helpful, taking their money is harmful.)

Also, I do believe that there could be situations where someone is tortured and doesn't experience a long-term reduction in functionality, in which case, yes, the torture wasn't harmful. The generalization that torture is harmful is useful because those situations are rare, and because willingness to attempt to harm someone is likely to lead to harm, and should be addressed as such.

The most relevant point in the discussion of pain is right at the beginning - people who don't experience any pain tend to have very short or very difficult lives. That makes it obvious that being able to experience pain is useful to the experiencer, rather than net-harmful. So, even though some pain is observably harmful, some pain must be helpful enough to make up the difference. That doesn't jibe with 'pain is intrinsically harmful', unless you're using a very different definition of the word, in which case I request that you clarify how you're defining it.

Comment author: Jack 09 November 2009 06:29:35PM 2 points

Also, I do believe that there could be situations where someone is tortured and doesn't experience a long-term reduction in functionality, in which case, yes, the torture wasn't harmful.

Well, anyone else who thinks this is wrong, feel free to modus tollens away the original definition. ...

I was hoping to make my point by way of counterexample. Since you're not recognizing the counterexample, I have to go back through the whole definition and the context to see where we lost each other. But that's a mess to do, because right now this is a semantic debate. To make it not one, I need the cash value of your belief that something is harmful. Do you always try to avoid harm to yourself? Is something being harmful necessary for you to avoid it/avoid doing it to others? Is it sufficient? Does this just apply to you? All humans? AI? Animals? Plants? Thermostats? Futons? Is something other than help and harm at work in your decision making? You don't have to answer all of these, obviously, just give me an idea of what I should see if something is harmful so I can actually check to see if your definition works. Otherwise you can't be wrong.

Then we can see if "causing decreased functionality" leads to the right response in all the circumstances. For example, I think there are times where people want to limit their net functionality and are right to do so even and especially when they know what they're doing.

Comment author: AdeleneDawner 09 November 2009 07:24:55PM 0 points

Do you always try to avoid harm to yourself?

No; if I can help someone else (or my future self) enough by harming myself or risking harm, I'll do so. Example: Giving a significant sum of money to someone in need, when I don't have an emergency fund myself.

Is something being harmful necessary for you to avoid it/avoid doing it to others?

No, there are other reasons that I avoid doing things, such as to avoid inconvenience or temporary pain or offending people.

Is it sufficient?

I use a modified version of the logic that I use to determine whether I should harm myself to decide whether it's worth it to harm others. I generally try to err on the side of avoiding harming others, because it's harder to estimate the effect that a given harm will have on their life than it is to estimate its effect on mine.

Does this just apply to you? All humans? AI? Animals? Plants? Thermostats? Futons?

My definition is meant to be general enough to cover all of those, but in each case the meaning of 'function' has to be considered. Humans get to determine for themselves what it means to function. AIs' functions are determined by their programmers (not necessarily intentionally). In practice, I consider animals on a case-by-case basis; as an omnivore, it'd be hypocritical of me to ignore that I consider the function of chickens to be 'become tasty meat', but I generally consider pets and wild animals to determine their own functions. (A common assigned function for pets, among those who do assign them functions, is 'provide companionship'. Some wild animals are assigned functions, too, like 'keep this ecosystem in balance' or 'allow me to signal that I care about the environment, by existing for me to protect'.) I lump plants in with inanimate and minimally-animate objects, whose functions are determined by the people owning them, and can be changed at any time - it's harmful to chop up a futon that was intended to be sat on, but chopping up an interestingly shaped pile of firewood with some fabric on it isn't harmful.

Is something other than help and harm at work in your decision making?

In a first-order sense, yes, but in each case that I can think of at the moment, the reason behind the other thing eventually reduces to reducing harm or increasing help. Avoiding temporary pain, for example, is a useful heuristic for avoiding harming my body. Habitually avoiding temporary inconveniences leaves more time for more useful things and helps generate a reputation of being someone with standards, which is useful in establishing the kind of social relationships that can help me. Avoiding offending people is also useful in maintaining helpful social relationships.

You don't have to answer all of these, obviously, just give me an idea of what I should see if something is harmful so I can actually check to see if your definition works. Otherwise you can't be wrong.

Comment author: Jack 10 November 2009 08:40:20AM 0 points

In a first-order sense, yes, but in each case that I can think of at the moment, the reason behind the other thing eventually reduces to reducing harm or increasing help.

So if I am in extraordinary pain, it would never be helpful/not-harmful for me to kill myself or for you to assist me?

Also, where does fulfilling your function fit into this? Unless your function is just increasing functionality.

Finally, I guess you're comfortable with the fact that the function of different things is determined in totally different ways? Some things get to determine their own function while other things have people determine it for them? As far as I can tell, people determine the function of tools, and this notion that people determine their own function, while true in a sense, is just Aristotelian natural law theory rearing its ugly head. It used to be that we have purposes because God created us and instilled one in us. But if there is no God, it seems that the right response is to conclude that purpose, as it applied to humans, was a category error - not that we decide our own purpose.

Comment author: AdeleneDawner 10 November 2009 08:13:14PM * 2 points

Reminder:

I haven't gotten around to deconstructing those terms yet, but off the top of my head:

This is beta-version-level thought. It isn't surprising that it still has a few rough spots or places where I haven't noticed that I need to explain one thing for another to make sense.


Also, where does fulfilling your function fit into this? Unless your function is just increasing functionality.

Function as I'm intending to talk about it isn't something you fulfill, it's an ability you have: The ability to achieve the goals you're interested in achieving. Those goals vary not just from person to person, but also with time, whether they're achieved or not. Also, people do have more than one goal at any given time.

I have used the word 'function' in the other sense, above, mistakenly. I'll be more careful.

So if I am in extraordinary pain, it would never be helpful/not-harmful for me to kill myself or for you to assist me?

There are two overlapping types of situations that are relevant; if either one of them is true, then it's helpful to assist the person in avoiding/removing the pain. One is that the person has 'avoid pain' as a relevant goal in the given instance, and helping achieve that goal doesn't interfere with other goals that the person considers more important. The other is that the pain is signaling harmful damage to the person's body. There are situations that don't come under either of those umbrellas - certain BDSM practices, where experiencing pain is the goal, for example, or situations where doing certain things evokes pain but not actual (relevant to the individual's goals) harm, and the only way to avoid the pain is to give up on accomplishing more important goals, which is common in certain disabilities and some activities like training for a sport or running a marathon.

Whether suicide would be considered helpful or harmful in a given situation is a function of the goals of the person considering the suicide. If you're in a lot of pain, have an 'avoid pain' goal that's very important, and don't have other strong goals or the pain (or underlying damage causing the pain) makes it impossible for you to achieve your other strong goals, the answer is fairly obvious: It's helpful. If your 'avoid pain' goal is less important, or you have other goals that you consider important and that the pain doesn't prevent you from achieving, or both, it's not so obvious. Another relevant factor is that pain can be adapted to, and new goals that the pain doesn't interfere with can be generated. I leave that kind of judgment call up to the individual, but tend to encourage them to think about adapting, and take the possibility into account before making a decision, mostly because people so often forget to take that into account. (Expected objection: Severe pain can't be adapted to. My response: I know someone who has. The post where she talks about that in particular is eluding me at the moment, but I'll continue looking if you're interested.)

If it weren't illegal, or if there were a very low chance of getting caught, I'd be comfortable with helping someone commit suicide if they'd thought the issue through well, or in some cases where the person would be unable to think it through. I know not everyone thinks about this in the same way that I do: 'If they've thought the issue through well' doesn't mean 'if they've fulfilled the criteria for me to consider the suicide non-harmful'. Inflicting my way of thinking on others has the potential to be harmful to them, so I don't.

Finally, I guess you're comfortable with the fact that the function of different things is determined in totally different ways? Some things get to determine their own function while other things have people determine it for them?

There's an underlying concept there that I failed to make clear. When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. One goal of the unit 'me and my stuff' is to maintain a piece of territory that's safe and comfortable to myself and guests. My couch is functional (able to accomplish the goal) in that capacity, specifically regarding the subgoal of 'have comfortable places available for sitting'. My hands are similarly functional in that capacity, though obviously for different subgoals: I use them to manipulate other tools for cleaning and other maintenance tasks and to type the programs that I trade for the money I spend on rent, for example.

(This is based on the most coherent definition of 'ownership' that I've been able to come up with, and I'm aware that the definition is unusual; discussion is welcome.)

As far as I can tell, people determine the function of tools, and this notion that people determine their own function, while true in a sense, is just Aristotelian natural law theory rearing its ugly head. It used to be that we have purposes because God created us and instilled one in us. But if there is no God, it seems that the right response is to conclude that purpose, as it applied to humans, was a category error - not that we decide our own purpose.

I think I've already made it clear in this comment that this isn't the concept I'm working with. The closest I come to this concept is the observation that people (and animals, and possibly AI) have goals, and since those goals are changeable and tend to be temporary (with the possible exception of AIs' goals), they really are something entirely different. I also don't believe that there's any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.

Comment author: AdeleneDawner 10 November 2009 10:00:01PM 5 points

(Expected objection: Severe pain can't be adapted to. My response: I know someone who has. The post where she talks about that in particular is eluding me at the moment, but I'll continue looking if you're interested.)

Found it, or a couple related things anyway. This is a post discussing the level of pain she's in, her level of adaptation, and people's reactions to those facts, and this is a post discussing her opinion on the matter of curing pain and disability. I remember there being another, clearer post about her pain level, but I still haven't found that, and I may be remembering a discussion I had with her personally rather than a blog post. She's also talked more than once about her views on the normalization of suicide, assisted suicide, and killing for 'compassionate' reasons for people with disabilities (people in ongoing severe pain are inferred to be included in the group in question), though she usually just calls it murder (her definition of murder includes convincing someone that their life is so worthless or burdensome to others that they should commit suicide) - browsing this category turns up several of the posts.

Comment author: Jack 10 November 2009 11:12:56PM 0 points

This is beta-version-level thought. It isn't surprising that it still has a few rough spots or places where I haven't noticed that I need to explain one thing for another to make sense.

Sure. I don't mean to come on too forcefully.

Function as I'm intending to talk about it isn't something you fulfill, it's an ability you have: The ability to achieve the goals you're interested in achieving. Those goals vary not just from person to person, but also with time, whether they're achieved or not. Also, people do have more than one goal at any given time.

So help and harm aren't the only things in your decision-making, there are also goals. What is the relation between the two? Why can't the help-harm framework be subsumed under "goals"? This question is especially salient if "goals" is just going to be the box where you throw in everything relevant to decision making and ethics that doesn't fit with your concepts of help and harm.

Also, something that might make where I'm coming from clearer: when I was using "pain" before I meant it generally, not just in a physical sense. So I just read these examples about BDSM or needing pain to avoid early death as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I'll use the word "suffering" in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.

When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. [...] (This is based on the most coherent definition of 'ownership' that I've been able to come up with, and I'm aware that the definition is unusual; discussion is welcome.)

So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object, much less give them the same goals their owner has. It would also be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property). It also seems to me that possessing a reputation and possessing a car are two totally different things, and that the fact that we use the same verb "have" to refer to someone's reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this). So yes, I'd like to read a development of this notion of ownership if you want to provide one.

I also don't believe that there's any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.

A world in which 90% of goals were achieved wouldn't be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less functionality? We might have to step back further and see if we even agree on what morality is.

Comment author: AdeleneDawner 11 November 2009 12:39:51AM 0 points

So help and harm aren't the only things in your decision-making, there are also goals. What is the relation between the two?

'Helpful' and 'harmful' are words that describe the effect of an action or circumstance on a person(+their stuff)'s ability to achieve a goal.

Why can't the help-harm framework be subsumed under "goals"?

This question doesn't make sense to me - 'help and harm' and 'goals' are two very different kinds of things, and I don't see how they could be lumped together into one concept.

This question is especially salient if "goals" is just going to be the box where you throw in everything relevant to decision making and ethics that doesn't fit with your concepts of help and harm.

This area of thought is complex, and my way of approaching it does view a lot of that complexity as goal-related, but the point of that isn't to handwave the complexity, it's to put it where I can see it and deal with it.

Also, something that might make where I'm coming from clearer: when I was using "pain" before I meant it generally, not just in a physical sense. So I just read these examples about BDSM or needing pain to avoid early death as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I'll use the word "suffering" in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.

Suffering is experiencing pain that the experiencer wanted to avoid - pain that's addressed by their personal 'avoid pain' goal. We don't disagree about those instances of experienced pain being harmful. Not all experiences of pain are suffering, though.

Also, people with CIPA are unable to experience the qualia of pain just as people who are blind are unable to experience the qualia of color. If you're considering the injuries they sustain as a result painful, we're not defining the word in the same way.

So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object much less give them the same goals their owner has.

It's only bizarre if you're considering the owned thing and the owner as separate entities. Considering them the same for some purposes (basically anything involving the concept of ownership as our society uses it) is the only way of looking at it that I've found that adds back up to normality.

Does your body share your goals?

It also would be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property).

It's a function of discussing owned things, not of discussing decision-making. Un-owned things that aren't alive don't have goals. (Or they may in a few cases retain the relevant goal of the person who last owned them - a machine will generally keep working when its owner dies or loses it or disowns it.)

It also seems to me that possessing a reputation and possessing a car are two totally different things and that the fact that we use the same verb "have" to refer to someone's reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this).

The difference between those concepts may be relevant elsewhere, but I don't think it's relevant in this case.

So yes, I'd like to read a development of this notion of ownership if you want to provide one.

I'll put it on my to-do list.

A world in which 90% of goals were achieved wouldn't be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less functionality? We might have step back further and see if we even agree what morality is.

It depends what you mean by 'better', and for the most part I choose not to define or use the word except in situations with sufficient context to use it in phrases like 'better for' or 'better at'.