Comment author: Peterdjones 17 December 2012 04:37:49PM 1 point [-]

"some other M" will still count as morality for many purposes, because self-serving ideas ("be loyal to the Geniralissimo", "obey your husband") are transmitted thorugh the same memetic channels are genuine morality. Morality is already blurred with disgust reactions and tribal shibboleths.

Comment author: PeterisP 19 December 2012 11:57:49AM 0 points [-]

What is the difference between "self-serving ideas" as you describe them, "tribal shibboleths", and "true morality"?

What if "Peterdjones-true-morality" is "PeterisP-tribal-shibboleth", and "Peterdjones-tribal-shibboleth" is "PeterisP-true-morality"?

Comment author: RichardKennaway 11 December 2012 12:50:51PM 2 points [-]

Having settled the meta-ethics, will you have anything to say about the ethics? Concrete theorems, with proofs, about how we should live?

Comment author: PeterisP 19 December 2012 11:52:04AM 2 points [-]

I'm afraid that no nontrivial metaethics can result in concrete universal ethics - the context would still be individual, and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".

The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "how Clippy should maximize paperclips", but still.

Comment author: MugaSofer 27 November 2012 09:00:48PM 0 points [-]

Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Comment author: PeterisP 27 November 2012 09:31:38PM *  0 points [-]

Another situation that has some parallels and may be relevant to the discussion.

Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids when that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. That is, the goodness of resource redistribution depends on resource scarcity, and hurting your in-group is forbidden even with good intentions.

This may be because I was partially brought up by people who actually experienced starvation and had relatives starve to death (the WW2 aftermath and all that), but I'd guess that their opinion is more fact-based than mine and that they had definitely put more thought into it than I have, so until/unless I analyze it further, I should probably accept that prior.

Comment author: MugaSofer 27 November 2012 09:00:48PM 0 points [-]

Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Comment author: PeterisP 27 November 2012 09:18:56PM *  0 points [-]

That is so - though it depends on the actual chances; "much higher chance of survival" is different from "higher chance of survival" (see the toy sketch after the list below).

But my point is that:

a) I might [as I currently think] rationally desire that everyone in my in-group adopt such a belief mode - I would have a higher chance of survival if those close to me preferred me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".

b) As far as I understand, Homo sapiens generally do have such an attitude, judging by evolutionary psychology research and by actual observations of mothers/caretakers who have had to choose which kids to save in fires and the like.

c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved were twins of a third party), if one had been entrusted to me or I had casually agreed to watch him, I'd be strongly compelled to save that one first, even if the chances of survival suggested (to an extent) otherwise. And for my own kids, naturally, I have a duty to take care of them that I don't have for 99.999% of other kids - even if I didn't love them, I'd still have that duty.
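
To make that threshold concrete, here is a toy sketch; the weights and probabilities are invented purely for illustration and come from nowhere in the thread:

```python
# Toy model: choose whom to save by comparing probability-weighted utilities.
# w_mine and w_other are hypothetical personal valuations; only their ratio
# matters for the decision.

def whom_to_save(p_mine, p_other, w_mine=10.0, w_other=1.0):
    """Pick the child whose survival chance times personal weight is higher."""
    return "mine" if p_mine * w_mine >= p_other * w_other else "other"

# A merely "higher chance" for the other child need not flip the decision...
print(whom_to_save(p_mine=0.5, p_other=0.7))   # -> mine (5.0 vs 0.7)
# ...but a "much higher chance" eventually does:
print(whom_to_save(p_mine=0.05, p_other=0.9))  # -> other (0.5 vs 0.9)
```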

Comment author: MugaSofer 27 November 2012 05:25:41PM 0 points [-]

... oh.

That seems ... kind of evil, to be honest.

Comment author: PeterisP 27 November 2012 08:51:10PM 0 points [-]

OK, then I feel confused.

Regarding " if I have to choose wether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so and is generally considered sociopathic/evil ?

Comment author: MugaSofer 27 November 2012 12:32:22AM 0 points [-]

"I would value the suffering of my child as more important than the suffering of your child. And vice versa."

To be clear, you are arguing that this is a bias to be overcome, yes?

"I've given it some thought, but I can't see how to get to an order-of-magnitude estimate that would feel reasonable to me."

Scope insensitivity?

Comment author: PeterisP 27 November 2012 12:11:05PM *  0 points [-]

No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.

I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals when calculating "value of entity_X's suffering to me". They are clearly not equal; they differ by order(s) of magnitude.

"General value of entity_X's suffering" is a different measurement - but when making my decisions (such as in the original discussion on which charities would be the most rational [for me] to support), I don't want to use the general values, but the values as they apply to me.

Comment author: MTGandP 11 November 2012 03:53:50AM 3 points [-]

I think most people give way too small a multiplier to the weight of animal suffering. A non-human animal may not be able to suffer in all the same ways that a human can, but it is still sufficiently conscious that its experiences in a factory farm are probably comparable to what a human's experiences would be in the same situation.

Comment author: PeterisP 26 November 2012 11:33:35PM 0 points [-]

What would be the objective grounds for such a multiplier? Not all suffering is valued equally. Excluding self-suffering (which is subjectively very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.

So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between the multipliers for various beings - if the average Homo sapiens is evaluated with a coefficient of 1, then some people (like your close relatives or friends) would be >1, and some would be <1. Animals (to me) would clearly be <1, as illustrated by a simple dilemma - if I had to choose between killing a cow to save a random man and killing a random man to save a cow, I'd favor the man in all cases without much hesitation.

So an important question is: what would be a reasonable basis for quantitatively comparing a human life versus (as an example) cow lives - one to ten? one to a thousand? one to all the cows in the world? Frankly, I've got no idea. I've given it some thought, but I can't see how to get to an order-of-magnitude estimate that would feel reasonable to me.
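
To show how much rides on that multiplier, here is a toy version of the charity comparison; the charities, costs, and head counts are entirely hypothetical:

```python
# Suffering averted per dollar, measured in "average human" units.
def human_equivalents_per_dollar(beings_helped, cost, multiplier):
    return beings_helped * multiplier / cost

# Hypothetical: $5000 saves 1 human, or spares 100 cows.
human_charity = human_equivalents_per_dollar(1, 5000, multiplier=1.0)

for m in (1 / 10, 1 / 1000, 1 / 100000):  # candidate cow multipliers
    cow_charity = human_equivalents_per_dollar(100, 5000, multiplier=m)
    verdict = "beats" if cow_charity > human_charity else "loses to"
    print(f"cow multiplier {m}: cow charity {verdict} the human charity")
```

With these made-up numbers the ranking flips somewhere between one-to-ten and one-to-a-thousand, which is exactly why the order of magnitude matters.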

Comment author: DuncanS 22 November 2012 12:56:33AM *  -2 points [-]

Because of what you can do with a train of thought.

"That mammoth is very dangerous, but would be tasty if I killed it."

"I could kill it if I had the right weapon"

"What kind of weapon would work?"

As against.... "That mammoth is very dangerous - run!"

Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.
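
As a minimal illustration of that insight (a sketch of the standard computability result, not anything from the comment itself): a tiny counter-machine interpreter whose only primitives are memory updates, a loop, and a conditional jump - and that is already enough to be Turing-complete, given unbounded registers and time:

```python
# Counter machine: registers plus three instruction types.
#   ("inc", r)       - add 1 to register r             (memory)
#   ("dec", r)       - subtract 1 from register r      (memory)
#   ("jz", r, addr)  - jump to addr if register r is 0 (condition)
# The while-loop over the program counter supplies the loop.

def run(program, regs):
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] += 1
        elif op[0] == "dec":
            regs[op[1]] -= 1
        elif op[0] == "jz" and regs[op[1]] == 0:
            pc = op[2]
            continue
        pc += 1
    return regs

# Example program: add register 1 into register 0 (register 2 stays 0,
# so ("jz", 2, 0) acts as an unconditional jump back to the top).
add = [("jz", 1, 4), ("dec", 1), ("inc", 0), ("jz", 2, 0)]
print(run(add, [3, 2, 0]))  # -> [5, 0, 0]
```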

If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.

Comment author: PeterisP 26 November 2012 11:52:14AM 2 points [-]

The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (such as retrieving an otherwise inaccessible treat) seem to demonstrate such chains of thought.

Comment author: [deleted] 25 November 2012 04:56:16PM *  6 points [-]

A better statement of this idea would be "If the probability of X is p(X), I want the proportion of people who tell me X is true to be p(X)".

Er... if p(anthropogenic global warming is occurring | all publicly available evidence) is 85%, I'm not sure that what I want is for 85% of people to tell me anthropogenic global warming is occurring and 15% to tell me it's not.

Comment author: PeterisP 26 November 2012 11:19:55AM *  4 points [-]

Why not?

Of course, the best proportion would be 100% of people telling me that p(warming) = 0.85; but if we limit outside opinions to simple yes/no statements, then having 85% say 'yes' and 15% say 'no' seems far more informative than 100% saying 'yes' - the latter would lead me to very wrongly assume that p(warming) is the same as p(2+2=4).
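
A quick simulation of that intuition, under the strong (and purely illustrative) assumption that each person independently answers 'yes' with probability equal to the true p:

```python
import random

def reported_yes_fraction(p_true, n_people=10000, seed=0):
    """Fraction of yes/no reporters saying 'yes' when each says 'yes'
    with probability p_true."""
    rng = random.Random(seed)
    yes = sum(rng.random() < p_true for _ in range(n_people))
    return yes / n_people

print(reported_yes_fraction(0.85))  # ~0.85: the listener can recover p(warming)
print(reported_yes_fraction(1.00))  # 1.0: indistinguishable from p(2+2=4)
```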

Comment author: TraderJoe 06 July 2012 08:11:32AM 5 points [-]

The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate.

Is there really anything exceptional in the 3% figure? 3% of people facing a player who chose "Foe" preferred to transfer money from the game show owners to that player. 97% preferred the game show owners to keep the money. If anything, 3% is below what I would have expected. More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.

Comment author: PeterisP 06 July 2012 10:22:34PM 9 points [-]

The participants don't know the rules, and have been given a hint that they don't know the rules - the host said that the choices would be independent/hidden, but then tells you the other contestant's choice. So they can easily assume there's a chance that the host is lying, or that the host might then give the first contestant a chance to switch his choice, etc.
