
mwengler comments on Open thread, August 19-25, 2013 - Less Wrong Discussion

2 Post author: David_Gerard 19 August 2013 06:58AM


Comment author: mwengler 21 August 2013 06:50:29PM *  4 points [-]

We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
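(For readers unfamiliar with the notation: 3^^^3 is Knuth's up-arrow notation, a tower of iterated exponentials far too large to compute. A minimal sketch of the recursion, runnable only for tiny arguments:)

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a ↑^n b. n=1 is ordinary exponentiation;
    each higher n iterates the operation one level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^^3 = 3^(3^3) = 3^27 = 7625597484987.
# 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 7625597484987 levels
# tall -- hopelessly beyond any physical computation.
print(up_arrow(3, 2, 3))  # 7625597484987
```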

What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?

Or even within humans: is it human-years we would count in coming up with moral equivalencies? Do we discount humans who are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Discount unproductive humans against productive ones? And what about sims? If it is human-years we count rather than human lives, does a sim which might be expected to run for more than a trillion subjective years in simulation carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or uploading?

And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?

What about a really big, smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth of the human population?

Comment author: wedrifid 22 August 2013 02:26:19AM 3 points [-]

What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?

I observe that the answer to the last question is not constrained to be positive.

Comment author: Randy_M 23 August 2013 03:49:15PM 4 points [-]

"Letting those people die was worth it, because they took their cursed yapping poodle with them!"

(quote marks to indicate not my actual views)

Comment author: David_Gerard 21 August 2013 07:06:01PM 0 points [-]

Do the nervous systems of 3^^^3 nematodes beat the nervous systems of a mere 7x10^9 humans? If not, why not?

Comment author: Eliezer_Yudkowsky 21 August 2013 07:55:00PM 6 points [-]

I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.

Comment author: ahbwramc 22 August 2013 11:28:20PM 5 points [-]

I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?

Comment author: Eliezer_Yudkowsky 23 August 2013 12:58:16AM 7 points [-]

Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.

Comment author: MugaSofer 23 August 2013 03:57:07PM *  -1 points [-]

It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero.

And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?

(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)

Comment author: Eliezer_Yudkowsky 23 August 2013 06:46:54PM 3 points [-]

This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.

Comment author: shminux 23 August 2013 06:59:24PM 0 points [-]

Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?

Comment author: Eliezer_Yudkowsky 23 August 2013 08:34:30PM 10 points [-]

No, but I strongly suspect that all Earthly life without a frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions: I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat (because optimizing my diet is more important and my civilization lets me get away with it), I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no (though there are non-sentience-related reasons for me to care in that case), and pigs I am genuinely unsure of.

Comment author: Eliezer_Yudkowsky 24 August 2013 12:54:16AM 6 points [-]

To be clear, I am unsure whether pigs are objects of value, which incorporates empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness, to the extent I can imagine that coherently. I can imagine that there is a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop immediately to zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50%, though they are not independent.

However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.

Comment author: Emile 25 August 2013 12:11:03PM *  1 point [-]

I do not eat anything that recognizes itself in a mirror.

Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if many humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better.

(this is not a rhetorical question)

Comment author: fubarobfusco 25 August 2013 11:37:53PM 0 points [-]

Does it matter to you that octopuses are quite commonly cannibalistic?

Comment author: Bakkot 24 August 2013 06:48:08PM 1 point [-]

The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.

But needn't be! See for example f(x) = exp(-1/x) for x > 0, and f(x) = 0 for x ≤ 0.

Wikipedia has an analysis.

(Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)
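(A quick numeric check of Bakkot's example, using nothing beyond the standard library: the function and its derivative both vanish faster than any power of x as x → 0+, so the exponential piece glues infinitely smoothly onto the zero half-line.)

```python
import math

def f(x):
    """Bakkot's bump-edge function: infinitely differentiable everywhere,
    yet identically 0 for all x <= 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def f_prime(x):
    """Analytic derivative for x > 0: f'(x) = exp(-1/x) / x^2.
    One-sided limits at 0 give f'(0) = 0, so f is differentiable there too."""
    return math.exp(-1.0 / x) / x**2 if x > 0 else 0.0

# As x approaches 0 from the right, both f and f' collapse toward 0:
for x in (1.0, 0.1, 0.01):
    print(x, f(x), f_prime(x))
```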

Comment author: Eliezer_Yudkowsky 24 August 2013 07:11:47PM 1 point [-]

Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.

Comment author: Armok_GoB 27 August 2013 08:09:04PM 0 points [-]

I dispute that; the paperclip is almost certainly either more or less likely to become a Boltzmann brain than an equivalent volume of vacuum.

Comment author: David_Gerard 21 August 2013 10:22:52PM *  1 point [-]

But zero is not a probability.

Edit: Adele_L is right, I was confusing utilities and probabilities.

Comment author: Adele_L 22 August 2013 12:04:33AM 13 points [-]

Zero is a utility, and utilities can even be negative (e.g. if Eliezer hated nematodes).

Comment author: MugaSofer 23 August 2013 03:40:50PM 0 points [-]

... are you pointing out that there is a nonzero probability that Eliezer's CEV actually cares about nematodes?

Comment author: David_Gerard 24 August 2013 04:15:40PM 1 point [-]

No, Adele_L is right, I was confusing utilities and probabilities.

Comment author: MugaSofer 23 August 2013 03:40:07PM 0 points [-]

... really?

Um, that strikes me as very unlikely. Could you elaborate on your reasoning?

Comment author: Armok_GoB 27 August 2013 08:04:25PM *  0 points [-]

The keyword here is believe. What probability do you assign?

And if you say epsilon or something like that, is the epsilon bigger or smaller than 1/(3^^^3/10^100)?