In response to comment by MugaSofer on On Caring
Comment author: dthunt 17 October 2014 06:08:22PM 2 points [-]

Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.

It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.

In response to comment by dthunt on On Caring
Comment author: MugaSofer 18 October 2014 05:32:29PM 2 points [-]

That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.

In response to comment by dthunt on On Caring
Comment author: MugaSofer 18 October 2014 05:07:07PM *  0 points [-]

Double-post.

In response to comment by MugaSofer on On Caring
Comment author: SaidAchmiz 13 October 2014 04:21:48PM 1 point [-]

you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

Why should I act this way?

In response to comment by SaidAchmiz on On Caring
Comment author: MugaSofer 18 October 2014 04:02:07PM *  -1 points [-]

To better approximate a perfectly-rational Bayesian reasoner (with your values).

Which, presumably, would be able to model the universe correctly, complete with large numbers.

That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.
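(For anyone who wants to check that claim rather than take it on faith, here's a minimal simulation sketch - assuming the standard Monty Hall setup of three doors, one prize, and a host who always opens a losing door you didn't pick - showing that switching wins about 2/3 of the time:)

    import random

    def monty_hall_trial(switch):
        # One prize behind one of three doors, chosen uniformly at random.
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            # Switch to the single remaining unopened door.
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == prize

    trials = 100_000
    stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
    swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
    print("stay:  ", stay)   # comes out near 1/3
    print("switch:", swap)   # comes out near 2/3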

In response to On Caring
Comment author: Kyrorh 09 October 2014 06:20:47AM 2 points [-]

Many of us go through life understanding that we should care about people suffering far away from us, but failing to.

That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.

If anybody knows how to fix that, please tell me.

In response to comment by Kyrorh on On Caring
Comment author: MugaSofer 10 October 2014 01:57:39PM -2 points [-]

I think this is the OP's point - there is no (human) mind capable of caring on that scale, because human brains aren't capable of modelling numbers that large properly. And if you can't contain such a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".

So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.

In response to On Caring
Comment author: kilobug 09 October 2014 12:27:04PM 12 points [-]

Interesting article; it sounds like a very good introduction to scope insensitivity.

Two points where I disagree:

  1. I don't think birds are a good example of it, at least not for me. I don't care much about individual birds. I definitely wouldn't spend $3, nor any significant time, to save a single bird. I'm not a vegetarian, so it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird deaths, damage to natural reserves, threats to a whole species, ... So a massive death of birds is something I'm ready to invest resources to prevent, but not the death of a single bird.

  2. I know it's quite taboo here, and most will disagree with me, but to me the answer to problems this big is not charity, not even "efficient" charity (which seems a very good idea on paper, though I'm quite skeptical about its reliability in practice), but structural change - politics. I can't help noticing that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another person often labeled "especially virtuous", Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."

In response to comment by kilobug on On Caring
Comment author: MugaSofer 10 October 2014 01:47:24PM 4 points [-]

I'm not a vegetarian, so it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner.

This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?

(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)

Comment author: Adele_L 26 September 2014 05:22:05PM *  2 points [-]

You should send a message to Viliam Bur.

Comment author: MugaSofer 27 September 2014 11:33:47AM 0 points [-]

Thank you!

Comment author: MugaSofer 26 September 2014 03:24:52PM 1 point [-]

So ... I suspect someone might be doing that mass-downvote thing again. (To me, at least.)

Where do I go to inform a moderator so they can check?

Comment author: MugaSofer 26 September 2014 03:13:55PM 0 points [-]

Hey, I've listened to a lot of ideas labelled "dangerous", some of which were labelled "extremely dangerous". Haven't gone crazy yet.

I'd definitely like to discuss it with you privately, if only to compare your idea to what I already know.

Comment author: KnaveOfAllTrades 06 September 2014 07:45:47PM 2 points [-]

I'm not sure if it's because I'm Confused, but I'm struggling to understand whether you are disagreeing, and if so, where your disagreement lies and how the parent comment in particular relates to that disagreement or to the great-grandparent. I have a hunch that being more concrete and giving specific, minimally-abstract examples would help in this case.

Comment author: MugaSofer 07 September 2014 04:56:31PM *  0 points [-]

I'm saying that if Sleeping Beauty's goal is to better understand the world by performing a Bayesian update on the evidence, then I think this is itself a form of "payoff" - one that gives Thirder results.

From If a tree falls on Sleeping Beauty...:

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized after the experiment.

In this case it is optimal to bet 1/3 that the coin came up heads, 2/3 that it came up tails: [snip table]
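(To see where the 1/3 comes from under that scoring rule: a heads world yields one interview scored log(p), a tails world yields two interviews each scored log(1-p), so with a fair coin the expected aggregate score is 0.5*log(p) + log(1-p), which is maximized at p = 1/3. A minimal numerical check, sketched in Python under those assumptions:)

    import numpy as np

    # Expected aggregate log score for reporting credence p in heads,
    # assuming a fair coin: heads -> one interview scored log(p),
    # tails -> two interviews, each scored log(1 - p).
    def expected_score(p):
        return 0.5 * np.log(p) + 0.5 * 2 * np.log(1 - p)

    ps = np.linspace(0.001, 0.999, 9999)
    print(ps[np.argmax(expected_score(ps))])  # ~0.333, the Thirder answer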

Comment author: KnaveOfAllTrades 06 September 2014 06:48:09PM *  1 point [-]

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

Can you give a concrete example of what you see as an example of where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write 'halfer' and 'thirder' computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.

Comment author: MugaSofer 06 September 2014 07:10:14PM *  0 points [-]

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

OK, well by analogy, what's the "payoff structure" for nuclear anthropics?

Obviously, we can't prevent it after the fact. The payoff we get for being right is in the form of information: a better model of the world.

It isn't perfectly analogous, but it seems to me that "be right" is most analogous to the Thirder payoff matrix for Sleeping-Beauty-like problems.
