Comment author: pjeby 08 May 2015 07:49:48PM 11 points

yeah, that's not going to help

It won't help the situation, but it might help you to better handle the situation. The useful thing about "prayer" isn't that it actually calls down any outside help, but that it forces you to clarify your own thoughts regarding what you want and what would be useful... in much the same way that problem solving is made easier by explaining the problem to somebody else.

Verbal communication forces you to serialize your thoughts: to disassemble what may be a vague or complex structure of interconnecting impulses, ideas, mental models, etc., and then encode it in an organized stream for another mind to decode into a similar structure. But the process of doing this forces you to re-encode it as well.

So don't stop using a useful technique for organizing your thoughts, just because there isn't an actual mind on the other end of the encoding process (except maybe yours). Programmers have been known to "rubber duck", i.e., use a literal or figurative rubber duck as the thing to talk to. You're not going to commit some sort of atheist sin by using an imaginary sky deity as your rubber duck. Or ask the Flying Spaghetti Monster to touch you with His Noodly Appendage to grant you the clarity and wisdom you seek. The value of an invocation comes from its invoker, not its invokee.

Comment author: elspood 14 May 2015 12:24:04AM -1 points

Reading this reply, I was immediately reminded of a situation described by Jen Peeples, I think in an episode of The Atheist Experience, about her co-pilot resorting to prayer during a life-threatening helicopter incident. (This comment is all I could find as a reference.)

Unless your particular prayer technique is useful for quickly addressing emergency situations, you probably don't want to be in the habit of relying on it as a general practice. I think the "rubber duck" Socratic approach could still be useful, so this isn't a disagreement with your entire comment, just a warning about possible failure modes.

Comment author: Eliezer_Yudkowsky 20 January 2009 09:22:22AM 14 points

Steppenwolf, I thought about "north" and "south" but I didn't want any arguments over who got to be on top. So I used "east" and "west" instead.

In response to your main point... either (a) you're sympathizing with something nonsentient that doesn't actually have any feelings - either deceiving yourself into caring about a person who doesn't exist, or changing the value itself. Or (b) you're losing out not only on present human sympathy, but on future extensions of sympathy, the telepathic bond between lovers a la Mercedes Lackey and/or Greg Egan.

Being in a holodeck, and knowing that the people around you aren't real, has to change either your feelings or your values. That's the problem with the volcano lair, if there's no one there who's real except you. That's the simplicity I fear.

Nazgul, the "comparative standard of living" thing is one of the few parts of human nature that I would seriously consider eliminating outright (see Continuous Improvement). But the environmental solution would be, indeed, nonsentient human-shaped entities of lower status, to tell your brain that you're in the elite. Though I don't know if that works - we may have a brain category for nonpeople we don't even compete with hedonically.

Comment author: elspood 19 February 2015 09:26:31PM 0 points

Isn't there a separate axis for every aspect of human divergence? Maybe this was already explicit in asking whether there is anything more complicated than romance for "multiplayer" relationships, but really this problem seems fully general: politics, religion, food, or any other preference that has a distribution among humans could be a candidate for creating schism (or indeed all axes at once). "Catgirl for romance" is one very specific failure mode, but the general one could be called "an echo chamber for every mind".

The expected result (for a mind that knows the genesis of the catpeople) is that eventually the catpeople will get boring, but Fun Theory still ought to allow for exploration of that territory as long as it allows a safe path of retreat back into the world of other minds. The important thing here seems to be that we must never be allowed to have catpeople without knowing their true nature (which seems to be a form of wireheading).

Comment author: elspood 11 August 2013 06:56:47PM 2 points

It was hard to muster a proper sense of indignation when you were confronting the same dignified witch who, twelve years and four months earlier, had given both of you two weeks' detention after catching you in the act of conceiving Tracey.

Given the fact that there is a Tracey, then that act of conception must have completed. So, either McGonagall caught them at exactly the right moment, or the Davises had just kept on going after they were caught...

No matter how it happened, this scene must have played out hilariously.

In response to comment by [deleted] on Philosophical Landmines
Comment author: TheOtherDave 09 February 2013 02:56:20PM 8 points

Do you have a real example of deontology outperforming consequentialism IRL?

I'm not sure what that would look like. If consequentialism and deontology shared a common set of performance metrics, they would not be different value systems in the first place.

For example, I would say "Don't torture people, no matter what the benefits of doing so are!" is a fine example of a deontological injunction. My intuition is that people raised with such an injunction are less likely to torture people than those raised with the consequentialist equivalent ("Don't torture people unless it does more good than harm!"), but as far as I know the study has never been done.

Supposing it is true, though, it's still not clear to me what is outperforming what in that case. Is that a point for deontological injunctions, because they more effectively constrain behavior independent of the situation? Or a point for consequentialism, because it more effectively allows situation-dependent judgments?

Comment author: elspood 11 February 2013 07:43:16PM 1 point

If consequentialism and deontology shared a common set of performance metrics, they would not be different value systems in the first place.

At least one performance metric that allows for the two systems to be different is: "How difficult is the value system for humans to implement?"

Comment author: nshepperd 03 February 2013 02:42:58PM * 0 points
  1. You can't just multiply B by some probability factor. For the situation where you have p(B) = x, p(C) = 1 - x, your expected utility would be xB + (1-x)C. But xB by itself is meaningless, or equivalent to the assumption that the utility of the alternative (which has probability 1 - x) is the magic number 0. "1/400 chance of a whale day" is meaningless until you define the alternative that happens with probability 399/400.

  2. For the purpose of calculating xB + (1-x)C you obviously need to know the actual values, and hence magnitudes of x, B and C. Similarly you need to know the actual values in order to calculate whether A < B or not. "Radiation poisoning for looking at magnitude of utility" really means that you're not allowed to compare utilities to magic numbers like 0 or 1. It means that the only thing you're allowed to do with utility values is a) compare them to each other, and b) obtain expected utilities by multiplying by a probability distribution.
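Concretely, those two allowed operations look something like this (a minimal sketch of my own; the function name and the numbers are illustrative, not taken from the post):

```python
# A lottery is a full probability distribution over outcomes; "xB" alone
# is not a lottery, because it leaves 1 - x of the probability undefined.
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u for p, u in lottery)

# "1/400 chance of a whale day" only means something once we say that the
# other 399/400 is an ordinary day (utility 0 on this rescaled scale).
whale, ordinary, sandwich = 1.0, 0.0, 1 / 500
gamble = [(1 / 400, whale), (399 / 400, ordinary)]

# The only licensed use of the numbers: compare expected utilities.
prefers_gamble = expected_utility(gamble) > sandwich
```

On this scale the comparison favors the gamble, since 1/400 > 1/500.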

Comment author: elspood 03 February 2013 09:51:31PM * 1 point

[edited out emotional commentary/snark]

  1. If you can't multiply B by a probability factor, then it's meaningless in the context of xB + (1-x)C as well. xB by itself isn't meaningless; it roughly means "the expected utility on a normalized scale between the utility of the outcome I least prefer and the outcome I most prefer". nyan_sandwich even agrees that 0 and 1 aren't magic numbers; they're just rescaled utility values.
  2. I'm 99% confident that that's not what nyan_sandwich means by radiation poisoning in the original post, considering the fact that comparing utilities to 0 and 1 is exactly what he does in the hell example. If you're not allowed to compare utilities by magnitude, then you can't obtain an expected utility by multiplying by a probability distribution. Show the math if you think you can prove otherwise.

It's getting hard to reference back to the original post because it keeps changing with no annotations to highlight the edits, but I think the only useful argument in the radiation poisoning section is: "don't use units of sandwiches, whales, or orgasms because you'll get confused by trying to experience them". However, I don't see any good argument for not even using Utils as a unit for a single person's preferences. In fact, using units of Awesomes seems to me even worse than Utils, because it's easier to accidentally experience an Awesome than a Util. Converting from Utils to unitless measurement may avoid some infinitesimal amount of radiation poisoning, but it's no magic bullet for anything.

Comment author: [deleted] 02 February 2013 09:51:06AM 1 point

if my utility function violates transitivity or other axioms of VNM

then it's not a utility function in the standard sense of the term.

In response to comment by [deleted] on Pinpointing Utility
Comment author: elspood 02 February 2013 08:24:02PM 0 points

I think what you mean to tell me is: "say 'my preferences' instead of 'my utility function'". I acknowledge that I was incorrectly using these interchangeably.

I do think it was clear what I meant when I called it "my" function and talked about it not conforming to VNM rules, so this response felt tautological to me.

Comment author: nshepperd 02 February 2013 04:16:31PM * 2 points

There's something missing here, which is that "1/400 chance of a whale day" means "1/400 chance of whale + 399/400 chance of normal day". To calculate the value of "1/400 chance of a whale day" you need to assign a utility to both a whale day and a normal day. Then you can compare the resulting expected utility to the utility of a sandwich = 1/500 (by which we mean a sandwich day, I guess?), no sweat.

The absolute magnitudes of the utilities don't make any difference. If you add N to all utility values, that just adds N to both sides of the comparison. (And you're not allowed to compare utilities to magic numbers like 0, since that would be numerology.)
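That shift-invariance is easy to check numerically (a toy check of my own, with made-up numbers):

```python
def eu(lottery):
    # Expected utility: sum of probability * utility over the outcomes.
    return sum(p * u for p, u in lottery)

whale_gamble = [(1 / 400, 1.0), (399 / 400, 0.0)]  # 1/400 whale, else normal day
sure_sandwich = [(1.0, 1 / 500)]                   # a sandwich day for certain

# Adding N to every utility adds N to both expected utilities,
# so which side of the comparison wins never changes.
for n in (0.0, 7.0, -1000.0):
    shifted_gamble = [(p, u + n) for p, u in whale_gamble]
    shifted_sandwich = [(p, u + n) for p, u in sure_sandwich]
    assert (eu(shifted_gamble) > eu(shifted_sandwich)) == (eu(whale_gamble) > eu(sure_sandwich))
```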

Comment author: elspood 02 February 2013 07:54:28PM 0 points

I notice we're not understanding each other, but I don't know why. Let's step back a bit. What problem is "radiation poisoning for looking at magnitude of utility" supposed to be solving?

We're not talking about adding N to both sides of a comparison. We're talking about taking a relation where we are only allowed to know that A < B, multiplying B by some probability factor, and then trying to make some judgment about the new relationship between A and xB. The rule against looking at magnitudes prevents that. So we can't give an answer to the question: "Is the sandwich day better than the expected value of 1/400 chance of a whale day?"

If we're allowed to compare A to xB, then we have to do that before the magnitude rule goes into effect. I don't see how this model is supposed to account for that.

Comment author: [deleted] 02 February 2013 06:29:44AM 0 points

see this

tl;dr: don't dereference "awesome" in verbal-logical mode.

In response to comment by [deleted] on Pinpointing Utility
Comment author: elspood 02 February 2013 07:24:36PM 0 points

It's too late for me. It might work to tell the average person to use "awesomeness" as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared.

You can't tell me now to go back and revert to my original version of awesome unless you have a supply of blue pills whenever I need them.

If the power of this tool evaporates as soon as you start investigating it, that strikes me as a rather strong point of evidence against it. It was fun while it lasted, though.

In response to comment by [deleted] on Pinpointing Utility
Comment author: [deleted] 02 February 2013 04:42:18PM -2 points

That said, it's interesting that people react to the thought of rape and torture, but not the universe getting paperclipped, which is many many orders of magnitude worse.

I get more angry at a turtle getting thrown against the wall than I do at genocides... I guess some things just hit you hard out of proportion to their actual value.

Ooops, you tried to feel a utility. Go directly to type theory hell; do not pass go, do not collect 200 utils.

In response to comment by [deleted] on Pinpointing Utility
Comment author: elspood 02 February 2013 07:07:06PM * 0 points

Ooops, you tried to feel a utility. Go directly to type theory hell; do not pass go, do not collect 200 utils.

I don't think this example is evidence against trying to 'feel' a utility. You didn't account for scope insensitivity and the qualitative difference between the two things you think you're comparing.

You need to compare the feeling of the turtle thrown against the wall to the cumulative feeling when you think about EACH individual beheading, shooting, orphaned child, open grave, and every other atrocity of the genocide. Thinking about the vague concept "genocide" doesn't use the same part of your brain as thinking about the turtle incident.

Comment author: [deleted] 02 February 2013 06:01:59AM * 1 point

You are comparing 1/400 EU and 1/500 EU using their magnitudes

You are allowed to compare. Comparison is one of the defined operations. Comparison is how you decide which is best.

we want to have a normalized scale of utility to apply probability to.

I'm uneasy with this "normalized". Can you unpack what you mean here?

In response to comment by [deleted] on Pinpointing Utility
Comment author: elspood 02 February 2013 08:43:13AM * 0 points

What I mean by "normalized" is that you're compressing the utility values into the range between 0 and 1. I am not aware of another definition that would apply here.

Your rule says you're allowed to compare, but your other rule says you're not allowed to compare by magnitude. You were serious enough about this second rule to equate it with radiation death.

You can't apply probabilities to utilities and be left with anything meaningful unless you're allowed to compare by magnitude. This is a fatal contradiction in your thesis. Using your own example, you assign a value of 1 to whaling and 1/500 to the sandwich. If you're not allowed to compare the two using their magnitude, then you can't compare the utility of 1/400 chance of the whale day with the sandwich, because you're not allowed to think about how much better it is to be a whale.
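For what it's worth, here is what that normalization looks like with concrete (invented) raw numbers; the final line is exactly the magnitude comparison at issue:

```python
# Rescale raw utilities so the worst outcome maps to 0 and the best to 1.
def normalize(u, worst, best):
    return (u - worst) / (best - worst)

# Raw utilities on some arbitrary personal scale (hypothetical values,
# chosen so the sandwich lands at 1/500 after rescaling).
raw = {"normal day": 10.0, "sandwich day": 10.4, "whale day": 210.0}
worst, best = min(raw.values()), max(raw.values())
norm = {k: normalize(v, worst, best) for k, v in raw.items()}

# Expected utility of "1/400 chance of a whale day, otherwise a normal day".
eu_gamble = (1 / 400) * norm["whale day"] + (399 / 400) * norm["normal day"]

# Deciding between the gamble and the sure sandwich just is a magnitude
# comparison of 1/400 against 1/500; there is no way around it.
prefers_gamble = eu_gamble > norm["sandwich day"]
```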
