
Comment author: teageegeepea 23 November 2013 05:43:52AM 16 points

I tend to dismiss Steven Landsburg's critique of the standard interpretation of experiments along the lines of the Ultimatum Game, since nobody really thinks it through the way he does. But I actually did think about it when taking this survey (which is not the same as saying it affected my response).

Comment author: ygert 25 August 2013 04:54:12AM 0 points

The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.

Comment author: teageegeepea 25 August 2013 09:58:31PM 0 points

I think total utilitarianism already does that.

In response to comment by [deleted] on Humans are utility monsters
Comment author: Ghatanathoah 23 August 2013 07:40:52PM *  2 points

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, "If you prefer no monster to a happy monster why don't you kill the monster." The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be "no monster" is for it to never exist in the first place.

That still leaves the most repugnant conclusion of naive average utilitarianism: if the average utility is ultra-negative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (i.e., someone tortured 23/7) is better than creating nobody.

In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high-utility people is sometimes better than a large population of low-utility people, even if the large population's total utility is higher. "Take the average utility of the population" sounds at first like an easy and mathematically rigorous way to express that intuition, but it runs into problems once you figure out "munchkin" ways to manipulate the average, like adding moderately miserable people to a super-miserable world.
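
To make the arithmetic concrete, here is a toy sketch (the population size and the utility numbers are invented purely for illustration):

    # Toy numbers only: adding a moderately miserable person can
    # "improve" a super-miserable world under naive average utilitarianism.
    existing = [-100] * 1000   # 1000 people, all very badly off
    newcomer = -99             # slightly less badly off than everyone else

    avg_before = sum(existing) / len(existing)                    # -100.0
    avg_after = (sum(existing) + newcomer) / (len(existing) + 1)  # about -99.999

    print(avg_after > avg_before)  # True: the average went up,
                                   # even though total suffering increased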

In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn't as horrible as AU.

Comment author: teageegeepea 25 August 2013 03:34:39AM 0 points

If I kill someone in their sleep so they don't experience death, and nobody else is affected by it (maybe it's a hobo or something), is that okay under the timeless view because their prior utility still "counts"?

Comment author: teageegeepea 25 August 2013 03:23:56AM -1 points

The human vs animal issue makes more sense if we focus not on "utility" but "asskicking".

Comment author: teageegeepea 25 August 2013 03:18:57AM *  2 points

I thought #3 was the definition of "agent", which I suppose is why it got that label. #1 sounds a little like birds confronted by cuckoo parasitism, which Eliezer might call "sphexish" rather than agenty.

Comment author: teageegeepea 16 July 2013 10:54:46PM 0 points

Does the bit on Gorbachev contain any references to Timur Kuran's work on preference falsification & cascades?

Comment author: teageegeepea 16 July 2013 10:03:33PM *  1 point

2: An outside view works best when using a reference class with a similar causal structure to the thing you're trying to predict. An inside view works best when a phenomenon's causal structure is well-understood, and when (to your knowledge) there are very few phenomena with a similar causal structure that you can use to predict things about the phenomenon you're investigating. See: The Outside View's Domain.

When writing a textbook that's much like other textbooks, you're probably best off predicting the cost and duration of the project by looking at similar textbook-writing projects. When you're predicting the trajectory of the serial speed formulation of Moore's Law, or predicting which spaceship designs will successfully land humans on the moon for the first time, you're probably best off using an (intensely informed) inside view.

Are there data or experiments on when each gives better predictions, as with Kahneman's original outside-view work?

Comment author: teageegeepea 09 July 2013 11:52:52PM 1 point

There's a bloggingheads episode on the marshmallow experiment, and its variations, here.

Comment author: Maha 10 December 2012 01:23:38PM 0 points

Small terminology gripe about the fifth paragraph: "men's rights activist" is, as far as I know, that group's nomenclature of choice, while very few feminists would self-identify as "radical". Comes off as slightly non-neutral.

Comment author: teageegeepea 11 December 2012 02:34:20AM 1 point

Centrists view "radical" as a derogatory term, but I've come across lots of folks who embrace it.

Comment author: cousin_it 10 December 2012 05:45:05PM 2 points

Sorry for the labeling then. In any case, I like your writings a lot.

Comment author: teageegeepea 11 December 2012 02:32:12AM 3 points

Just because I don't like the label, doesn't mean it's inapt!
