
Open Thread, February 15-29, 2012

3 Post author: OpenThreadGuy 15 February 2012 06:00AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (194)

Comment author: DanielLC 15 February 2012 07:43:34AM 11 points [-]

I notice overconfidence bias and risk aversion seem to operate in opposite directions. Like, there's a 90% chance of something being true, you say it's 99% likely, and then you bet at 9 to 1 odds.

Do they tend to cancel? How well?
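The cancellation in DanielLC's example can be checked with a little arithmetic. This is a sketch using only the numbers from the comment; the odds-to-probability conversion is the standard one.

```python
# Sketch of the example above: overconfidence inflates the stated
# probability, while risk aversion deflates the odds actually accepted.

true_p = 0.90      # actual probability the claim is true
stated_p = 0.99    # overconfident verbal estimate

# Odds the bettor is actually willing to accept: 9 to 1 (risk aversion
# pulls them below the 99-to-1 odds the stated probability would imply).
accepted_odds = 9  # stake 9 to win 1

# Probability implied by the accepted odds: p / (1 - p) = 9  =>  p = 0.9
implied_p = accepted_odds / (accepted_odds + 1)

print(f"stated belief:         {stated_p:.2f}")
print(f"belief implied by bet: {implied_p:.2f}")
print(f"true probability:      {true_p:.2f}")
# In this example the two biases cancel exactly: the bet is calibrated
# even though the verbal estimate is not. How well they cancel in
# general is the open question in the comment.
```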

Comment author: sixes_and_sevens 15 February 2012 10:31:01AM 6 points [-]

A while ago Yvain posted on Prospect Theory, which I think is salient to your query.

Comment author: Konkvistador 15 February 2012 03:57:50PM 8 points [-]

Currently listening to the Grace-Hanson podcasts. Topics:

Comment author: HonoreDB 23 February 2012 08:59:24PM *  7 points [-]

I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,

C preferred being single to being with A.

C preferred being with W to being single.

C preferred being with A to being with W.

I think all three of us could have been spared some heartache if we had figured out that was what was going on.

Comment author: Alicorn 24 February 2012 12:33:54AM 3 points [-]

I am inappropriately curious for a narrative version of this fiasco.

Comment author: Unnamed 17 February 2012 01:30:28AM 7 points [-]

A proposed law to require psychologists who testify in court to dress like wizards:

When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong…

Comment author: Kaj_Sotala 15 February 2012 07:27:22AM *  7 points [-]

I'm coming to increasingly notice that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I will wake up feeling energetic, productive and physically good. If I get up even a few hours later, or if I go to bed late but regardless get up at 8 in the morning, there's a very good chance that I will accomplish basically nothing on that day. It's weird how getting the timing so precisely correct seems to basically be the biggest determining factor in how my day will go.

I had noticed this before, but had frequently slipped from it, since most of my social events tend to be in the evenings and maintaining these sleep patterns while still having a social life was quite hard. But I'm now becoming convinced that those sacrifices are worth making: I'll just have to persuade my friends to be social at earlier times, or look for people who already are.

Comment author: MileyCyrus 15 February 2012 07:48:20AM 3 points [-]

Have you tried modafinil?

Comment author: Kaj_Sotala 15 February 2012 08:09:00AM 4 points [-]

It's not prescribed in Finland without a special permit from the authorities, and I don't want to take the risk of trying to obtain something that's considered an illegal drug.

Comment author: MileyCyrus 15 February 2012 08:32:06AM 3 points [-]

My sympathies.

Comment author: AlexSchell 15 February 2012 12:12:22PM 2 points [-]

Do you use an alarm clock? If so, your problem might have less to do with sleep deprivation (which I don't think should cause the sort of acute effects you describe) and more with getting up at the wrong time within a sleep cycle. If you have an iPhone or iPod touch, give Sleep Cycle a try for avoiding this problem. I think there are similar apps for different platforms. If you're not using an alarm clock (or are already using something like Sleep Cycle), I'd be genuinely surprised.

Comment author: Kaj_Sotala 15 February 2012 02:33:41PM *  1 point [-]

I do use an alarm clock, but after going to bed at the right time for a couple of evenings, I start to wake up on my own, a little before the clock would sound. The alarm clock is just there as a backup, and to let me remain mostly-awake in bed for about 10-20 minutes longer before telling me to actually get up (as opposed to just getting awake).

ETA: I should specify that if I don't go to bed at the right time, I don't wake up naturally - well, I do, but so late that I'll feel groggy and generally unenergetic.

Comment author: AlexSchell 15 February 2012 10:39:55PM 1 point [-]

Hmm, I still don't know if I should be surprised or not, as I'm having trouble parsing your last sentence. When you go to bed late, do you not set your alarm clock? Or do you sleep through your alarm? Or do you wake up naturally (but groggy) right before the alarm goes off?

Comment author: Kaj_Sotala 16 February 2012 07:25:02AM *  1 point [-]

I have attempted:

A) Going to bed late and setting the alarm at the usual early time
B) Going to bed late and setting the alarm a couple of hours later
C) Going to bed late and not setting an alarm at all

With A, I'll wake to the clock but be groggy. With B I'm not necessarily so groggy, but still not as energetic as I would be if I'd gone to bed early and woken up early. With C I'll wake up naturally at some late time and feel pretty lethargic.

I was about to say that there are two dimensions here - groggy/neutral/awake and energetic/neutral/lethargic. Very roughly, A leaves me groggy/neutral, B leaves me neutral/neutral and C leaves me neutral/lethargic. But that doesn't sound entirely right, either - all three often also tend to leave me an extra unspecified uncomfortable feeling that I can't quite put into words, and which might be part of what I'm calling "groggy" or "lethargic" in the above. (Going to bed on time and getting up early leaves me awake/energetic or at least neutral/energetic, as well as without that extra uncomfortable feeling.)

Comment author: ZankerH 15 February 2012 10:06:55AM *  6 points [-]

When working on a primarily mental task (example: web browsing, studying, programming), I sometimes find myself coming up with an idea, forgetting the idea itself, but remembering I have come up with it. Backtracking through the mental steps may help recall it, but often I'll not be able to recall it at all, ending in frustration. Is there a technical term for this I can google / does anyone have an idea what this is?

Comment author: Metus 15 February 2012 04:07:12PM 2 points [-]

I would also be interested in research regarding this topic. I "suffer" from a similar phenomenon. The most annoying part is that I am unable to judge whether it was a good or bad idea I forgot. Also, this phenomenon occurs more often when I am tired.

Comment author: ZankerH 15 February 2012 08:31:20PM 1 point [-]

The most annoying part is that I am unable to judge whether it was a good or bad idea I forgot.

Anecdote: Discussing this with a particularly non-rational acquaintance, they remarked that I'm likely subconsciously discarding horrible ideas and preventing myself from coming up with them again, and that I'm therefore the better for it.

Comment author: [deleted] 17 February 2012 05:23:40PM 1 point [-]

I've had the same thing happen to me many times, especially once I started college. However, I did an experiment that might help shed some light on the issue for you.

I attempted to brute-force my way through the problem. I kept pens and note pads on hand, specifically sticky notes. Whenever I had an idea I felt worth keeping, I'd jot it down on the spot. No context (so I wouldn't write down what I was doing or where I was), just the idea itself. I soon collected a wall of sticky notes (it became quite infamous in the dorms) full of these ideas. I still have them all, in a notebook full of card stock, organized by type.

The problem I find, going back over the many different ideas, is that, on the whole, they have lost whatever inspiration they once had. Looking them over, I see each idea as either a.) common knowledge (meaning the idea was probably new at the time, but I've since grown used to it through other routes of knowledge) or b.) trite and even childish.

So, if it helps, it would seem that your friend may be onto something: for the most part, my wall of ideas serves either as a reminder of things I already know or of things that don't matter.

Comment author: John_Maxwell_IV 19 February 2012 08:06:41AM 0 points [-]

I read somewhere that furiously writing down everything you were thinking about is a good way to dredge up forgotten thoughts, and it sometimes works for me.

Comment author: [deleted] 25 February 2012 10:58:33AM *  4 points [-]

I've just seen the Wikipedia article for the ‘overwhelming gain paradox’:

Harford illustrates the paradox by the comparison of three potential job offers:

  • In Job 1, you will be paid $100, and if you work hard you will be paid $200.
  • In Job 2, you will be paid $100, and if you work hard you will have a 1% chance of being paid $200.
  • In Job 3, you will be paid $100, and if you work hard you will have a 1% chance of being paid $1 billion.

Most people will state that they will choose to work hard in jobs 1 and 3, but not job 2 [2]. In Job 1, working hard is obvious because there is a clear reward for doing so. In Job 2, it seems a bad choice because the likelihood of a reward is so low. But in Job 3, working hard becomes the preferable choice, because the potential gain is so overwhelming that any chance - no matter how small - of obtaining it is seen as worthwhile. This appears irrational and paradoxical, because jobs 2 and 3 are identical 99% of the time.

Why the hell would anyone consider that a paradox? ISTM that it is completely reasonable for a utility function to be such that the disutility of working harder is exceeded both by the utility of an extra $100 and by 0.01 times the utility of an extra $999,999,900, but not by 0.01 times the utility of an extra $100. (If anything, I would consider anything else to be paradoxical, for people for whom the disutility of working at all is exceeded by the utility of getting $100 in the first place.)
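The commenter's point can be made concrete with a quick expected-value check. This is a sketch assuming (my assumption, not the source's) utility linear in dollars and a fixed effort cost somewhere between $1 and $100:

```python
# Expected net gain from working hard in each of the three quoted jobs,
# relative to the $100 base pay. The $50 effort cost is a hypothetical
# illustration; any cost between $1 and $100 gives the same pattern.

effort_cost = 50  # disutility of hard work, in dollars (assumed)

def gain_from_hard_work(p, bonus):
    """Expected extra pay from working hard, minus the effort cost."""
    return p * bonus - effort_cost

job1 = gain_from_hard_work(1.00, 100)          # guaranteed +$100
job2 = gain_from_hard_work(0.01, 100)          # 1% chance of +$100
job3 = gain_from_hard_work(0.01, 999_999_900)  # 1% chance of ~$1e9

print(f"Job 1: {job1:+.2f}")   # +50.00  -> work hard
print(f"Job 2: {job2:+.2f}")   # -49.00  -> don't bother
print(f"Job 3: {job3:+,.2f}")  # roughly +10 million -> work hard
```

So "work hard in jobs 1 and 3 but not 2" falls straight out of an ordinary utility function, with no paradox required.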

Comment author: Grognor 16 February 2012 08:14:50AM *  4 points [-]

Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".

Comment author: lsparrish 16 February 2012 02:04:12AM 7 points [-]

Why Life Extension is Immoral

Summary: Years of life are in finite supply. It is morally better that these be spread among relatively more people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.

The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacrificed to save a mature person (e.g. a mother), due to the fact that early development represents a costly investment on the part of adults, which it is fair for them to expect a payoff for (at least for adults who contribute to the rearing of offspring -- which could be indirect, etc.).

I think my rejection of the argument comes down to the fact that I don't think of future humans as objects of moral concern in quite all the same respects that I do existing humans, even though they qualify in some ways. While I think future beings are entitled not to be tortured, I think they are not (at least not out of fairness with respect to existing humans) entitled to be brought into existence in the first place. Perhaps my reason for thinking this is that most humans who could exist do not, and many (e.g. those who would be in constant pain) probably should not.

On the other hand, I do think it is valuable for there to be people in the future, and this holds even if they can't be continuations of existing humans. (I would assign fairly high utility to a Star Trek kind of universe where all currently living humans are dead from old age or some other unstoppable cause but humanity is surviving.)

Comment author: rwallace 17 February 2012 02:31:02AM 8 points [-]

Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.

As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.

Comment author: Thrasymachus 17 February 2012 03:39:30PM 0 points [-]

Suppose the old person and the child (perhaps better: a young adult) would both gain 2 years, so we equalize the payoff. What then? Why not be prioritarian at the margin of aggregate indifference?

Comment author: [deleted] 25 February 2012 11:10:46AM 0 points [-]

Well, young adults typically enjoy life more*, so...


* I've heard old people saying they wish they could become young again, but I haven't heard any young people saying they can't wait to become old.

Comment author: Thrasymachus 16 February 2012 03:57:42AM 4 points [-]

Hello there, I'm the guy who wrote the stuff you linked to.

I think it might be worth noting the Rawlsian issue too. If we pretend life is in a finite supply with efficient distribution between persons, then something like "if I extend my life to 10n then 9 other peeps who would have lived n years like me would not" will be true. The problem is this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you an 'everyone gets 80 years' versus 'one of you gets 800, whilst the rest of you get nothing', I think basically everyone would go for everyone getting 80. One of the consequences of that would seem to be expecting whoever 'comes first' in the existence lottery to refrain from life extension to allow subsequent persons to 'have their go'.

If you don't buy that future persons are objects of moral concern, then the foregoing won't apply. But I think there are good reasons to treat them as objects of full moral concern (including a 'right'/'interest' in being alive in the first place). It seems weird (given B-theory) that temporally remote people count for less, even though we don't think spatial distance is morally salient. Moreover, we generally intuit that a delayed doomsday machine that euthanizes all intelligent life painlessly in a few hundred years would be a very bad thing.

If you dislike justice (or future persons), there's a plausible aggregate-only argument (which bears a resemblance to Singer's work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: 20 to 40 is worth more than 40 to 60, etc. If that's true, and lifespan is in finite supply, then we might get more utility by having many smaller lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff in minimizing the 'decay' of diminishing returns versus the cost sunk into development of a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but is unlikely to be really big.

Obviously, there are huge issues over population ethics and the status of future persons, as well as finer grained stuff re. justice across hypothetical individuals. Sadly, I don't have time to elaborate on this stuff before summertime. Happily, I am working on this sort of stuff for an elective in Oxford, so hopefully I'll have something better developed by then!

Comment author: RichardKennaway 17 February 2012 02:32:22PM *  5 points [-]

You lose me the moment you introduce the moral premise. Why is it better for two people to each live a million years than for one to live two million? This looks superficially like the same sort of question as "Why is it better for two people to each have a million dollars than for one to have two million?", but in the latter scenario, one person has two million while the other has nothing. In the lifetimes case, there is no other person. The moral premise presupposes that nonexistent people deserve some of other people's existence in the same way that existing paupers deserve some of other people's wealth.

You may have an argument to that effect, but I didn't see it in my speed-run through your slides (nice graphic style, BTW, how do you do that?) or in your comment above. Your argument that we place value on future people only considers our desire to avoid calamities falling upon existent future people.

Diminishing returns for longer lifespans is only a problem to be tackled if it happens. The only diminishing returns I see around me for the lifespans we have result from decline in health, not excess of experience.

Comment author: Thrasymachus 17 February 2012 03:59:10PM 0 points [-]

The nifty program is Prezi.

I didn't particularly fill in the valuing-future-persons argument - in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I'd point to future calamities (which only seem plausibly really bad if future people have interests or value - although that needn't be on a par with ours), reciprocity across time (in the same way we would want people in the past to weigh our interests equal to theirs when applicable, the same applies to us and our successors), and a similar sort of Rawlsian argument that if we didn't know whether we would live now or in the future, the sort of deal we would strike would be for those currently living (whoever they are) to weigh future interests equal to their own. Elaboration pending one day, I hope!

Comment author: Kaj_Sotala 16 February 2012 08:37:03AM *  5 points [-]

I find this argument incoherent, as I reject the idea of a person at the age of 1 being the same person as they are at the age of 800 - or for that matter, the idea of a person at the age of 400 being the same person as they are at the age of 401. In fact, I reject the idea of personal continuity in the first place, at least when looking at "fairness" at such an abstract level. I am not the same person as I was a minute ago, and indeed there are no persons at all, only experience-moments. Therefore there's no inherent difference in whether one person lives 800 years or ten people live 80 years. Both have 800 years worth of experience-moments.

I do recognize that "fairness" is still a useful abstraction on a societal level, as humans will experience feelings of resentment towards conditions which they perceive as unfair, as unequal outcomes are often associated with lower overall utility, and so forth. But even then, "fairness" is still just a theoretical fiction that's useful for maximizing utility, not something that would have actual moral relevance by itself.

As for the diminishing marginal returns argument, it seems inapplicable. If we're talking about the utility of a life (or a life-year), then the relevant variable would probably be something like happiness, but research on the topic has found age to be unrelated to happiness (see e.g. here), so each year seems to produce roughly the same amount of utility. Thus the marginal returns do not diminish.

Actually, that's only true if we ignore the resources needed to support a person. Childhood and old age are the two periods where people don't manage on their own, and need to be cared for by others. Thus, on a (utility)/(resources invested) basis, childhood and old age produce lower returns. Now life extension would eliminate age-related decline in health, so old people would cease to require more resources. And if people had fewer children, we'd need to invest fewer resources on them as well. So with life extension the marginal returns would be higher than with no life extension. Not only would the average life-year be as good as in the case with no life extension, we could support a larger population, so there would be many more life-years.

One could also make the argument that even if life extension wouldn't reduce the average amount of resources we'd need to support a person, it would still lead to increased population growth. Global trends currently show declining population growth all over the world. Developed countries will be the first ones to have their population drastically reduced (Japan's population began to decrease in 2005), but current projections seem to estimate that the developing world will follow eventually. Sans life extension, the future could easily be one of small populations and small families. With life extension, the future could still be one of small families, but it could be one of much larger populations as population growth would continue regardless. Instead of a planetary population of one billion people living to 80 each, we might have a planetary population of one hundred billion people living to 800 each. That would be no worse than no life extension on the fairness criteria, and much better on the experience-moments criteria.
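The experience-moments comparison at the end is quick arithmetic. This sketch uses only the comment's own illustrative population figures:

```python
# Total life-years in the two futures sketched above (the population
# figures are the comment's illustrative numbers, not projections):

no_life_ext = 1_000_000_000 * 80     # 1 billion people living to 80
life_ext = 100_000_000_000 * 800     # 100 billion people living to 800

print(f"life-years without life extension: {no_life_ext:.1e}")  # 8.0e+10
print(f"life-years with life extension:    {life_ext:.1e}")     # 8.0e+13
print(f"ratio: {life_ext // no_life_ext}x")                     # 1000x
```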

Comment author: Thrasymachus 16 February 2012 05:44:19PM 0 points [-]

Hello Kaj,

If you reject both continuity of identity and prioritarianism, then there isn't much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.

However, if you think you should maximize expected value under normative uncertainty (and you aren't absolutely certain that aggregate utilitarianism or consequentialism is the only thing that matters), then there might be motive to revise your beliefs. If the aggregate concerns 'either way' turn out to be a wash between an immortal society and a 'healthy aging but die' society, then the justice/prioritarian concerns I point to might 'tip the balance' in favour of the latter even if you aren't convinced it is the right theory. What I'd hope to show is that something like prioritarianism at the margin of aggregate indifference (i.e. prefer 10 utils to each of 10 people instead of 100 to one and 0 to the other nine) is all that is needed to buy the argument.
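One way to make "prioritarianism at the margin of aggregate indifference" concrete: apply a concave priority weighting to individual utilities before summing. The square-root weighting below is my own illustrative choice, not something from the comment:

```python
import math

# Two societies with identical total utility (100 utils), distributed
# equally or very unequally. A concave transform (sqrt, as an example
# of a prioritarian weighting) favours the equal society.

def prioritarian_value(utilities):
    """Sum of a concave transform of each person's utility."""
    return sum(math.sqrt(u) for u in utilities)

equal = [10] * 10          # 10 utils to each of 10 people
unequal = [100] + [0] * 9  # 100 utils to one person, 0 to nine

assert sum(equal) == sum(unequal) == 100  # aggregate indifference

print(f"equal society:   {prioritarian_value(equal):.1f}")    # ~31.6
print(f"unequal society: {prioritarian_value(unequal):.1f}")  # 10.0
```

Any strictly concave weighting gives the same ranking, which is the "tip the balance" role the comment assigns to it.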

Comment author: Kaj_Sotala 16 February 2012 07:43:46PM *  2 points [-]

If you reject both continuity of identity and prioritarianism, then there isn't much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.

True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting/rejecting continuity aren't really right or wrong as such. My apologies for that.

If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable - though maybe unlikely - that those concerns would tip the balance. But I wouldn't expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn't concern itself too much with worrying about its real world-relevance in the first place. :)

I'd still be curious to hear your opinion about the empirical points I mentioned, though.

Comment author: Thrasymachus 17 February 2012 03:50:11PM 0 points [-]

I'm not sure what to think about the empirical points.

If there is continuity of personal identity, then we can say that people 'accrue' life, and so there's plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing-returns argument would have to say something like "experience-moments in 'older' lives are not as good as those in younger ones". Like you, I can't see any particularly good support for this (although I wouldn't be hugely surprised if it were so). However, we can again play the normative uncertainty card, so that our expected degree of diminishing returns is attenuated by a factor of P(continuity of identity).

I agree there are 'investment costs' in childhood, and if those are the only costs in play, then our aggregate maximizer will want to limit them, and extending lifespan is best. I don't think this cost is that massive, though, whether it is paid once per 80 years or once per 800. And if diminishing returns apply to age (see above), then it becomes a tradeoff.

Regardless, there are empirical situations where life extension is strictly win-win: say, if we don't have loads of children and so never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I'm not sure how to unpick them.

Comment author: Kaj_Sotala 17 February 2012 07:30:34PM 2 points [-]

If there is continuity of personal identity, then we can say that people 'accrue' life, and so there's plausibly diminishing returns.

I'm not sure how this follows, even presuming continuity of personal identity.

If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other's way, or the amount of resources needed for administration increased at a faster-than-linear speed. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons for why you might get diminishing returns in different fields, I can't think of plausible reasons for why any such reason would apply to years of life. Extra years of life do not get in each other's way, and I'm not going to enjoy my 26th year of life less than my 20th simply because I've lived for a longer time.

Comment author: Thrasymachus 18 February 2012 08:36:14AM 0 points [-]

I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not-quite-so-awesome things, and so on. So although years per se don't 'get in each other's way', how you spend them will.

Obviously lots of countervailing concerns too (maybe you get wiser as you age so you can pick even more enjoyable things, etc.)

Comment author: Kaj_Sotala 18 February 2012 02:17:54PM *  1 point [-]

That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)

Your hypothesis seems to be implying that humans engage in activities that are essentially "used up" afterwards - once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they're more likely to keep writing books. It's true that it will eventually get harder and harder to find even more enjoyable activities, simply because there's an upper limit to how enjoyable an activity can be. But this doesn't lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.

For example, suppose that somebody's 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they've figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they've also figured out that it's fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can't get any better, so they'll keep generating 30 hedons a year for the rest of their lives. There's no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.

Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I'm ignoring them. In any case, for there to be diminishing marginal utility in years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.

I would also note that your argument for why people would have diminishing marginal utility in years of life doesn't actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her "predecessors".)
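Kaj's hedon schedule can be written out directly; the point is that marginal utility per life-year rises and then plateaus, never falling. The age cutoffs are the example's own numbers:

```python
# Toy model of the hedon schedule in the example above: marginal
# utility per life-year rises (10 -> 20 -> 30 hedons) and then
# plateaus at 30, rather than diminishing with age.

def hedons_per_year(age):
    if age < 30:
        return 10   # random activities, before self-knowledge
    elif age < 40:
        return 20   # discovered gardening
    else:
        return 30   # gardening + hot air balloons: the plateau

marginal = [hedons_per_year(a) for a in range(20, 100)]
# Marginal utility never decreases with age in this model:
assert all(later >= earlier
           for earlier, later in zip(marginal, marginal[1:]))

print([hedons_per_year(a) for a in (25, 35, 45, 85)])  # [10, 20, 30, 30]
```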

Comment author: Ghatanathoah 11 January 2013 03:45:44AM -1 points [-]

I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not-quite-so-awesome things, and so on. So although years per se don't 'get in each other's way', how you spend them will.

If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended that way, but that's how it works out.

To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and -5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they've done the best things already.

You could try to elevate the handicapped person's utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.

And to make things clear, I'm not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I'm talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.

This is obviously the wrong thing to do. Contemplating this has made me realize that "maximize total utility" is a limited rule that only works in "special cases" where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.

Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you've gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.

Comment author: TheOtherDave 11 January 2013 03:48:57AM 0 points [-]

Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?

Comment author: lsparrish 17 February 2012 02:26:02AM 2 points [-]

I appreciated the level of thought you put into the argument, even though it does not actually convince me to oppose life extension. Thank you for writing (and prezi-ing) it, I look forward to more.

Basically, the hidden difference is this: if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one of us gets 800, there are 10 people present, competing and trying to avoid being "killed"; whereas the choice between creating one 800-year-old and ten 80-year-olds is made without an actual threat being posed to anyone.

While you can establish that the 10 people would anticipate with fear (and hence generate disutility) the prospect of being destroyed / prevented from living, that's not the same as establishing that 9 completely nonexistent people would generate the same disutility even if they never started to exist.

Comment author: Thrasymachus 17 February 2012 03:55:09PM 0 points [-]

I don't think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you certainty of 80 years versus a 1/10 chance of 800 and a 9/10 chance of nothing. I'm pretty sure most folks would play it safe.

The addition of people makes it clear whether (granting the rest) a society of future people would want to agree that those who 'live first' should refrain from life extension and let the others 'have their go'.

Comment author: lsparrish 18 February 2012 01:54:15AM 0 points [-]

Loss aversion is another thing altogether: if most people choose 80 sure years instead of a 1/10 chance at 800 years, that doesn't necessarily prove the latter is actually less valuable.

Suppose Omega offers to copy you and let you live out 10 lives simultaneously (or one after another, restoring from the same checkpoint each time) on the condition that each instance dies and is irrecoverably deleted after 80 years. Is that worth more than spending 800 years alive all in one go?

Comment author: Thrasymachus 18 February 2012 08:41:14AM 0 points [-]

Plausibly, depending on your view of personal identity, yes.

I won't be identical to my copies, and so I think I'd make the same sorts of arguments I've made so far - copies are potential people, and behind a veil of ignorance as to whether I'd be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega's gamble.

(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil of ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss-averse.)

Comment author: [deleted] 17 February 2012 05:14:42PM 2 points [-]

Glad to see someone using Prezi.

My main contention with the argument is the assumptions it makes about future people. Assuming a society that could commit life extensions on the grand scales talked about in this argument, why is it still assumed future persons must be considered as identical to current ones (who, in the argument, I assume to be the ones capable of taking or forgoing the life extensions)?

As has been mentioned, these future people are non-existent. What suggests that they will be or must be part of the equation eventually? It seems less an argument of "would you take 800 for yourself or 80 for you and your children" and more "would you take 800 for yourself and agree not to have children or would you rather have children and risk what comes?"

I know we hold sentimentality for having children (since, you know, it's our primary function and all) but this whole argument seems more the classic "immortal children" problem: how can you fit an infinite person supply in a finite space? And the simplest answer to me seems: until you find a way to increase the space, you limit the supply. Some may not like that idea but if it's a case of existent humans' interests vs non-existent (and possibly never existent) human interests, then I would have to side with the former (myself being one of them makes it much easier for me of course).

Comment author: skepsci 16 February 2012 02:30:30AM *  2 points [-]

I noticed an obvious fallacy in the linked argument:

If infinite person-years possible, life extension is amoral.

What? Surely if infinite person-years are possible, it's better for everyone to be immortal than only some, so life extension would be morally preferable, not morally neutral.

Also, why are we assuming the number of person-years lived is independent of the average lifespan? All he exhibited was an upper bound independent of the average lifespan, which is not at all the same thing. If you can't justify the hypothesis that lifespan is a zero-sum game, the entire argument falls apart.

Comment author: skepsci 16 February 2012 03:04:55AM *  1 point [-]

To me, the entire argument sounds like a rationalization for not signing up for cryo.

Signed,

Someone who has rationalized a reason for not signing up yet for cryo, and suspects that the real reason is laziness.

Comment author: Locke 16 February 2012 03:31:21AM 0 points [-]

So sign the hell up.

Comment author: lsparrish 16 February 2012 03:38:00AM 1 point [-]

The main argument is that taking years from potential beings and adding them to existing ones is unjust, hence immoral. Given that, depending on the exact shape of the infinite universes scenario, life extension could be moral, morally neutral, or immoral.

If longer-lived people can reproduce and find new space more quickly than shorter-lived people, life extension would be moral. (For example, say more experienced people have more motive or ability to create new universes.) However, all else being equal (for example, say the limit on reproduction is some unchangeable physical constant that says we cannot make black holes any faster than x, and we have already maxed that out), the fact that shorter-lived people are dying and creating space for more kids makes that the more moral scenario.

While I agree that this is a flaw in the argument (longer lives can possibly result in more new kids born / new spaces opened than shorter ones), I don't think it is my true rejection of the argument overall, because it is not unreasonable to think the new spaces that can be opened is limited and/or cannot be increased by longer lives. I think the real problem is the idea that one can behave unjustly to a person whose existence is only potential, through the act of taking away their existence.

Comment author: Thrasymachus 16 February 2012 04:07:31AM -2 points [-]

If there are infinite person-years, then (so long as life is net positive) we have infinite utility, and I can't see obviously whether doling this out to a 'smaller' or 'larger' set of people (although both will have the same cardinality) will matter. But anyway, I don't think anyone really thinks we can wring infinite amounts of life out of the universe.

Total life-time will have some upper bound. So in worlds where we are efficiently filling up lifespan, the choice is between more short-lived people or fewer long-lived people. In the real world for the foreseeable future, that won't quite apply - plausibly, there will be chunks of lifetime that can only be got at by extending your life, and couldn't be had by a future person, so you doing so doesn't deprive anyone else. However, that ain't plausible for an entire society (or a large enough group) extending their lives. Limiting case: if everyone made themselves immortal, they could only add people by increases in carrying capacity.

Comment author: lsparrish 16 February 2012 07:24:09PM *  0 points [-]

If longer lived people tend to create more spaces to expand into in an infinite universe, and this results in reproduction at a normal or higher rate, that would indicate that longer lived people are more moral, since the disutility of the long lived people dying would be (relatively) absent from the equation.

If there is a point of diminishing returns on the creation of new people -- perhaps having a trillion lives is less than 1000 times as valuable (including in the sense of "justice") as having a billion lives in existence at a given time -- life extension could be more efficient at producing valuable life years and hence more moral.

Life might grow less worth living over time (Note: excluded for sake of argument from your prezi), but it might also grow more worth living over time. These are not mutually exclusive: an evil dictator might produce more negative utility by being in power for a long time whereas a scientist or diplomat might produce larger amounts of positive utility by living longer. There could be internalized examples of these as well -- a person whose pain grows with each passing year and has to live with the memories thereof, or a person who falls more in love with their spouse or some such thing over time.

However I tend to think there would be selection effects in favor of the positive cases and against the negative ones -- suicide and assassination, for example -- so I don't much fear the negative cases being the long term trend. Rather I think longer lived people (all else equal, including health) produce more positive utility per unit of time than shorter lived ones.

Comment author: MBlume 19 February 2012 10:02:13PM 3 points [-]

Ever feel like you contribute nothing to society? Well, it's time to consider volunteering!

Comment author: ArisKatsaris 15 February 2012 04:50:44PM *  3 points [-]

Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that weighs against having exploitable decision theories according to the measure of their exploitability?

Comment author: HonoreDB 15 February 2012 06:41:54PM 4 points [-]

The dangers pointed to by the thought experiment aren't restricted to exploitation by an outside entity. An AI should be able to safely consider the hypothesis "If I don't destroy my future light cone, 3^^^3 people outside the universe will be killed" regardless of where the hypothesis came from.

But even if we're just worried about mugging, how could you possibly weight it enough? Even if paying once doomed me to spend the rest of my life paying $5 to muggers, the utility calculation still works out the same way.

Comment author: ArisKatsaris 15 February 2012 09:03:27PM *  1 point [-]

But even if we're just worried about mugging, how could you possibly weight it enough? Even if paying once doomed me to spend the rest of my life paying $5 to muggers, the utility calculation still works out the same way.

My idea is as follows:

Mugger: Give me 5 dollars, or I'll torture 3^^^3 sentient people across the omniverse using my undetectable magical powers.
AI: If I make my decision on this and similar trades based on a decision process DP0 of comparing disutility(3^^^3 torture) * P(you're telling the truth) against disutility(giving you 5 dollars), then even if you're telling the truth, a different malicious agent may then merely name a threat that involves 3^^^^3 tortures, and thus make me cause a vastly greater amount of disutility in his service. Indeed there's no upper bound to the disutility such a hypothetical agent may claim he will cause, and therefore surrendering to such demands means a likewise unbounded exploitation potential. Therefore I will not use the decision process DP0, and will instead utilize some different decision process (like "Never surrender to blackmail" or "Always demand proportional evidence before considering sufficiently extraordinary claims").
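The unbounded exploitability the AI is describing can be sketched in a few lines (a toy model with made-up numbers, not any formal decision theory):

```python
def dp0_pays(p_truth, claimed_disutility, cost=5):
    """The naive decision process DP0: pay iff the expected disutility
    of refusing exceeds the cost of paying."""
    return p_truth * claimed_disutility > cost

# However small the probability assigned to the mugger's honesty,
# he can always name a claim large enough to flip the decision:
p = 1e-30
print(dp0_pays(p, 1e20))  # False: modest claimed threat, DP0 refuses
print(dp0_pays(p, 1e40))  # True: larger claimed threat, DP0 pays
```

Since the mugger controls `claimed_disutility` and it has no upper bound, DP0 can be driven to pay any cost, which is the exploitability being objected to.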

Comment author: endoself 15 February 2012 09:40:45PM 1 point [-]

Saving 3^^^^3 people is more than worth a bit of vulnerability to blackmail. If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger and in that case "never surrender to blackmail" is a strictly worse strategy.

Also, DP0 isn't even a coherent decision process. The expected utilities will fail to converge if "there's no upper bound to the disutility such a hypothetical agent may claim" and these claims are interpreted with some standard assumptions, so the agent has no way of even comparing expected utilities of actions.

Comment author: ArisKatsaris 15 February 2012 09:58:25PM *  1 point [-]

If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger

This isn't about beliefs, this is about decisions. The process of epistemic rationality needn't be modified, only the process of instrumental rationality. Regardless of how much probability the AI assigns to the danger for 3^^^^3 people, it needn't be the right choice to decide based on a mere probability of such danger multiplied by the disutility of the harm done.

Saving 3^^^^3 people is more than worth a bit of vulnerability to blackmail. If 3^^^^3 people are in danger, the AI wishes to believe 3^^^^3 people are in danger and in that case "never surrender to blackmail" is a strictly worse strategy.

Unless having the decision process that surrenders to blackmail and being known to have it is what will put these people in danger in the first place. In that case, either you modify your decision process so that you precommit to not surrender to blackmail and prove it to other people in advance, or pretend to not surrender and submit to individual blackmails if enough secrecy of such submission can be ensured so that future agents won't be likely to be encouraged to blackmail.

But this was just an example of an alternate decision theory, e.g. one that had hardwired exceptions against blackmail. I'm not actually saying it need be anything as absolute or simple as that -- if it were as simple as that I'd have solved the Pascal's Mugger problem by saying "TDT plus don't submit to blackmail" instead of saying "weigh against your decision process by a factor proportional to its exploitability potential"

Comment author: endoself 15 February 2012 11:31:12PM 0 points [-]

We seem to be thinking of slightly different problems. I wasn't thinking of the mugger's decision to blackmail you as dependent on their estimate that you will give in. There are possible muggers who will blackmail you regardless of your decision theory and refusing to submit to blackmail would cause them to produce large negative utilities.

Comment author: ArisKatsaris 15 February 2012 11:40:12PM 1 point [-]

And as I said my example about a blanket refusal to submit to blackmail was just an example. My more general point is to evaluate the expected utility of your decision theory itself, not just the individual decision.

Comment author: endoself 16 February 2012 12:52:19AM 0 points [-]

In the situation I presented, the decision theory had no effect on the utility other than through its effect on the choice. In that case, the expected utility of the decision theory and the expected utility of the choice reduce to the same thing, so your proposal doesn't seem to help. Do you agree with that, or am I misapplying the idea somehow?

Comment author: ArisKatsaris 17 February 2012 02:03:41AM *  1 point [-]

I'm not sure that they reduce to the same thing. In e.g. Newcomb's problem, if you reduce your two options to "P(full box A) * U(full box A)" versus "P(full box A) * U(full box A) + U(full box B)", where U(x) is the utility of x, then you end up two-boxing; that's causal decision theory.

It's only when you consider the utility of different decision theories, that you end up one boxing, because then you're effectively considering U(any decision theory in which I one-box) vs U(any decision theory in which I two-box) and you see that the expected utility of one-boxing decision theories is greater.
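That policy-level comparison can be run with toy numbers (the payoffs and predictor accuracy below are assumed for illustration; they are not part of the comment):

```python
ACCURACY = 0.99                 # assumed: how often the predictor is right
BIG, SMALL = 1_000_000, 1_000   # assumed contents of box A and box B

def ev_policy_one_box():
    # A one-boxing policy is almost always predicted, so box A is full
    return ACCURACY * BIG

def ev_policy_two_box():
    # A two-boxing policy is almost always predicted, so box A is empty
    return (1 - ACCURACY) * BIG + SMALL

print(ev_policy_one_box() > ev_policy_two_box())  # True
```

Evaluated at the level of policies rather than causal acts, one-boxing dominates for any reasonably accurate predictor, which is the point being made.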

In Pascal's mugging... again I don't have the math to do this (or it would have been a discussion post, not an open-thread comment), but my intuition tells me that a decision theory that submits to it is effectively a decision theory that allows its agent to be overwritten by the simplest liar there is, and is therefore of total negative utility. The mugger can add up-arrows until he has concentrated enough disutility in his threat to ask the AI to submit to his every whim and conquer the world on the mugger's behalf, etc...

Comment author: endoself 18 February 2012 08:33:18PM 1 point [-]

If the adversary does not take into account your decision theory in any way before choosing to blackmail you, U(any decision theory where I pay if I am blackmailed) = U(pay) and U(any decision theory where I refuse to pay if I am blackmailed) = U(refuse), since I will certainly be blackmailed no matter what my decision theory is, so what situation I am in has absolutely no counterfactual dependence on my action.

a decision theory that submits to it is effectively a decision theory that allows its agent to be overwritten by the simplest liar there is

The truth of this statement is very hard to analyze, since it is effectively a statement about the entire space of possible decision theories. Right now, I am not aware of any decision theory that can be made to overwrite itself completely just by promising it more utility or threatening it with less. Perhaps you can sketch one for me, but I can't figure out how to make one without using an unbounded utility function, which wouldn't give a coherent decision agent using current techniques as per the paper that I linked a few comments up.

Anyway, I don't really have a counter-intuition about what is going wrong with agents that give into Pascal's mugging. Everything gets incoherent very quickly, but I am utterly confused about what should be done instead.

That said, if an agent would take the mugger's threat seriously under a naive decision theory and that disutility is more than the disutility of being exploitable by arbitrary muggers, decision-theoretic concerns do not make the latter disutility greater in any way. The point of UDT-like reasoning is that "what counterfactually would have happened if you decided differently" means more than just the naive causal interpretation would indicate. If you precommit to not pay a mugger, the mugger (who is familiar with your decision process) won't go to the effort of mugging you for no gain. If you precommit not to find shelter in a blizzard, the blizzard still kills you.

Comment author: thomblake 15 February 2012 09:18:00PM 0 points [-]

So the AI is not an expected utility maximizer?

If it is not, then what is it? If it is, then what calculations did it use to reach the above decision - what were the assigned probabilities to the scenarios mentioned?

Comment author: ArisKatsaris 15 February 2012 09:31:55PM *  0 points [-]

So the AI is not an expected utility maximizer?

It's an expected utility maximizer, but it considers the expected utility of its decision process, not just the expected utility of individual decisions. In a world where there exist more known liars than known superhuman entities, and any liar can claim superhuman powers, any decision process that allows them to exploit you is of negative expected utility.

It's like the professor in the example who agrees to accept an essay delayed by a grandmother's death, because this is a valid reason that will largely not be exploited, but not one delayed because "I wanted to watch my favorite team play", because lots of other students would be able to use the same excuse. The professor's not just considering the individual decision, but whether the decision process would be of negative utility in a more general manner.

Comment author: thomblake 15 February 2012 11:54:33PM 0 points [-]

It seems to me that you run into the mathematical problem again when trying to calculate the expected utility of its decision process. Some of the outcomes of the decision process are associated with utilities of 3^^^3.

Comment author: ArisKatsaris 16 February 2012 12:10:21AM 0 points [-]

It seems to me that you run into the mathematical problem again when trying to calculate the expected utility of its decision process. Some of the outcomes of the decision process are associated with utilities of 3^^^3.

Perhaps. I don't have the math to see how the whole calculation would go.

But it seems to me that the utility of 3^^^3 is associated with a particular execution instance. However, when evaluating the decision process as a whole (not the individual decision), the 3^^^3 utility mentioned by the mugger doesn't have a privileged position over the hypothetical malicious/lying individuals who can just as easily talk about utilities or disutilities of 3^^^^3 or 3^^^^^3, or even have their signs reversed (so that they torture people if you submit to their demands despite their claims to the opposite).

So the result should ideally be a different decision process that is able to reject unsubstantiated claims by potentially-lying individuals completely, instead of just trying to fudge the "Probability" of the truth-value of the claim, or the calculated utility if the claim is true.

Comment author: mwengler 16 February 2012 06:37:44PM -2 points [-]

Give me $5 or I will torture 3^^^^3 sentient people across the omniverse for 1,000 years each and then kill them, using my undetectable magical powers. You can pay me by paypal to mwengler@gmail.com. Unless 20 people respond (or the integrated total I receive reaches $100), I will carry out the torture.

Now you may think I am making the above statement to make a point. Indeed it seems probable, but what if I am not? How do you weigh the very finite probability that I mean it against 3^^^^3 sentient lives?

I feel confident that the amount of money I receive by paypal will be a more meaningful statement about what people really think of
(infinitesimal probability) * (nearly infinite evil) = well over $5 worth of utilons

Do others agree? Or do they think these comments, which cost nothing but another 15 minutes away from reading a different post, are what really mean something?

Comment author: ArisKatsaris 17 February 2012 01:46:54AM *  1 point [-]

The issue is how to program a decision theory (or meta-decision theory, perhaps) that doesn't fall victim to Pascal's mugging and similar scenarios, not to show that humans mostly don't fall victim to it.

Comment author: NancyLebovitz 17 February 2012 04:47:06AM 1 point [-]

However, it's probably worth figuring out what processes people use which cause them to not be very vulnerable to Pascal's Mugging.

Or is it just that people aren't vulnerable to Pascal's Mugging unless they're mentally set up for it? People will sometimes give up large amounts of personal value to prevent small or dubious amounts of damage if their religion or government tells them to.

Comment author: mwengler 17 February 2012 03:04:59PM 0 points [-]

I think there is not enough discussion of the quality of information. Conscious beings tell you things to increase their utility functions, not to inform you. Magicians trick you on purpose and most of us realize that, and they are not even above human intelligence. Scammers scam us. Well-meaning idiots sell us vitamins and minerals, and my sister just asked me about spending a few $1000 on a red light laser to increase her well being!

The whole one-box vs two-box thing: if someone claiming to be a brilliant alien had pulled this off 100 times and was now checking in with me, I would find it much more believable that they were a talented scam artist than that they could make predictions requiring calculations whose difficulty, relative to any calculations we know how to do now, would take a ^ to express.

Real intelligences don't believe anywhere near everything they hear. And they STILL are gullible.

Comment author: TheOtherDave 15 February 2012 08:54:03PM 0 points [-]

I agree with your first paragraph, but I'm not convinced of your second paragraph... at least, if you intend it as a rhetorical way of asserting that there is no possible way to weight the evidence properly. It's just another proposition; there's evidence for and against it.

I think we get confused here because we start with our bottom line already written.

I "know" that the EV of destroying my light cone is negative. But theory seems to indicate that, when assigning a confidence interval P1 to the statement "Destroying my future light cone will preserve 3^^^3 extra-universal people" (hereafter, statement S1), a well-calibrated inference engine might assign P1 such that the EV of destroying my light cone is positive. So I become anxious, and I try to alter the theory so that the resulting P1s are aligned with my pre-existing "knowledge" that the EV of destroying my light cone is negative.

Ultimately, I have to ask what I trust more: the "knowledge" produced by the poorly calibrated inference engine that is my brain, or the "knowledge" produced by the well-calibrated inference engine I built? If I trust the inference engine, then I should trust the inference engine.

Comment author: Konkvistador 27 February 2012 06:10:01PM *  2 points [-]

I've recently figured out an all too obvious workaround for the vanishing spaces bug. Considering links, italics and bold basically cover 95% of all formatting needs I think some people may find use for it (it has cured my distaste for writing articles on LW).

1) Write a comment or PM in Markdown syntax. Post the thing.
2) Select the text and copy it straight into the WYSIWYG editor.
3) Delete the original post or PM.

It is such an obvious solution, yet I didn't think of it for months.

Comment author: dbaupp 28 February 2012 03:15:30AM 0 points [-]

To avoid cluttering up "Recent Comments" etc, one could type it up off LW (this or this seem pretty good) and then copy it in. (Though, the PM idea works too!)

Comment author: Konkvistador 28 February 2012 08:23:06AM *  0 points [-]

Very true, but editing old PMs doesn't do that. Sending a PM to yourself and then deleting it seems the most expedient solution. Thanks for the links however!

Comment author: Emile 16 February 2012 08:28:50PM 2 points [-]

Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.

Comment author: Alicorn 15 February 2012 07:00:06AM *  2 points [-]

What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.

Comment author: moridinamael 16 February 2012 12:41:04AM *  5 points [-]

I proposed about two months ago; I'm getting married this coming Sunday. I mention this to qualify the following advice/input.

The process of getting engaged and getting married may seem (to some) like a stupid, defunct, irrelevant process for unevolved, unenlightened, hidebound ape-descendants. I propose that this is a naive view of the situation, and that the process of engagement and marriage, having existed for a long time, in many cultures, and being actually a relatively evolved and functional procedure, constitutes a very instrumentally rational process to undertake for any sufficiently interested couple.

The members of a relationship are likely to have very different implicit expectations with regards to

  • when it's appropriate to get engaged

  • when it's appropriate to get married (after getting engaged)

  • what marriage actually "means"

  • what constitutes an appropriately-sized wedding

  • the importance of and timing of having children

  • the importance of family, e.g. how much continuing parental involvement is welcome

  • finances, debt, and standard of living

  • what actions would constitute a violation of trust

  • etc.

Both partners will likely have a largely unexamined implicit life-plan with various unstated assumptions about all of these issues, and more. Some of these things will simply not come up until you start talking seriously about commitment. Furthermore, you may not really start talking seriously about commitment until after you are engaged. Even if you thought you had been serious before. When one goes through this process of public commitment, the process of social reinforcement makes real the commitment in a sense that is almost impossible to internalize without such peer recognition.

All of these things can come up regardless of how "rational" both partners happen to be. Konkvistador elsewhere in this comment thread asked

Why would anyone make a lifetime commitment?

If you want children, and you foresee yourself having a lot of complex values relating to the well-being of the children, it is useful to obtain such a commitment, even if you know that any commitment can technically be broken. It is also useful to state this commitment in front of a crowd of your friends and family, because this essentially makes your relationship with that person a "legitimate" one, entitling you to all kinds of social privileges and powers and higher status within your social sphere. If you are a human, you automatically care about these things.

Comment author: J_Taylor 16 February 2012 01:28:45AM 2 points [-]

I truly hope that, one day, someone will answer the question that you actually asked instead of a bunch of vaguely related questions. Unfortunately, this is the most relevant article I could find. It's not that great.

http://stats.org/stories/2008/is_ideal_time_marry_nov10_08.html

Comment author: Konkvistador 15 February 2012 08:00:06AM *  7 points [-]

Why would anyone want to get engaged? But I do second the request for this data.

Edit: Removed "in the world "

Comment author: NancyLebovitz 15 February 2012 11:06:50AM 7 points [-]

"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.

Comment author: Konkvistador 15 February 2012 11:27:05AM *  0 points [-]

Better?

In any case, as we discussed below, my original interpretation was that this is about the general desirability of [X]. I also obviously implied I've heard strong reasons against [X] but few convincing ones in its favour.

Comment author: CharlieSheen 15 February 2012 01:42:56PM *  15 points [-]

This whole conversation was such a cliché.

Woman: Yay I want to get married with the man I love! Does anyone have any advice?

Man: Marriage is a bad idea. I can't see why anyone would want that.

Woman: I'm allowed to want things! You are being mean.

Man: Don't try and chain the poor guy with whom I suddenly identify!

Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!

Man: I'm sorry.

Woman: Apology accepted.

Comment author: Alicorn 15 February 2012 06:36:22PM 3 points [-]

Now I'm wondering what would've happened if my boyfriend had made the post.

Comment author: GLaDOS 15 February 2012 02:01:27PM 0 points [-]

I find this sexist! But true.

In any case it was sweet sweet drama.(^_^)

Comment author: NancyLebovitz 15 February 2012 12:10:53PM 5 points [-]

It's better.

I would say that "I'm surprised that you're planning on [X], considering [list of drawbacks]" would work at least as well.

I was surprised at Alicorn (who's generally a calm poster) saying that she was allowed to want things. It seemed weirdly out of line with the discussion. When I saw the beginning of the thread again, "why in the world" jumped out at me as aggressive.

Something that's showing more clearly to me on another reread is that you genuinely didn't see what you might have done that was problematic.

I'm wondering if there's something odd going on at your end-- I don't think you usually misread things the way you misread Alicorn's original request.

Comment author: Konkvistador 15 February 2012 01:04:12PM *  10 points [-]

It could be a cultural or language barrier: the phrase "why in the world would you X" has a literal Slovenian equivalent that, I now think, carries very different connotations, much more surprise and much less disapproval than in English.

This phrase might have set the conversation off on the wrong foot, since the seemingly unprovoked hostility and evasiveness later on may have caused me to respond by hardening up and even escalating.

It is also possible that, since I have recently had IRL discussions regarding marriage, I may have just thrown out some arguments at Alicorn that were originally crafted for someone else. If that was the case, then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/

Comment author: RichardKennaway 15 February 2012 01:18:34PM 4 points [-]

Better?

No.

Taking out "in the world" tones it down, in the same way that taking the spikes out of a club tones it down. "Why would anyone..." is still a rhetorical question asserting that anyone who does is a dolt. You do the same in another comment: "Why would anyone make a lifetime commitment?"

Clearly, many people do get engaged, do get married, do make lifetime commitments. A majority of people, even, at least here in the West; I do not know how it is in Slovenia. (The disadvantageous tax regime you have in Slovenia was done away with long ago in the UK: married couples can elect to be taxed as separate individuals.) But saying "Why would anyone do such a thing" does not invite discussion, it shuts it off. If you actually wanted to know people's reasons, you would actually ask them, and listen to the answers.

Comment author: Konkvistador 15 February 2012 01:32:50PM *  7 points [-]

Ok fair enough, can you propose a better way to ask?

I was interested in the following:

  • Why do so few people who want to get married question the wisdom of such a step considering its high costs and dubious benefits (in comparison to say cohabitation)?

  • Why do people in general want to get married? (this is different from the question of whether it is rational to marry)

  • Is it rational for most people who marry to do so?

I was not specifically interested in why Alicorn wanted to get married. I did want to provoke, maybe even shock people into thinking about it beyond cached thoughts.

Comment author: TimS 16 February 2012 06:58:26PM 4 points [-]

When I got married, I thought about this a little, and I concluded that marriage (but not cohabitation) would:

  • Create a partner with a non-betrayal stance towards me (i.e. would not defect against me in a one-shot Prisoner's dilemma game).

  • Signal to others that I and partner had a non-betrayal stance towards each other.

It's an interesting question why marriage is able to create that first effect, and I don't have a good answer. I do think that many people go into marriage without thinking of these considerations, and I think that is a mistake. In other words, I think that the answer to your third question is no. But that depends on society tolerating cohabitation, which hasn't always been its attitude.

Comment author: Konkvistador 16 February 2012 08:27:50PM *  0 points [-]

It's an interesting question why marriage is able to create that first effect, and I don't have a good answer.

I think this is because it is an act that is supposed to entail the following:

  • shared reproductive interests
  • shared financial interests
  • at least some pair bonding (oxytocin makes you love your kids and your romantic partner, in extreme cases enough to be willing to sacrifice yourself)
Comment author: TimS 16 February 2012 10:17:57PM 2 points [-]

To me, those things are implied by the "non-betrayal" stance. Agreement on childbearing, shared financial interest, and pair bonding (i.e. shared emotional interest) are consequences of the fundamental agreement not to betray. As you note, each of those could be achieved without marriage - but most people act as if this were not possible. I'm just as confused as you.

That is different from noting the incidental benefits of legal marriage - if I die without a will, my wife gets my property. To achieve the same effect without marriage, I'd have to actually create a will. And so on for all the legal rights I want my wife to have (e.g. de facto legal guardian if I am incapacitated). But I want my wife to have those rights because of the non-betrayal stance, and if that wasn't our relationship, I wouldn't want her to have those rights.

Comment author: RichardKennaway 15 February 2012 02:03:55PM *  1 point [-]

Ok fair enough, can you propose a better way to ask?

Ask as if you did not already have a presumption about what the answer should be. Telling people they're idiots unless they agree with you will only convince them you are someone they do not want to talk to.

Your latest reformulation is better -- the key substitution is "do" instead of "would". The second and third bullet points are absolutely fine, but in the first and in the final paragraph you're still sticking your own oar in with "considering its high costs and dubious benefits" and "shock people into thinking about it beyond cached thoughts". There are, as it happens, people who have thought carefully about what arrangement they want to make on these matters, and without having to be told about cached thoughts either, but you will never hear them with that approach.

Comment author: MileyCyrus 15 February 2012 08:20:42AM 2 points [-]

The high cost of divorce can make a lifetime commitment more robust. It also helps with taxes, visas and health care.

Comment author: Konkvistador 15 February 2012 08:22:25AM *  13 points [-]

Why would anyone make a lifetime commitment?

The high cost of divorce can make a lifetime commitment more robust.

Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmail material can also produce high costs. This seems like a fake reason to me: if you set out to optimize for the robustness of a long-range commitment, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.

In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This just seems plain unfair, and bad signalling, though I admit asymmetric arrangements can be a feature rather than a bug.

Also, I seem to be able to maintain long-term relationships with friends and family members without state-enforced contracts. Why should a particular kind of relationship between two people require one? And further, why a contract that can't be much customized, that (irrational) voters feel strongly about, and whose rules the government, via law or legal practice, changes in unpredictable ways every few years?

It also helps with taxes, visas and health care.

This is very Amerocentric. When it comes to income and taxes in Slovenia, it is much better not to be married, because the welfare state (which is used by almost everyone: lower, middle, and even upper-middle class to some extent) generally calculates most benefits according to income per family member, and many benefits are tied to children and teens. It is nearly always better for the couple not to marry. I have friends from several other countries in Europe who say it is much like this in their countries as well.

Visas and generally facilitating immigration sound like good reasons to get married. Edit: This last line wasn't sarcasm, as hard as it may seem to believe. I was still thinking of marriage as a legal category not a traditional ritual.

Comment author: Kaj_Sotala 15 February 2012 02:24:43PM *  7 points [-]

Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.

Note: 50% of all marriages, not 50% of all married people. The people who get married (and divorced) several times drag down the overall success rate.

Googling around revealed various claims about the success rate for first marriages: more than 70 percent, 50 to 60 percent, 70 to 90 percent, etc.
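The distinction above (fraction of marriages vs. fraction of married people) can be made concrete with a toy calculation. The population numbers below are invented purely for illustration, not real demographic data:

```python
# Toy illustration: when a minority of people marry and divorce
# repeatedly, the per-marriage divorce rate can exceed the
# fraction of people who ever divorce.

def divorce_rates(people):
    """people: list of (marriages, divorces) pairs, one per person."""
    total_marriages = sum(m for m, d in people)
    total_divorces = sum(d for m, d in people)
    ever_divorced = sum(1 for m, d in people if d > 0)
    return total_divorces / total_marriages, ever_divorced / len(people)

# 80 people marry once and stay married; 20 people each divorce
# twice before a lasting third marriage.
population = [(1, 0)] * 80 + [(3, 2)] * 20

per_marriage, per_person = divorce_rates(population)
print(f"divorces per marriage: {per_marriage:.0%}")  # 40/140, about 29%
print(f"people ever divorced:  {per_person:.0%}")    # 20/100, 20%
```

So a headline "divorce rate" computed per marriage is compatible with a substantially better outlook for any given first marriage.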

Comment author: Douglas_Knight 16 February 2012 05:22:29AM *  1 point [-]

I find Stevenson-Wolfers (alt alt) a credible source. It says that 50% of first marriages in the US from the 70s lasted 25 years. Marriages from the 80s look slightly more stable. The best graph is Figure 2 on page 37.

Comment author: MileyCyrus 15 February 2012 09:26:35AM 6 points [-]

Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.

I'm white and educated. Those stats don't apply to me.

Also I seem to be able to maintain long term relationships with friends and family members without state enforced contracts.

There is much more cash and property shared in a typical long-term romantic relationship than a typical platonic. I wouldn't share an apartment with my brother unless he signed a state-enforced contract.

Can you explain to me what disadvantages marriage has for a person who wants to raise children with the help of a long-term romantic partner?

Comment author: Konkvistador 15 February 2012 09:29:56AM *  2 points [-]

Can you explain to me what disadvantages marriage has for a person who wants to raise children with the help of a long-term romantic partner?

Can you explain what advantages it has that are exclusive to it?

Considering the ceremony itself is often a major financial burden, shouldn't we seek good reasons in its favour rather than responses to "why not?"? But to proceed on this line anyway, from anecdotal evidence in my circle of acquaintances custody battles seem to be much more nasty and hard on the children among those who are married. The relationships between men and their children are also much more damaged and strained.

Comment author: MileyCyrus 15 February 2012 09:52:15AM 1 point [-]

Can you explain what advantages it has that are exclusive to it?

I'm not trying to debate you, I'm trying to optimize my life. I want to reproduce with a partner who will stick around for decades, at least. If you have a compelling case for why my life would be better without marriage, I'd love to hear it.

But to proceed on this line anyway, from anecdotal evidence in my circle of acquaintances custody battles seem to be much more nasty and hard on the children among those who are married.

Is there any legal precedent that gives a never-married man better access to his children than a divorced man?

Comment author: Konkvistador 15 February 2012 09:57:48AM *  4 points [-]

I'm not trying to debate you, I'm trying to optimize my life. I want to reproduce with a partner who will stick around for decades, at least.

Why do you need to marry someone to live with them for decades and raise children? Are millions of people living happily in such arrangements doing something wrong or sub-optimal? If you think different arrangements are better for different people, why do you think you are the particular kind of person for whom marriage is best?

If you have a compelling case for why my life would be better without marriage, I'd love to hear it.

Can we taboo the word "marriage"?

Is there any legal precedent that gives a never-married man better access to his children than a divorced man?

No. But neither do married men have much better chances of such an outcome.

Comment author: Viliam_Bur 15 February 2012 12:03:39PM 3 points [-]

But neither do married men have much better chances of such an outcome.

There is still a difference between "not much better" and "not better". I do not know the exact number, but if contact with your children is an important part of your utility function, then even increasing the chance by say 5% is worth doing, and could justify the costs of marriage.

(Even if the family law is strongly biased against males, it may still be rational for males to seek marriage.)
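The expected-value reasoning above can be sketched in a few lines. All the numbers here (the 5-point probability bump, the utility assigned to contact with one's children, the cost of marriage) are hypothetical placeholders, not estimates:

```python
# Minimal expected-utility comparison: marriage is worth its cost
# when (increase in probability of keeping contact) x (value of
# that contact) exceeds the expected cost of marrying.

def marriage_worth_it(p_married, p_unmarried, value_of_contact, cost):
    expected_gain = (p_married - p_unmarried) * value_of_contact
    return expected_gain > cost

# A 5-point bump on a heavily weighted outcome can dominate
# a comparatively modest cost...
print(marriage_worth_it(0.55, 0.50, value_of_contact=1_000_000, cost=30_000))  # True
# ...but not an arbitrarily large one.
print(marriage_worth_it(0.55, 0.50, value_of_contact=1_000_000, cost=60_000))  # False
```

The point is only that a small probability shift on an outcome weighted heavily enough in one's utility function can justify a nontrivial fixed cost.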

Comment author: shokwave 15 February 2012 10:59:34AM 6 points [-]

If you have a compelling case for why my life would be better without marriage I'd love to hear it.

I shall call this the "loving, consensual model" of a relationship:

  • Preferring to be with someone if and only if they prefer to be with you,
  • and them preferring to be with you if and only if you prefer to be with them,
  • and you prefer to be with them, satisfying 2,
  • and they prefer to be with you, satisfying 1,
  • gives us a situation of cohabitation, which is sufficient for your stated needs.

Given that you should be indifferent between cohabitation and marriage, and marriage has non-zero costs, why would you prefer marriage?

The reason is insidious, cloaked in the positive connotations of marriage and love, but nevertheless incontrovertible.

You don't prefer to be with someone if and only if they prefer to be with you.

You prefer to be with someone.

Of course, it's illegal to directly enforce this preference. Unlawful imprisonment, and all that. So you'd go with the consensual model, but raise the costs of them preferring to be separate as much as legally possible. Like, say, requiring a contract that is costly and messy to break.

Comment author: MixedNuts 15 February 2012 02:29:57PM 7 points [-]

Yes, if I have various kinds of entanglement and dependence on someone, such as living together, sharing finances and expensive objects like a car, sharing large parts of our social lives, and possibly having children, I don't want them to be able to leave at a moment's notice. This doesn't make me feel especially evil.

Comment author: shokwave 15 February 2012 02:43:26PM 2 points [-]

Really? I'd suggest that what you want is for leaving at a moment's notice not to have positive expected value for them, rather than for them to be restricted, but in any case... the solution is to structure your entanglements and dependence in such a way that this opportunity is available to them if they desire it, not to try to force contracts and obligations onto them in order to restrict them.

Comment author: TheOtherDave 16 February 2012 08:01:08PM *  0 points [-]

There are lots of situations where precommitting to doing something at some future time, and honoring that precommitment at that time regardless of whether I desire to do that thing at that time, leaves me better off than doing at every moment what I prefer to do at that moment.

"Marriage" as you've formulated it here -- namely, a precommitment to remain "with" someone (whatever that actually means) even during periods of my life when I don't actually desire to be "with" them at that moment -- might be one of those situations.

It's not clear to me that the connotations of "insidious" would apply to marriage in that scenario, nor that the implication that marriage is not loving and consensual would be justified in that scenario.

Comment author: smk 16 February 2012 02:44:33PM 0 points [-]

I am legally married because I need the legal and financial benefits that marriage provides in my country. However, in an ideal fantasy world, I wouldn't need those benefits and I wouldn't be legally married. But I would still be married! Just without government involvement. (BTW I have no interest in raising kids.)

It's normal for people to hear "marriage" and think "legal marriage" but I hate that.

Comment author: TheOtherDave 16 February 2012 05:53:14PM 1 point [-]

Can you clarify what you mean by "need," here? In particular, does it mean something different than "benefit from"?

Comment author: ArisKatsaris 15 February 2012 02:42:08PM *  6 points [-]

Why would anyone make a lifetime commitment?

Again in the interests of teaching you to communicate more efficiently: Whenever you say "Why would anyone" when you already know that some people do this (and it's not just some bizarre hypothetical/fictional world you're discussing), this signals that it's mainly a rhetorical question and that you believe these people to be just insane/irrational/not thinking clearly.

So, a question that signals an actual request for information better is "Why do some people make lifetime commitments?"

Especially since more than 50% of all marriages end in divorce, it doesn't seem to be, as it is practised currently, very good at its supposed function.

As opposed to what percentage of non-marriage relationships?

Comment author: Konkvistador 15 February 2012 02:47:32PM *  3 points [-]

As opposed to what percentage of non-marriage relationships?

Good catch. Considering the context of the debate with MileyCyrus, I guess a good enough comparison would be the stability of relationships among people who choose cohabitation with children.

Comment author: Alicorn 15 February 2012 08:26:50AM *  1 point [-]

Watching the stars burn down won't be as much fun without him.

ETA: We're American, so Amerocentric advice is likely to be useful to us.

Comment author: Konkvistador 15 February 2012 08:38:05AM *  6 points [-]

I'm sorry, but this is a nice-sounding, romantic, and useless answer. It was Valentine's Day yesterday; I was bombarded with enough relationship-related cached thoughts as it is.

Or are you saying the other person will literally die or refuse to ever interact with you if you don't "marry" them? Also, do you expect US-government-granted 21st-century marriages to remain enforced by then? Indeed, do you have any evidence whatsoever that a stable relationship can last that long, or is likely to without significant self-modification? In addition, why this crazy notion of honouring exactly one person with such an honour? Isn't it better to wait until group marriages are legalized?

If you don't feel like discussing the issue please acknowledge it directly.

Comment author: Alicorn 15 February 2012 08:46:05AM *  9 points [-]

You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.

That having been said:

Yeah, I know about Valentine's day. That's why this was on my mind.

I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.

My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.

Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.

Comment author: Konkvistador 15 February 2012 09:16:21AM *  8 points [-]

It seems cultural, and perhaps even value, differences are at the root of how this conversation proceeded. OK, I think I understand now. I should have suspected this earlier; I was way too stuck in my local cultural context, where among the young basically only the religious still marry and it is generally seen as an "old-fashioned" thing to do.

Comment author: Konkvistador 15 February 2012 08:47:57AM *  8 points [-]

You're being kind of a jerk.

As I said, I didn't mean to be. I am genuinely curious why in the world someone would do this, because I haven't heard any good reasons in favour of it except that it is "tradition", or else that they'd be living in sin and in fear of punishment by a supernatural entity.

But I do apologize for any personal offence I may have inadvertently caused. I did not mean to imply that either you or your partner (about whom I know nothing!) were particularly unsuited for this arrangement. I was questioning its necessity or desirability in general. I have generally been pretty consistent in questioning the value of this particular legally binding institution, so I would almost certainly have posed the exact same question in response to anyone else making such a request.

I will not apologize for posing uncomfortable questions. I don't want other people respecting my own ugh fields so I generally on LessWrong don't bother avoiding poking into those of others.

Comment author: drethelin 01 June 2012 07:52:28PM -1 points [-]

Picking on you? You responded to him. You're going out of your way to be offended. You can feel free not to explain your viewpoints, but when someone poses a question, don't respond with a throwaway comment and then get annoyed when it gets responded to.

Comment author: Alicorn 15 February 2012 08:17:33AM 0 points [-]

It seems nicer than eloping.

Comment author: Konkvistador 15 February 2012 08:24:33AM 6 points [-]

I didn't mean to be rude, I was genuinely curious about the answer.

Comment author: shminux 15 February 2012 09:43:13PM *  2 points [-]

From your other comments it seems clear that expressing and projecting attachment to this person has positive utility for you, even if it would change little in your relationship. Is this his (I presume) view, as well? Do either/any of you see any obvious negatives in being engaged and eventually married? If not, why wait?

Comment author: Alicorn 16 February 2012 12:21:53AM *  3 points [-]

"Why wait?" is a perfectly reasonable question, but simply answering it "let's not!" probably doesn't yield the best expected value. (It might work perfectly fine. It'd probably work perfectly fine. But it seems likely to be slightly less conducive to everything being perfectly fine than some better-calibrated choice of timing.)

Comment author: MileyCyrus 15 February 2012 08:30:03AM *  0 points [-]

Questions I would consider (privately):

  • If I knew this relationship didn't have long-term potential, would I break it off?
  • What would I need to know about this person in order to become engaged? What would make me break it off?
  • How much am I likely to learn about this person in the next month/six-months/year? How can I learn what I need to know?

Try to avoid living together before marriage.

Comment author: Alex_Altair 15 February 2012 04:41:18PM 5 points [-]

Try to avoid living together before marriage.

That seems like really dangerous advice to me. The article confirms my suspicion:

"We think that some couples who move in together without a clear commitment to marriage may wind up sliding into marriage partly because they are already cohabiting," Rhoades says. "It seems wise to talk about commitment and what living together might mean for the future of the relationship before moving in together, especially because cohabiting likely makes it harder to break up compared to dating."

The solution is not to avoid living together before marriage; the solution is to break up when you know you should.

Comment author: Alicorn 15 February 2012 08:31:10AM 2 points [-]

Try to avoid living together before marriage.

Too late.

Comment author: MileyCyrus 15 February 2012 08:39:55AM -1 points [-]

In that case, remind yourself that the costs of moving your stuff out are trivial compared to the costs of continuing a poor relationship.

If you are looking for marriage, give yourself a deadline for deciding whether to get engaged or break it off. Share your deadline with a brutally honest friend. When the deadline comes, you and your friend can evaluate what you've learned about your relationship and whether it's worth continuing.

Comment author: Alicorn 15 February 2012 08:49:16AM 2 points [-]

Thanks, but this is really not the sort of advice I need. Me-and-the-relevant-person are, you know, in a healthy relationship that consists significantly of conversations. I do not need to do anything cloak and dagger here. I could probably just say "hey let's be engaged RIGHT NOW" and he'd probably say "okay!" after some amount of thought. I'm just trying to figure out if I risk torpedoing something I value by doing that now as opposed to in six months or a year or whatever.

Comment author: smk 15 February 2012 11:05:29PM 3 points [-]

Two years is the time frame one always hears, isn't it? I only did a very quick search but most of what I found seemed to be referring to the same study by Ted Huston, and I didn't even find the study itself. My impression is that 2 years (25 months, one article said) was the average time spent dating before marriage (not before engagement, as you asked) for happy, stable couples, however they judge that. So, not the most helpful.

But, it does kind of match my intuition that one should wait until New Relationship Energy is mostly over before making that decision, and I often read that NRE (though it's usually not called that in these articles) typically lasts about 2 years (this matches my limited experience). Also, I'm monogamous, but I'd guess that even if your NRE with Partner A has faded, NRE with Partner B could spill over onto your other relationship(s) and affect your judgment there too?

Comment author: Alicorn 16 February 2012 12:23:23AM *  3 points [-]

I don't remember hearing 2 years, although it is relevant data that you have done so. One complication is that we started dating two years ago, but were broken up for somewhat more than a year in the middle before getting back together. So we've spent less than two years dating, but about two years conducting an extended empirical observation about whether we prefer being together or not.

Comment author: [deleted] 15 February 2012 09:56:33PM 5 points [-]

Getting married/engaged can involve drama and bad memories, because of the necessity of considering such things as the Rehearsal party, Bachelor party, Bachelorette party, Wedding party, and the Honeymoon.

For instance, due to a slight breakdown in communications, I ended up spending a substantial amount of my Bachelor party being responsible for driving/watching my underaged brother. He's a good little brother and it wasn't any one particular person's fault. But that wasn't part of the "Series of fairy tale events that I had been visualizing in my head."

I can probably think of about ten more anecdotes like that of around that time. That one was actually one of the mild ones.

I'm under the impression many people give bog-standard advice like "the wedding might be a fairy tale, but what about the marriage afterwards?" I would like to point out the reverse perspective: you may have a fairy-tale marriage, but the period around your wedding is likely going to be a series of extremely difficult feats of social event planning.

Actually, I'm curious what the effects of being more familiar with Less Wrong when I got married would have been. I would have had more practice in lowering my expectations and dispelling overly idealistic fantasies based on no evidence, both of which from my current perspective seem like they would have been amazing useful skills to have during wedding planning.

This is not to say you can't have a perfect series of parties topped off by a fantastic honeymoon. That actually does happen. I sincerely wish it happens for you. But if I were to couch this in terms of advice to Michaelos 2008, I would tell him that he should not EXPECT it to happen, because he's never done it before and planning social events was never his or his soon-to-be wife's forte. But honestly I'm not sure he would have had enough context to get that advice.

So in terms of your actual question about doing it now, six months from now, or a year from now, I would say first discuss it in terms of the best way to handle those tricky social feats with other people. In addition, possibly discuss it with the other people as well, or someone you think of as a skilled master at tricky social situations.

Comment author: Alicorn 16 February 2012 12:20:10AM 3 points [-]

Thank you! I will update in favor of getting help from my socially-adept friends, especially married ones. I will also attempt to aim my drive-to-do-overcomplicated-socially-dramatic-things at this challenge when it appears rather than expecting to accomplish it all with more ordinary planning-of-stuff skills.

Comment author: MileyCyrus 15 February 2012 09:07:10AM 2 points [-]

I'm afraid I was projecting my own goals into your situation. Sorry.

I didn't mean to suggest your relationship was unhealthy. All I meant to say was that you shouldn't let logistics become a trivial inconvenience.

Comment author: MixedNuts 15 February 2012 02:23:45PM 0 points [-]

If I knew this relationship didn't have long-term potential, would I break it off?

I'm not sure which answer points to "Engage" here! I would guess "yes", since it allows you to reason "...and I'm still around, which means I believe it has long-term potential, which means we should get engaged". But "no" indicates attachment to the person and a willingness to make the relationship work even if it's rocky.

Comment author: mstevens 15 February 2012 11:08:32AM 1 point [-]

It seems a suspicious coincidence that our puny human ideas of justice would automatically a) be physically possible and b) have reasonable cost, but this is a very popular belief.

Comment author: gwern 16 February 2012 03:57:24AM *  1 point [-]

I don't think it's suspicious at all. The legal tradition deliberately orders its exponents to restrict its scope to enforceable laws that don't provoke too great a backlash. (I know there are legal maxims expressing these concepts, but they just aren't coming to mind for some reason.)

EDIT: Mnemosyne popped up an example maxim: 'Ad impossibilia nemo tenetur.'

Comment author: mwengler 15 February 2012 10:57:26PM 1 point [-]

Puny compared to what?

Comment author: fubarobfusco 16 February 2012 01:43:48AM 0 points [-]

Indeed. There are no ideas of justice on exhibit other than human ones, so calling them "puny" seems like merely saying nasty things about reality.

Comment author: Douglas_Knight 22 February 2012 10:25:54PM 1 point [-]
Comment author: kdorian 19 February 2012 01:26:32AM *  1 point [-]

Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.

Comment author: Grognor 18 February 2012 12:03:06AM 1 point [-]

I'm trying to keep a dream journal, but when I wake up I keep having this cognitive block preventing me from writing my dreams down. It will do anything necessary to prevent me from writing them down. I regret this later every single time. Does anyone know how to prevent this? I don't think I can do it at that time, so it probably has to be something done beforehand, as I go to bed.

Comment author: Alicorn 18 February 2012 12:07:00AM 1 point [-]

Can you speak about your dreams into a tape recorder, and transcribe them later?

Comment author: Douglas_Knight 22 February 2012 10:02:47PM 0 points [-]

I kept a dream journal for about 5 years. I think it (temporarily) increased recall of dreams. The most interesting thing I observed was that the recorded dreams were seasonally concentrated.

Comment author: JGWeissman 18 February 2012 12:14:48AM 0 points [-]

What kind of cognitive block? Do you not know what to write? Do you not think about recording your dream at the appropriate time? Do you feel like writing about your dream would be a bad thing?

Comment author: Grognor 18 February 2012 12:29:45AM 2 points [-]

The last one, sort of. It usually takes the form of, "You don't want THAT to be in your dream log, do you? You'd better skip it just this once. It's okay, you'll write down the next one. That dream sucked anyway, and you're already forgetting it besides. Also don't you have better things to do?"

All, of course, with the low-level realization that I know all of this is bullshit but I obey it anyway.

Comment author: Konkvistador 15 February 2012 11:32:10AM *  1 point [-]

Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.

Agree or disagree?

Comment author: Viliam_Bur 15 February 2012 12:17:25PM 2 points [-]

Even if you can't observe them, can you somehow logically infer their existence and can you influence them? If no, then thinking about them is just wasting time.

It becomes a problem only if you cannot observe them, but you can influence them, and despite lack of observation you can make at least some probabilistic estimates about the effect of your influence.

Comment author: Grognor 15 February 2012 07:16:20PM 1 point [-]

Agree, but disagree with the assertion that you can't observe them. (If that's not an assertion, then whatever.)

Comment author: [deleted] 15 February 2012 08:47:35AM 1 point [-]

Do con-artistry and the Dark arts share similar strategies? If so any in particular?

Comment author: billswift 15 February 2012 10:18:14AM 1 point [-]

They use the same strategies, only the goals are (or at least can be) different. For a good overview, see Robert Greene's The 48 Laws of Power.

Comment author: MileyCyrus 15 February 2012 05:09:13PM 4 points [-]

Counterpoint: 48 Laws reads like cheap astrology.

Comment author: faul_sname 15 February 2012 09:22:13PM *  2 points [-]

There's non-cheap astrology?

Comment author: J_Taylor 16 February 2012 01:33:56AM 8 points [-]

If you're interested, I would be willing to sell you some.

Comment author: JMiller 11 January 2013 03:55:24AM 0 points [-]

I was told this would be a more appropriate place than the discussion board for this post:

I'm taking a class on heuristics and biases. In this class we have the option to read one of two "applied" books on the subject. The books are "The Panic Virus: A True Story of Medicine, Science, and Fear" by Seth Mnookin and "Sold on Language: How Advertisers Talk to You and What This Says About You" by Julie Sedivy and Greg Carlson.

I'd like to know if anyone has read one or both of these books, and how well or poorly they mesh with less wrong rationality.

Thanks, Jeremy

Comment author: cousin_it 25 February 2012 03:42:58PM 0 points [-]

I want to read the paper "Three theorems on recursive enumeration" by Friedberg. It doesn't seem to be available on the open web. Can someone with journal access help me out?

Comment author: radical_negative_one 25 February 2012 05:43:41PM 1 point [-]

Sent.

Comment author: cousin_it 25 February 2012 05:54:02PM 0 points [-]

Received. Thanks a lot!

Comment author: RichardKennaway 17 February 2012 02:14:36PM *  0 points [-]

In this comment I pegged a web site as being nothing but a link farm, filled with ads and worthless "content". A couple of ideas occurred to me.

The web site looks to me as if it was actually written by human beings, but computer-generated prose of this sort might not be far off. The better the programmers get at simulating humans (and the spammers are certainly trying), the better humans will have to become at not being mistaken for computers. If you sound like a spambot, it doesn't matter if you really aren't, you'll get tuned out.

And I wonder how well different people do on this adult-level "theory of mind" test? Here's another: how long does it take you to discern the true nature of this book?

Comment author: mwengler 16 February 2012 03:37:55PM -1 points [-]

Presumably, the problems of friendly or unfriendly AI are just like the problems of friendly or unfriendly NI (Natural Intelligence). Intelligence seems more an agency, a tool, and friendliness or unfriendliness a largely orthogonal consideration. In the case of humans, I would imagine our values are largely dictated by "what worked." That is, societies and even subspecies with different values would undergo natural selection pressures proportional to how effective the values were at adding to survival and thrivance of the group possessing them.

Suppose, as this group generally does, that self-modifying AI will have the ability to modify itself by design, and that one of its values it designs towards is higher intelligence. Is such an evolution constrained by evolution-like pressures or is it not?

The argument that it is not is that it is changing so fast, and so far ahead of any conceivable competition, that from the point of view of the evolution of its values, it is running "open loop." That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and only follow either some sort of internal logic of value-change as intelligence increases, or else follow no logic at all, going in some sense on a "random walk" through possible values. That is, with the quickly increasing intelligence, the values of the FOOMing AI are nearly irrelevant to its overall effectiveness, and therefore totally irrelevant to determining whether it will survive and thrive going up against humans. Its intelligence is sufficient to guarantee its survival; its values get a free ride.

But is this right? Does a FOOMing AI really look like a single intelligence ramping up its own ability? This is certainly NOT the way evolution has gone about improving the intelligence of our species. Evolution tries many small modifications and then does natural experiments to see which ones do better and which do worse. By attrition it keeps the ones that did better and uses these as a base for further experiments.

My own sense of how I create using my intelligence is that I try many different things. Many are tried purely in the sandbox of my own brain, run as simulations there, and only the more promising kept for further testing and development. It seems to me that my pool of ideas is an almost random noise of "what ifs" and that my creative intelligence is the discrimination function filtering which of these ideas are given more resources and which are killed in the crib.

So intelligent creation seems to me to be very much like evolution, with competition.

Might we expect an AI to do something like this? To essentially hypothesize various modifications to itself, and then to test the more promising ones by running them as simulations, with increasing exactitude of the sims as the various ideas are winnowed down to the best ones?

Might an AI determine that the most efficient way to do this is to actually have many competing versions of itself constantly running, essentially, against each other? Might the FOOMing of an AI look a lot like the FOOMing of NI, which is what is going on on our planet right now?

I really don't know what the implications of this point of view are for FAI. I don't know whether this point of view is even at odds in any real way with SIAI's biggest worries.

I do wonder whether humanity is meant to survive when, in some sense, whatever comes next arrives. In one picture, the dinosaurs did not survive their design of mammals. (They designed mammals by putting a lot of selection pressure on mammals). In another picture, the dinosaurs did survive their design of mammals, but they survived by "slightly modifying" themselves into birds and lizards and stuff.

The next step is electronic-based intelligence which is kick-started on its evolution by us, just as we were kick-started by plants (there are NO animals until you have plants), and plants were kick-started by simpler life that exploited less abundant but more available energy in chemical mixes. Or the next step might be something that arrives through some natural path we are not considering carefully, either aliens invading, or a strong psi arising among the whales so that their intelligence grows enough to overcome their lack of digits.

Whatever the next step, if its presence has the human race survive and thrive by doing the equivalent of what turned dinosaurs into birds, or turned wolves into domesticated dogs, does that count as Friendly or Unfriendly?

And is there really any point at all to fighting against it?

Comment author: Gabriel 16 February 2012 06:25:17PM *  2 points [-]

That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and only follow either some sort of internal logic of value-change as intelligence increases, or else follow no logic at all, go in some sense on a "random walk" through possible values.

The AI is not supposed to change its values, regardless of whether it is powerful enough to realize them. Values are not up for grabs. Once the AI has some values it either wins and reshapes reality according to them or loses. Changing the values is one form of losing. It seems that almost anything that counts as a value system would object to changing an agent subscribing to that system into an agent using something else, so the AI won't follow any internal logic of value-change (unless some other agent forces it), and if it changes its values it will be by mistake (so closer to a random walk). Part of the idea of FAI is to build an AI that won't make those mistakes.

My own sense of how I create using my intelligence is that I try many different things. Many are tried purely in the sandbox of my own brain, run as simulations there, and only the more promising kept for further testing and development. It seems to me that my pool of ideas is an almost random noise of "what ifs" and that my creative intelligence is the discrimination function filtering which of these ideas are given more resources and which are killed in the crib.

The ideas coming into your awareness are very strongly pre-filtered; creativity is far from random noise. For one, the ideas are all relevant and somehow extrapolated from your knowledge of the world. Some of them might seem stupid, but it's only because of the pre-selection -- they never get compared to the idea of 'blue mesmerizingly up the slightly irreverent ladder, then dwarf the pegasus with the quantum sprocket' (and even this still makes a lot of sense compared to most random messages).

Whatever the next step, if its presence has the human race survive and thrive by doing the equivalent of what turned dinosaurs into birds, or turned wolves into domesticated dogs, does that count as Friendly or Unfriendly?

It counts as failure to preserve humanity. An AI that does that is probably unfriendly (barring coercion by external powerful agents. Eliezer actually wrote a story about such a scenario, without AIs though.)

And is there really any point at all to fighting against it?

Sure seems like it.

Comment author: mwengler 16 February 2012 07:10:05PM 1 point [-]

The ideas coming into your awareness are very strongly pre-filtered; creativity is far from random noise.

I agree, but I don't think that changes my conclusions. In teaching humans to be more creative, they are taught to pay more attention for a longer time to at least some of the outlier ideas. Indeed, a lot of times I think the difference between the intellectually curious and creative people I like to interact with and the rest is that the rest have pre-decided a lot of things, turned their thresholds for "unreal" ideas coming into consciousness up higher than I have turned mine. Maybe they are right more often than I am, but the real reason they do this is that their ancestors, who out-survived a lot of other people trying a lot of other things, did that same level of filtering, and it resulted in winning more wars, having more children that survived, killing more competitors, or some combination of these and other results that constitute selection pressures.

An AI in the process of FOOMing necessarily has the capacity to consider a lot more ideas in a lot more detail than we do; what makes you think that AI will constrain itself by the values it used to have? Unless you think we have the same values as the first self-replicating molecules that began life on earth, the FOOMing of Natural Intelligence (which has taken billions of years) has been accompanied by value changes.

Comment author: mwengler 16 February 2012 07:02:32PM 0 points [-]

The AI is not supposed to change its values, regardless of whether it is powerful enough to realize them. Values are not up for grabs. Once the AI has some values it either wins and reshapes reality according to them or loses.

A remarkably strong claim.

My initial reaction is that humanity's values have certainly changed over time. I think it would require some rather unattractive mental gymnastics to claim that people who beat their children for their own good and people who owned slaves and people who beat, killed, and/or raped either slaves or other people they had vanquished as their right "really" had the same values we currently have, but just hadn't really thought them through, or that our values applied in their world would have lead us to similar beliefs about right and wrong.

I had even thought my own values had changed over my lifetime. I'm not as sure of that, but what about that?

Certainly, it seems, as the human species has evolved its values have changed. Do chimpanzees and bonobos have different values than we do, or the same? If the same, I'd love to see your mental gymnastics to justify that, I would expect them to be ugly. If different, does this mean that our common ancestor has necessarily "lost," assuming its values were some intermediate between ours, chimps, and bonobos, and all of its descendants have different values than it had?

As I understand the word values, our values have changed over time, different groups of humans have some different values from each other, and if there is a "kernel" of common values in our species, that this kernel most likely differs from the kernel of values in homo neanderthalis or other sentient predecessors of modern homo sapiens.

So if NI (Natural Intelligence) in its evolution can change values (can it?) with generally broad consensus that "we" have not lost in this process, why would an AI be precluded from futzing with its values as it worked on self-modifying to increase its intelligence?

Comment author: APMason 16 February 2012 07:15:04PM 0 points [-]

Because, if the AI worked, it would consider the fact that if it changed its values, they would be less likely to be maximised, and would therefore choose not to change its values. If the AI wants the future to be X, changing itself so that it wants the future to be Y is a poor strategy for achieving its aims - the future will end up not-X if it does that. Yes, humans are different. We're not perfectly rational. We don't have full access to our own values to begin with, and if we did we might sometimes screw up badly enough that our values change. An FAI ought to be better at this stuff than we are.

Comment author: mwengler 16 February 2012 07:28:04PM 0 points [-]

Assuming an AI cannot employ a survival strategy which NI such as ourselves are practically defined by seems extremely dangerous indeed. Perhaps even more importantly, it seems extremely unlikely that an AI which has FOOMed way past us in intelligence would be more limited than us in its ability to change its own values as part of its self-modification.

The ultimate value, in terms of selection pressures, is survival. I don't see a mechanism by which something which can self modify will not ultimately wind up with values that are more conducive to its survival than the ones it started out with.

And I certainly would like to see why you assert this is true, are there reasons?

Comment author: APMason 16 February 2012 08:10:07PM 1 point [-]

Yes, reasons:

The AI is not subject to selection pressure the same way we are: it does not produce millions of slightly-modified children which then die or reproduce themselves. It just works out the best way to get what it wants (approximately) and then executes that action. For example, if what the AI values is its own destruction, it destroys itself. That's a poor way to survive, but then in this case the AI doesn't value its own survival. If there were a population of AIs and some destroyed themselves, and some didn't, then yes there would be some kind of selection pressure that led to there being more AIs of a non-suicidal kind. But that's not the situation we're talking about here. A single AI, programmed to do something self-destructive, will not look at its programming and go "that's stupid" - the AI is its programming.

it seems extremely unlikely that an AI which has FOOMed way past us in intelligence would be more limited than us in its ability to change its own values as part of its self modification.

I think "more limited" is the wrong way to think of this. Being subject to values-drift is rarely a good strategy for maximising your values, for obvious reasons: if you don't want people to die, taking a pill that makes you want to kill people is a really bad way of getting what you want. If you were acting rationally, you wouldn't take the pill. If the AI is working, it will turn down all such offers (if it doesn't, the person who created the AI screwed up). It's we who are limited - the AI would be free from the limit of noisy values-drift.

Comment author: TimS 16 February 2012 08:16:20PM *  0 points [-]

Humans have changed values to maximize other values (such as survival) throughout history. That's cultural assimilation in a nutshell. But some people choose to maximize values other than survival (e.g. every martyr ever). And that hasn't always been pointless - consider the value to the growth of Christianity created by the early Christian martyrs.

If an AI were faced with the possibility of self-modifying to reduce its adherence to value Y in order to maximize value X, then we would expect the AI to do so only when value X was "higher priority" than value Y. Otherwise, we would expect the AI to choose not to self-modify.

Comment author: mwengler 16 February 2012 07:23:11PM -1 points [-]

It counts as failure to preserve humanity. An AI that does that is probably unfriendly (barring the coercion by external powerful agents. Eliezer actually wrote a story about such scenario, without AIs though.)

Interesting. I think I may even agree with you. In that story each race would need to conclude that the other races are "unfriendly". So Eliezer has written a story in which all the NATURAL intelligences (except us of course) are "unfriendly," and in which a human would need to agree that from the point of view of the other intelligent races, human intelligence was "unfriendly."

Perhaps all intelligences are necessarily "unfriendly" to all other intelligences. This could even apply at the micro level: perhaps each human intelligence is "unfriendly" to all other human intelligences. This actually looks pretty real and pretty much like what happens in a world where survival is the only enforced value. Humans have the fascinating conundrum that even though we are unfriendly to the other humans, we have a much better chance of surviving and thriving by working with the other humans. The alliances and technical abilities and so on are, if not balanced across all humans and all groups, at least balanced enough across many of them that the result is a plethora of competing / cooperating intelligences where the jury is still out on who is the ultimate winner. Breeding into us the ability (the value?) to see "others" as our allies against "the enemies" clearly has resulted in collective efforts of cooperation that have produced quickly cascading production ability in our species. "We" worried about the Nazis FOOMing and winning, we worried the Soviets might FOOM and win. Our ancestors fought against every tribe that lived 5 miles away from them, before cultural evolution allowed them (us) to cooperate in groups of hundreds of millions.

So in Eliezer's story, three NIs have FOOMed and then finally run into each other. And they CANNOT resist getting up in each other's grills. And why not? What are the chances that the final intelligence, if only one is left, will have been one which was shy about destroying potential competitors before they destroyed it?

Comment author: MileyCyrus 15 February 2012 08:05:36AM 0 points [-]

What's the best way to find out about scientific experiments before they are conducted?

Comment author: AlexSchell 15 February 2012 12:20:07PM 2 points [-]

I think ClinicalTrials.gov might be what you're looking for. For anything less than human clinical trials, you'd likely need inside knowledge of the organization conducting the study/experiment.

Comment author: Morendil 15 February 2012 10:19:13PM 6 points [-]

Psi powers.

Comment author: shminux 15 February 2012 09:31:02PM 0 points [-]

This question seems a bit vague. What kind of experiments? Why do you want to know about them in advance?

Comment author: MileyCyrus 16 February 2012 02:30:05AM 0 points [-]

What kind of experiments?

Mostly psychology. I'm particularly interested in experiments that would have political implications.

Why do you want to know about them in advance?

Because I want to be able to look at them and decide what kind of results would support a theory versus undermine it, before I (and the world) become biased by the actual results.

Comment author: shminux 16 February 2012 06:38:20AM 1 point [-]

Mostly psychology. I'm particularly interested in experiments that would have political implications.

Interesting. Maybe you can give examples of past experiments that had "political implications" and what theory they may have falsified.

Comment author: mwengler 16 February 2012 06:47:39PM -1 points [-]

Having read a lot of philosophers talking of morality here, and having read a lot of economists talking of utility, I think I will concentrate on the economists.

I was going to say I think my utility is maximized by spending no more time on the philosophers and using that on economists instead. But of course someone who chose the philosophers might say she believes the moral thing to do is to study the morality instead of the utility.

In physics sometimes you get to a point where your calculation involves subtracting an infinite quantity from another infinite quantity in order to reach a finite result. Probably not often, but my recollection is there is a quantum electrodynamics calculation of the self-energy of an electron, or some such, where the only way forward is to pretend the difference of these two infinities is zero, and then from there you get results which are highly useful in predicting the real world's behavior. I think in a lot of these moral utility arguments, if you can't make the argument work using numbers of a trillion people or less, you are too far out of anything real to have any faith at all that your arguments mean anything at all about the real world.
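For what it's worth, the renormalization trick half-remembered above looks schematically like this (my own gloss, with a cutoff Λ standing in for "infinity"; not the actual QED derivation):

```latex
% Physical electron mass as the sum of two cutoff-dependent pieces,
% each of which diverges as the cutoff \Lambda is removed:
m_{\mathrm{phys}}
  = \underbrace{m_0(\Lambda)}_{\text{bare mass}}
  + \underbrace{\delta m(\Lambda)}_{\text{self-energy correction}},
\qquad
\lim_{\Lambda \to \infty}
  \bigl[\, m_0(\Lambda) + \delta m(\Lambda) \,\bigr]
  = m_{\mathrm{phys}} \quad \text{(finite)}.
```

The bare mass is tuned so that the two divergences cancel; only their finite difference is ever compared with experiment.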

Does anybody know of any case in human history where some great improbable wrong was averted by people being concerned about improbable events that require the ^ character to be compactly expressed?

Comment author: endoself 16 February 2012 08:08:42PM 0 points [-]

Does anybody know of any case in human history where some great improbable wrong was averted by people being concerned about improbable events that require the ^ character to be compactly expressed?

I think you'd be better off looking for cases where some great improbable wrong occurred since no one was concerned about improbable events. That said, human history requires some very large numbers, but not any ^s.
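As a rough illustration of that scale gap (the ballpark figures below are my own, not from the thread):

```python
# Even the largest quantities that show up in human history fit comfortably
# in plain scientific notation, with no towers of '^' required.
humans_ever_born = 10**11    # ~100 billion people, a common demographic estimate
seconds_of_history = 10**13  # ~300,000 years of Homo sapiens, in seconds

# By contrast, a number that genuinely needs '^' to write compactly:
googol = 10**100
print(len(str(googol)))  # 101 digits: a 1 followed by 100 zeros
```

Anything needing iterated exponentiation (3^^^3 and friends) is astronomically further out still, which is the parent comment's point.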