Comment author: gjm 22 May 2015 08:13:40AM 3 points [-]

One lesson your hypothetical 65-year-old should take away from both graphs is that they can't afford to spend what they're proposing to spend. The question they actually want answered (or ought to) isn't "what's my probability of lasting 20 years with these numbers and a given investment allocation?", it's more like "given this much to invest and a given investment allocation, how much can I take out every year and still have, say, a 98% chance of not running out in 20 years?".

(Ideally we'd do this with a more brutal simulation that allows for occasional events substantially better or worse than in the historical record. And ideally the simulation would allow you to say not "take $X out every year" but "take out between $X and $Y every year, depending on the outcome of simulations performed at that point".)

Unfortunately the Vanguard simulator doesn't work on the computer I'm currently sat at, so I can't tell how stocks and bonds compare according to a metric of that sort. I firmly expect that stocks will still win, for what it's worth.

In response to comment by gjm on "Risk" means surprise
Comment author: gjm 22 May 2015 05:30:06PM 0 points [-]

Nope, actually stocks don't still win when what you want is to maximize withdrawals subject to keeping Pr(run out of money) very small. The 98% level for bonds only is between $15k and $16k; for 50:50 it's a little better, somewhere between $16k and $17k; for stocks only it's about $12k.

(Another thing the simulator notably doesn't let you do: adjust your portfolio allocation over time.)
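
A minimal sketch of the kind of search gjm describes - find the largest annual withdrawal whose simulated chance of surviving 20 years stays at or above 98% - assuming a simple bootstrap over made-up annual returns. The return figures, the $300k starting balance and the function names are illustrative placeholders, not the Vanguard simulator's data or method:

```python
import random

# Illustrative annual real returns (made-up placeholders, not the Vanguard
# simulator's historical data set).
STOCK_RETURNS = [0.21, -0.12, 0.08, 0.15, -0.37, 0.26, 0.05, 0.18, -0.04, 0.11]
BOND_RETURNS = [0.03, 0.05, -0.01, 0.04, 0.07, 0.02, 0.01, 0.06, 0.00, 0.03]

def survives(start, withdrawal, stock_frac, years=20):
    """One simulated retirement: withdraw each year, then apply a bootstrapped return."""
    balance = start
    for _ in range(years):
        balance -= withdrawal
        if balance < 0:
            return False
        r = (stock_frac * random.choice(STOCK_RETURNS)
             + (1 - stock_frac) * random.choice(BOND_RETURNS))
        balance *= 1 + r
    return True

def max_safe_withdrawal(start, stock_frac, target=0.98, trials=2000):
    """Largest annual withdrawal (to the nearest $1,000) whose estimated
    probability of lasting 20 years is at least `target`."""
    best = 0
    for w in range(1000, start, 1000):
        success_rate = sum(survives(start, w, stock_frac)
                           for _ in range(trials)) / trials
        if success_rate >= target:
            best = w
        else:
            break  # larger withdrawals can only do worse
    return best

for frac in (0.0, 0.5, 1.0):  # bonds only, 50:50, stocks only
    print(f"stock fraction {frac:.1f}: ${max_safe_withdrawal(300_000, frac):,}/year")
```

Drawing each year's return independently ignores sequence correlation and, as noted above, admits nothing worse than the assumed record; a "more brutal" simulation would fatten the tails and allow the withdrawal rule itself to adapt over time.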

Comment author: TheSurvivalMachine 22 May 2015 09:09:06AM 0 points [-]

No worries! I appreciate that you ask questions. First I will make some clarifications about the four points in your previous comment.

  • The proposition: "People and/or other animals actually act so as to maximize genetic fitness" is, as you stated, not true. There is no disagreement about this.

  • We do not "get from there to" ethical fitnessism. In fact, we do not violate Hume's law at all, i.e., we do not deduce any normative ethical statement from a set of only factual statements.

  • The statement that: "'Acting so as to maximize genetic fitness' is a principle that approximates actual people's and cultures' ethical systems, but unlike them has some kind of scientific underpinning" is also neither true nor claimed by ethical fitnessism.

  • The norm that: "We should act so as to maximize genetic fitness" is not really a fitnessist norm, since fitnessism prescribes actions for individuals (and not so much for "us") and always specifies whose fitness and what kind of fitness we are talking about, namely the behavioural fitness of the individual in question. Instead, please see my original response to DeVliegendeHollander’s comment on the definition of ethical fitnessism and the rightness criterion based on Dawkins’s central theorem of the extended phenotype. The question about the word ‘should’ was addressed in my previous comment to you.

I hope this made everything clearer.

Since I am a fitnessist you should conclude that I:

  • approve of actions that increase my behavioural fitness, and disapprove of ones that do not;

    • care nothing about other ethical principles, unless they match up with this;

  • tend to act in ways that increase my behavioural fitness, even if doing so makes my own life markedly less pleasant;

  • tend to try to get other people to increase my own behavioural fitness.

Now to the examples:

  • Ethical fitnessism is actually not about having as many children as possible; rather it is about the long-term survival of one's behavioural genes. The long-term survival of an individual’s behavioural genes can be achieved in many ways, especially considering that an individual shares behavioural genes with many other individuals. For instance, all humans are closely related to all (and socially dependent on many) other humans, making humans exceptionally important to each other, but even other species are important due to our common heritage and shared behavioural genes. So in your example you should also consider the harm and possible injury caused to the woman and, especially, the increased risk to your female relatives, friends and children. Socially and reproductively successful humans, both men and women, share a common interest in curbing violence and upholding the rule of law. Moreover, fitnessism does not tell us to simply follow the instincts which have evolved due to natural selection. Since we humans have radically changed our environment with the emergence of modern society and technology, and since we have such a decisive impact on the future of life on Earth, we have to think much further ahead, and much more broadly, than other animals. To exploit other individuals for selfish short-term gains at the expense of what we hold dear and valuable in the long term is morally wrong. Rather than maximizing the number of her own offspring, a fitnessist acts so as to increase the probability of the long-term existence of the body of organic life which we are all part of and related to.

  • It simply is not "the only way" for you to pass on your genes, since you are not an alien. As explained in the previous example you can support the survival of your behavioural genes in other related individuals.

  • Of course you should care! You are related to every other human on the planet. But if you instead truly are an alien and therefore are genetically unrelated to life on Earth, you should still try to survive for as long as possible, because that is the behaviour which is favoured by natural selection, since your genes are inside your own body.

Ethical fitnessism states that an individual should maximize the behavioural fitness of this individual, not in the short term but over endless time (if this is the behaviour which tends to be maximized as a consequence of natural selection). Whether my behavioural fitness is in conflict with yours is a matter of the extent to which we share behavioural genes. Humans share genes to a great extent with each other, so I believe that humans’ indexical Darwinian self-interests coincide more than they are in conflict. This leads to a decision method which is not treated in the original link, namely fitnessist contractarianism, which is universalizable. Fitnessist contractarianism is explained in “Ethical Fitnessism. The Ethic of the Fittest Behaviour”, which is in Swedish, I am afraid. A short explanation is that it is a method for human social and political decision making when humans are acting in accordance with their own Darwinian self-interest. To find common ground for social and political decision making for closely related individuals, such as humans, is clearly possible. For example, avoiding nuclear war seems to benefit each individual’s behavioural fitness, just as the common prosperity of humans seems to do.

As for ethical fitnessism being a moral theory, I think your argument is based on meta-ethics, which is why I recommend that you read the original blog post that I linked to, giving extra attention to part 2: “The Non-universalizable Ethic of the Predator and the Quarry”. Rightness criteria are indexical, but decision methods are universalizable.

Of course this is related to the scientific question of whether “objective” moral values exist. I believe that no such values exist, since no such values have ever been observed, nor are they necessary to explain anything in the natural world. Using Occam’s razor, I deny their existence. Instead I believe that “subjective” moral values exist, since I believe that such values are observed every day, for example when studying the behaviour of humans or other living organisms.

Regarding your last question, ethical fitnessism states that an individual should maximize his or her own behavioural fitness. Ethical fitnessism prescribes how an individual should act, which is why it is a normative ethical statement. But the statement:

People (and other animals) have a tendency to act in ways that in evolutionary history have resulted in more copies of the genes they carry.

is a factual statement.

Comment author: gjm 22 May 2015 01:29:38PM 0 points [-]

If you hold a position that affects only your own actions and your opinions of them, I don't see much reason for calling it an ethical system. I read the blog post; that second section defends acting on non-universalizable principles, but doesn't, so far as I can see, defend thinking of them as moral principles, and that's what I'm casting doubt on.

For clarity: I am also a moral nonrealist, and my doubts about the moral-theory-ness of fitnessism aren't because it doesn't involve a claim that its values are Objectively Right And Good. Rather: I think a moral theory is something that guides moral judgements by an adherent, and one feature of moral judgements is that they are applied to other people as well as to oneself. Something that affects only its bearer's own behaviour I would call a "preference" or a "motivation" or something of the kind, even if it gets expressed using the word "should".

I agree that if the principle is "maximize long-term number of copies of genes that influence my behaviour" then the counterintuitive consequences I described don't clearly follow. (I'm not sure they don't follow, though. The answer may depend on exactly what you're prepared to count as a "behavioural gene".)

It's true that most of my genes are shared with other human beings, even ones I wouldn't normally think of as related to me. But it's also true that a lot of my genes are shared with, say, pomegranates. Your restriction to "behavioural" genes doesn't (I think) make that problem go away; only in popular magazine articles are there genes for behaviours in any very strong sense; how sure are you that there are no genes you share with (some or all) pomegranates that have an effect on your behaviour? If it turns out that there are some, would you start regarding it as an important obligation to increase the number of pomegranates (at a rate, perhaps, of 1000 pomegranates per human life)?

I suspect that if we pay attention (as you do) to the very long term, it actually matters rather little in practice what we care about there -- in particular, it's likely that much the same actions now maximize long-term human happiness, long-term human numbers, long-term number of books-or-equivalent written, etc. (A similar thing happens in computer game-playing: the further ahead you look, the less the details of your evaluation function matter.) So it may be hard to distinguish between fitnessism and almost anything else, in terms of the actual decisions it provokes...

Comment author: Dahlen 22 May 2015 10:53:47AM *  1 point [-]

Same here; in fact I've been keeping an eye on that account for a while, and noticed when you expressed your complaints about downvoting in a discussion with him recently. There's no apparent sign of the sheer downvote rampages of old so far; if we're right, he's been a little more careful this time around about obvious giveaways (or maybe it's just the limited karma)... Alas, old habits die hard.

I'm not even sure anyone can do anything about it; LessWrong is among those communities that are vulnerable to such abuses. Without forum rules relating to member conduct, without a large number of active moderators, without a culture of holding new members under close scrutiny until they prove themselves to bring value to the forum, but with a built-in mechanism for anyone to disrupt the forum activity of anyone else...

Comment author: gjm 22 May 2015 11:54:11AM 1 point [-]

It's interesting that you're confident of which account it is; I didn't say. I had another PM from another user, naming the same account (and giving reasons for suspicion which were completely different from mine). So: yeah, putting this all together, either it's him again or there are a whole bunch of similarities sufficient to ring alarm bells independently for multiple different users.

I don't see any need for anyone to swing the banhammer again unless he starts being large-scale abusive again, in which case no doubt he'll get re-clobbered. Perhaps by then someone will have figured out how to get his votes undone. (In cases where someone's been banned for voting abuses, come back, and done the same things again, I would be in favour of simply zeroing out all votes by the revenant account.)

Comment author: OrphanWilde 21 May 2015 08:43:24PM *  -5 points [-]

I don't care about my karma points. If I did I wouldn't create these kinds of posts, which aggravate people. All you've done is vent some of your evident anger. If I cared about my karma points, I wouldn't create more comments, such as this one, for you to downvote. Feel free, just try not to get yourself banned for abusing it.

Incidentally, the purpose of this post was to teach, since you state that you don't understand.

ETA: The phrasing of that last sentence comes off as more "smug" than I intended. Read it for its literal value, if you would.

Comment author: gjm 22 May 2015 07:54:51AM 1 point [-]

I don't care about my karma points

Oh, sorry, I should have made the following explicit: The point isn't to discourage you by making you suffer. It's to discourage other people from similar negative-value actions.

Comment author: OrphanWilde 21 May 2015 09:48:41PM -2 points [-]

And how effectively do you think you can teach, having just boasted of how you wasted your readers' time being deliberately stupid at them?

Depends on whether they're looking to learn something, or looking for reasons not to learn something.

You might say: Aha, you learned my lesson. But, as it happens, I already knew.

Actually, you are entirely correct: You already knew. I did not, in fact, "mug" you. The mugging was not in the wasting of the readers' time; that was merely what was lost. It was a conceptual mugging. Every reader who kept insisting on fighting the hypothetical was mugged with each insistence. In real life, they would have kept sticking to the same "I must have planned this wrong" line of reasoning - your first response was that this was the wrong line of reasoning. Which is why I brought up mugging, and focused on that instead; it was a better line of conversation with you.

But my hypothetical situation was no worse than most hypothetical situations; I was simply more honest about it. Hypothetical situations are usually created to manufacture no-win situations for specific kinds of thought processes. This was no different.

Comment author: gjm 21 May 2015 10:11:11PM 2 points [-]

looking to learn something, or looking for reasons not to learn something.

Well, all I can say other than appealing to intuitions that might not be shared is this: I was looking to learn something when I read this stuff; I was disappointed that it seemed to consist mostly of bad thinking; after your confession I spent a little while reading your comments before realising that I couldn't and shouldn't trust them to be written with any intention of helping (or even not to be attempts at outright mental sabotage, given what you say this is all meant to be analogous to), at which point I gave up.

(If you're wondering, I'm continuing here mostly because it might be useful to other readers. I'm not very hopeful that it will, though, and will probably stop soon.)

Comment author: OrphanWilde 21 May 2015 08:43:24PM *  -5 points [-]

I don't care about my karma points. If I did I wouldn't create these kinds of posts, which aggravate people. All you've done is vent some of your evident anger. If I cared about my karma points, I wouldn't create more comments, such as this one, for you to downvote. Feel free, just try not to get yourself banned for abusing it.

Incidentally, the purpose of this post was to teach, since you state that you don't understand.

ETA: The phrasing of that last sentence comes off as more "smug" than I intended. Read it for its literal value, if you would.

Comment author: gjm 21 May 2015 09:13:04PM 2 points [-]

your evident anger

No, actually, not angry. I just think you did something of net negative value for crappy reasons.

I don't think I'm in any danger of getting banned for downvoting things that are blatantly (and self-admittedly) of negative value.

the purpose of this post was to teach

And how effectively do you think you can teach, having just boasted of how you wasted your readers' time being deliberately stupid at them? What incentive does anyone have to pay attention this time around?

(You might say: Aha, you learned my lesson. But, as it happens, I already knew.)

Comment author: Lumifer 21 May 2015 08:25:42PM 2 points [-]

at having been able to "mug" some other people in the discussion here

The usual verb is "to troll".

Comment author: gjm 21 May 2015 09:11:17PM 1 point [-]

I know, but OrphanWilde chose "mug" and I played along.

Comment author: gjm 21 May 2015 08:16:36PM *  3 points [-]

I don't understand the point of this.

I mean, I get that OrphanWilde is feeling very smug at having been able to "mug" some other people in the discussion here, and that this mugging is meant to be analogous both to the situation (deliberately incoherently) described in the article and to things that happen in real life.

But ... so what? Are we meant to be startled by the revelation that sometimes people exploit other people? Hardly.

And what seems to be one of the points you say you're trying to make -- that when this happens we are liable to assume it's our own fault rather than the other person's malice -- seems to me to be very ill supported by anything that's happened here. (1) I don't see other people assuming that the confusion here is their own fault, I see them trying to be tactful about the fact that it's yours. (2) I would expect posters here to be more willing to give benefit of the doubt than, e.g., in a business situation where they and the other party are literally competing for money. (3) You say "Here, I mugged you for a few seconds or maybe minutes [...] in real life, that would be hours, weeks, months" -- but I see no reason to expect people to be orders of magnitude slower in "real life" than here.

Further, you didn't in fact exploit anyone because (unless you're really malicious and actually enjoy seeing people waste time to no purpose, in which case fuck you) you didn't gain anything. You (at most) just wasted some people's time. Congratulations, but it's not like that's terribly hard to do. And, perhaps, you just made a bunch of people that little bit less inclined to be helpful and understanding to some confused-seeming guy on Less Wrong in the future.

I'm downvoting your post here and your replies in the comments, and would encourage other readers to do likewise. Making Less Wrong incrementally less useful in order to be able to preen about how you exploited people is not behaviour I wish to encourage here, and I see no actual insight (either overtly expressed or implicit) that counterbalances your act of defection.

[EDITED to add: OH HA HA I JUST MUGGED YOU AREN'T I CLEVER]

Comment author: OrphanWilde 21 May 2015 02:17:40PM *  -2 points [-]

OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation?

Yes. Three cases:

First, the trivial case: You have no choice about whether or not to continue, and there are no alternatives.

Second, the slightly less trivial case: Every slippage is entirely unrelated. The project wasn't poorly scheduled, and was given adequate room for above-average slippage, but the number of things that have gone wrong is -far- above average. (We should expect a minority of projects to fit this description, but over a given IT career, everybody should encounter at least one such project.)

Third, the mugging case: The slippages are being introduced by another party that is calibrating what they're asking for to ensure you agree.

The mugging case is actually the most interesting to me, because the company I've worked for has been mugged in this fashion, and has developed anti-mugging policies. Ours are just to refuse projects liable to this kind of mugging - e/g, refuse payment-on-delivery fixed-cost projects. There are also reputation solutions, such as for dollar auctions - develop a reputation for -not- ignoring sunk costs, and you become less desirable a target for such mugging attempts.

[Edited to replace "i/e" with "e/g".]
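
A toy sketch of that mugging dynamic, under assumed numbers of my own rather than anything from the comment: if each new demand is priced just below the value still to be gained from delivery, a client who evaluates every concession purely forward-looking keeps agreeing, and the total spend can end up several times what the project is worth.

```python
def mugging_rounds(project_value, planned_cost, ask_fraction=0.9, max_rounds=20):
    """Each round the other party asks for a bit less than the value the client
    would forfeit by walking away, so a sunk-cost-ignoring client keeps agreeing.
    Returns the client's total spend after all rounds."""
    spent = planned_cost
    for _ in range(max_rounds):
        remaining_value = project_value        # value is only realised on delivery
        extra_ask = ask_fraction * remaining_value
        if extra_ask >= remaining_value:       # would no longer look worthwhile
            break
        # Forward-looking test: pay `extra_ask` now to keep `remaining_value`
        # in prospect, which still looks positive, so the client agrees.
        spent += extra_ask
    return spent

total = mugging_rounds(project_value=100_000, planned_cost=80_000)
print(f"total spend: ${total:,.0f} for a project worth $100,000")
```

This is one way of seeing why a reputation for not ignoring sunk costs, or simply refusing payment-on-delivery fixed-cost terms, makes you a less attractive target.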

Comment author: gjm 21 May 2015 03:53:50PM 1 point [-]

Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn't really matter whether or not you estimate that it's worth continuing.

Slightly less trivial case: If you observe a lot more apparently-unrelated slippages than you expected, then they aren't truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren't very good at it for this project). That would lead you to increase your subsequent time estimates.

Mugging: as with the "slightly less trivial" case but more so, I don't think this is actually an example, because once you start to suspect you're getting mugged your time estimates should increase dramatically.

(There may be constraints that forbid you to consider the possibility that you're getting mugged, or at least to behave as if you are considering it. In that case, you are being forced to choose irrationally, and I don't think this situation is well modelled by treating it as one where you are choosing rationally and your estimates really aren't increasing.)
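
A minimal numerical sketch of that updating, with entirely made-up probabilities: give yourself just two hypotheses about your own calibration and watch the predicted chance that the next milestone slips climb as apparently-unrelated slippages pile up.

```python
# Two assumed hypotheses about your own calibration: H_good ("milestones slip
# 20% of the time, as planned for") and H_bad ("they actually slip 60% of the
# time"), with a prior of 0.9 on H_good. All numbers are illustrative.

def next_slip_probability(slips, on_time, p_good=0.2, p_bad=0.6, prior_good=0.9):
    """Posterior predictive probability that the next milestone slips,
    after observing `slips` slipped milestones and `on_time` punctual ones."""
    like_good = (p_good ** slips) * ((1 - p_good) ** on_time)
    like_bad = (p_bad ** slips) * ((1 - p_bad) ** on_time)
    post_good = (prior_good * like_good /
                 (prior_good * like_good + (1 - prior_good) * like_bad))
    return post_good * p_good + (1 - post_good) * p_bad

# Each additional "unrelated" slippage raises the predicted slip rate,
# which is why later time estimates should drift upward too.
for n in range(6):
    print(n, round(next_slip_probability(slips=n, on_time=0), 3))
```

The same machinery makes the mugging case even starker: once "someone is calibrating their asks against my willingness to agree" gets any posterior weight, expected future slippage jumps well beyond what an honest schedule would imply.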
