Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: somervta 17 August 2014 10:25:28AM 12 points

I don't suppose you have a source for the quote? (At this point, my default is to disbelieve any attribution to Einstein of a quote I don't recognize.)

Comment author: jazmt 17 August 2014 07:47:16PM 5 points

According to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html), it is part of 'Aphorisms for Leo Baeck' (which I think is printed in 'Ideas and Opinions', but I don't have access to the book right now to check).

Comment author: Eliezer_Yudkowsky 02 June 2014 05:22:05PM 3 points

Yeah, that never happened.

Comment author: jazmt 03 June 2014 11:57:30AM 1 point

Probably not, but why are you certain?

Comment author: [deleted] 25 February 2014 08:12:22PM -1 points

They probably wouldn't say "I'll put it off for thirty years", but rather repeatedly say "I'll put it off till tomorrow".

And then they get a reminder that they only have a year left before they go back to work. And then they get a reminder that they only have six months left. Then three months. At that point, the time crunch is palpable. They have a concrete deadline, not a nebulous one.

And if they miss it? Well, they've learned for next time. That's an option unavailable to a dead person.

In response to comment by [deleted] on A defense of Senexism (Deathism)
Comment author: jazmt 26 February 2014 02:39:07AM *  0 points

That doesn't strike me as how psychology works, since in the real world people often repeatedly make the same mistakes. It also seems that even if your proposal would work, it doesn't address the original issue, since you are assuming that the person has a clear idea of his goals and only needs time to pursue them, whereas I think the bigger issue which aging encourages is reorienting one's values.

I appreciate your taking the time to address my question, but it seems to me that this conversation isn't really making progress, so I will probably not respond to future comments on this thread. Thank you.

Comment author: [deleted] 21 February 2014 02:17:26AM -1 points

We are all arrogant to some degree or another; knowledge of our mortality helps keep it in check.

Do we have any evidence regarding this? I know there are parables serving to emphasize humility due to mortality, but I have no information on their effectiveness. It seems like it needs some immediacy to be effective, which means it only takes place when you start feeling old -- I'm guessing this will be forties to sixties for most Westerners.

Taking 10 years off after 30 years doesn't seem to solve the psychological issue. In today's world, as we get older we start noticing the weakness of our bodies, which pushes us to act, since "if not now, when?"

A well-funded, extended retirement is a perfect opportunity to do all the things you haven't had time to do while working. The threat of having to work for another few decades should be a reasonable proxy for the fear of death.

Specifically, people don't tell themselves they'll put things off for thirty years until the next retirement phase; they tell themselves they'll do it eventually. Thirty years is subjectively a very long time, and people won't be inclined to happily delay for that long.

Also, arguments about how we would be superbeings who are totally rational

are not included in anything I said here. My suggestion would require large societal changes and provides no mechanism to enact them, but it accounts for normal people, not rational agents.

In response to comment by [deleted] on A defense of Senexism (Deathism)
Comment author: jazmt 25 February 2014 04:42:11PM 0 points

I would have to look around to see if there is non-anecdotal evidence, but anecdotally ~40 is when I have heard people start mentioning it.

I don't think your proposal would work, since I don't think the time factor is the biggest issue. How often do people make big plans for summer vacation and not actually do them? They probably wouldn't say "I'll put it off for thirty years", but rather repeatedly say "I'll put it off till tomorrow".

Comment author: RowanE 17 February 2014 05:44:46PM 0 points

That sounds more like something that would motivate the side that's not already long-lived; they'd already have plenty of motivation. I'm saying the country that has access to the tech but wants to restrict it isn't going to have the will to fight.

Well, "not necessarily be beneficial" strictly means "is not certain to be beneficial", but connotationally means "is likely enough to prove not-beneficial that we shouldn't do it", so I ADBOC - it's conceivable that it could go wrong, but I think it's likely enough to have a beneficial enough outcome that we should do it anyway.

Comment author: jazmt 18 February 2014 02:02:44AM 1 point

Yes, and that was the meaning of my initial comment. That is a concern in today's world, where we do have limited resources, so not everyone would be able to make use of such a technology. The country that has it (or the subset of people who have it within one country) will be motivated to defend the resources necessary to use it. This isn't an argument against such research in a world without any scarcity, but that isn't our world.

I am still not sure whether it is likely to be more beneficial or not for heavily emotional and biased humans like us.

In response to comment by jazmt on White Lies
Comment author: Alicorn 17 February 2014 06:53:30AM 1 point

To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from - wanting things, appreciating the nature of cause and effect, etc.

In response to comment by Alicorn on White Lies
Comment author: jazmt 18 February 2014 01:56:00AM 1 point

Thank you for all of your clarifications; I think I now understand how you are viewing morality.

Comment author: RowanE 17 February 2014 02:39:30AM 0 points

I don't think anyone's willing to fight a war just to prevent another country's life expectancy from increasing.

Comment author: jazmt 17 February 2014 03:47:08AM 0 points

Maybe, but on the other hand there is inequity aversion: http://en.wikipedia.org/wiki/Inequity_aversion

Also, there is the possibility of fighting over the resources needed to use that technology (either within society or without). Do you disagree with the general idea that, without greater rationality, extreme longevity will not necessarily be beneficial, or do you only disagree with the example?

In response to comment by jazmt on White Lies
Comment author: Alicorn 17 February 2014 01:49:32AM 1 point

A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.

I don't think I understand the last paragraph. Can you rephrase?

In response to comment by Alicorn on White Lies
Comment author: jazmt 17 February 2014 03:36:27AM -1 points

Why don't you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn't deontological, where does it come from?

Comment author: SaidAchmiz 17 February 2014 02:41:01AM 0 points

You keep using the words "we" and "our", but "we" don't have lifespans; individual humans do. So the relevant questions, it seems to me, are: is removing the current cap on lifespan in the interest of any given individual? And: is removing the current cap on lifespan, for all individuals who wish it removed, in the interests of other individuals in their (family, country, society, culture, world)?

Those are different questions. Likewise, the choice to make immortality available to anyone who wants it, and the choice to actually continue living, are two different choices. (Actually, the latter is an infinite sequence[1] of choices.)

Similarly, I don't see how that argument indicates that we should develop longevity technologies before we solve the problem of human irrationality and evil.

No one is necessarily claiming that we should. Like I say in my top-level comment, this is a perfectly valid question, one which we would do well to consider in the process of solving the engineering challenge that is human lifespan.

[1] Maybe. Someone with a better-exercised grasp of calculus correct me if I'm wrong — if I'm potentially making the choice continuously at all times, can it still be represented as an infinite sequence?

Comment author: jazmt 17 February 2014 03:31:48AM 0 points

"You keep using the words 'we' and 'our', but 'we' don't have lifespans; individual humans do." Of course, but "we" is common shorthand for decisions which are made at the level of society, even though that is a collection of individual decisions (e.g. should we build a bridge, or should we legalize marijuana). Do you think that using standard English expressions is problematic? (I agree that both the question of benefit for the self and benefit for others is important, and I think the issue of cognitive biases is relevant to both of them.)

I just looked at your comment, and I agree with that argument, but that hasn't been my impression of the view of many on this site (and clearly isn't the view of researchers like de Grey). However, I am relatively new here and may be mistaken about that. Thank you for clarifying.

Comment author: SaidAchmiz 16 February 2014 07:44:54PM 1 point

Some commentary on the matter is here: How to Seem (and Be) Deep.

Comment author: jazmt 17 February 2014 01:45:43AM *  0 points

Thank you, but that post doesn't seem to answer my question, since it doesn't address how death interacts with our cognitive biases. I agree that if we were perfectly rational beings, immortality would be great; however, I don't see how that implies that, given our current state, the choice to live forever (or a really long time) would be in our best interest.

Similarly, I don't see how that argument indicates that we should develop longevity technologies before we solve the problem of human irrationality and evil. For example, would having a technology to live 150 years cause more benefit, or would it cause wars over who gets to use the technology?
