Comment author: Calien 30 June 2015 01:45:49PM 0 points [-]

From the title of the post, I thought it would be about how not signing up gives you certainty. I've read someone who doesn't want to sign up say that dying in a normal way would give their family peace of mind.

In terms of whether it's a benefit, if it does motivate you then it's a good Dark Arts way to stop putting off signing up. However, cryonics companies changing their image to take advantage of it strikes me as a really bad idea for the reasons in Ander's post.

Comment author: RobFack 26 June 2015 10:49:31PM *  5 points [-]

I get the sense that many of the people who have signed up have done it less for the increased survival chances or the sense of comfort than as a sort of flag-waving. It is pretty good signalling that you are opposed to death and part of the ingroup that is opposed to death. Those little medallions are badges of a refusal to submit to the awful thing.

Comment author: Calien 30 June 2015 01:27:15PM 0 points [-]

You'd have to want to signal very strongly to overcome the inconvenience of doing the paperwork and forking over cold hard cash. Self-signalling seems to be a plausible motivation, but I'm not sure how much benefit you'd get from being able to tell other people about it. Not to mention that most people face pressure in the opposite direction, since they have to convince their close family members to respect their wishes.

Comment author: Calien 19 June 2015 02:44:35PM 5 points [-]

Today, I was using someone else's computer and typed "lesswrong" into the search/address bar. Apparently the next most popular search is "lesswrong cult". I started shrieking with laughter, getting a concerned reaction from the owner, which doesn't help our image much.

Comment author: Evan_Gaensbauer 27 May 2015 10:00:35AM 3 points [-]

On one hand, I'm not sure that's all of effective altruism. Those concerned about existential risk reduction, such as the MIRI, consider themselves part of effective altruism, and haven't always been about quantifying the value of ensuring a flourishing future civilization of trillions of human-like descendants in terms of quality-adjusted life years (henceforth referred to as QALYs). On the other hand, at the 2014 Effective Altruism Summit (I attended, and it's just a big EA conference), Eliezer Yudkowsky presented the potential value of the MIRI's work, given that their work would prevent a counterfactual extinction of humanity and Earth-originating intelligence, in terms of QALYs. It was some extravagantly big number expressed in scientific notation, calculated as the expected years of happy life for so many trillions of future people. This is just my impression, but I think Mr. Yudkowsky and the MIRI did this to accommodate the rest of the community's knee-jerk demand for specific metrics.

I've also met several folk hailing from Less Wrong and its cluster in person-space with loftier visions of improving the fate of humanity in the nearer-term future than just handing out mosquito nets or deworming children near the equator, who are lukewarm towards or supportive of effective altruism as a community. They seem to be dismissive of naive utilitarianism in effective altruism, too. I myself take issue with too much utilitarianism being injected into effective altruism. I think of effective altruism as a vehicle that took inspiration from utilitarianism, but that would mostly serve as a motivator and coordinating network for pragmatic action among all sorts of people, rather than as so much ethical theory which can and should be picked apart. I admit we in effective altruism don't tackle this issue well. This could be because the opinion that utilitarianism is overriding what could be the dynamic rationality of effective altruism is a minority one. I'm not confident I and like-minded others can change that for the better.

Comment author: Calien 31 May 2015 09:56:12AM 0 points [-]

Evan - I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.

drethelin - What would be an example of a better alternative?

Comment author: skeptical_lurker 26 May 2015 08:48:40AM 8 points [-]

I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don't really think short term EA makes sense. I'm surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.

And if the response is that future civilisation is 'far' in the overcoming bias sense, well, so are starving children in Africa.

Comment author: Calien 31 May 2015 09:48:32AM 0 points [-]

Proponents of both have the same attitude of "this is a thing that people occasionally give lip service to, that we're going to follow to a more logical conclusion and actually act on".

Comment author: Elo 25 May 2015 10:05:42AM 4 points [-]

Upvote for agreement.

I find the extent of my power should be my concern: my local community, those whom I can reach and touch. For the sake of drawing a number out of the air, anyone further than 100km from me does not deserve my attention; indeed, anyone further than 50km probably doesn't either (except that I may one day cross paths with them).

I would rather spend $X towards the local homeless people of my city than the unknown suffering in a distant and far off place. (In fact I would rather not spend $X and would rather donate my time to the community nearby; which is exactly what I do)

While this is my opinion I certainly don't mind the EA stuff I see; I just don't partake in it very much.

Comment author: Calien 31 May 2015 09:39:32AM 0 points [-]

Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with them? I'm assuming that you take it somewhat figuratively, e.g. if you have family in another country you're still invested in what happens to them.

Do you care whether the unknown people are suffering more? If donating $X does more than donating Y hours of your time, does that concern you?

Comment author: Eitan_Zohar 31 May 2015 03:04:22AM 1 point [-]

Is it unethical to have children pre-Singularity, for the risk of them dying?

Comment author: Calien 31 May 2015 09:13:56AM 0 points [-]

If everyone did that, there's a non-negligible chance the human race would die out before bringing about a Singularity. I care about a reasonably nice society with nebulous traits that I value existing, so I consider that a bad outcome. But I do worry about whether it's right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.

(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn't be too fazed by their own future deaths, I'd be fine with bringing them into the world.)

Comment author: adamzerner 12 April 2015 01:13:16AM *  0 points [-]
  1. My impression is that switching it up would be a bit confusing to the reader. In the spirit of making predictions, I'll say that I'm 70% confident that switching it up would cause confusion in readers (not sure how I'd define confusion :/ ). It'd be interesting to see research on this. Maybe how switching it up affects reading comprehension or something.

  2. For better or for worse, convention seems to be to use male pronouns, and I sense that deviation from this draws the reader's attention towards pronoun use and away from the content. You may argue that this is an example of the legacy problem. Again, it'd be interesting to see if there was any similar research into this.

Comment author: Calien 24 April 2015 06:37:02AM 1 point [-]

Data point: Assuming there are any gendered pronouns in the examples, I find it weirder when the same one is used consistently for the entire article.

Comment author: [deleted] 19 January 2015 10:59:45AM *  1 point [-]

So nice that you two are able to enjoy LessWrong together. Given that this is an open thread, is there anything you (or Alex) would like to share about raising rationalists? My daughters are 3yo and 1yo, so I'm only beginning to think about this...

EDIT: I made a top-level post here.

In response to comment by [deleted] on Open thread, Jan. 19 - Jan. 25, 2015
Comment author: Calien 31 January 2015 11:20:36AM 1 point [-]

Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)

Comment author: savedpass 01 January 2015 02:24:08PM 5 points [-]

Ask yourself how much better or worse you are going to feel after completing a task you're procrastinating on. Write the answer down and put it to the test. After doing this several times I'm finding that I wrongly assume I'm going to feel worse after finishing the task, which makes me procrastinate.

Comment author: Calien 02 January 2015 11:54:43AM 2 points [-]

This reminds me of a CBT technique for reducing anxiety: when you're worried about what will happen in some situation, make a prediction, and then test it.
