Comment author: Calien 19 June 2015 02:44:35PM 5 points [-]

Today, I was using someone else's computer and typed "lesswrong" into the search/address bar. Apparently the next most popular search is "lesswrong cult". I started shrieking with laughter, getting a concerned reaction from the owner, which doesn't help our image much.

Comment author: Evan_Gaensbauer 27 May 2015 10:00:35AM 3 points [-]

On one hand, I'm not sure that's all of effective altruism. Those concerned with existential risk reduction, such as MIRI, consider themselves part of effective altruism, and haven't always quantified the value of ensuring a flourishing future civilization of trillions of human-like descendants in terms of quality-adjusted life years (henceforth QALYs). On the other hand, at the 2014 Effective Altruism Summit (I attended; it's just a big EA conference), Eliezer Yudkowsky presented the potential value of MIRI's work, assuming it would prevent a counterfactual extinction of humanity and Earth-originating intelligence, in terms of QALYs. It was some extravagantly big number expressed in scientific notation, calculated as the expected years of happy life for so many trillions of future people. This is just my impression, but I think Mr. Yudkowsky and MIRI did this to accommodate the rest of the community's knee-jerk demand for specific metrics.

I've also met several folk hailing from Less Wrong and its cluster in person-space, with loftier visions of improving the fate of humanity in the near-term future than just handing out mosquito nets or deworming children near the equator, who are lukewarm towards or supportive of effective altruism as a community. They seem dismissive of naive utilitarianism in effective altruism, too. I myself take issue with too much utilitarianism being injected into effective altruism. I think of effective altruism as a vehicle which took inspiration from utilitarianism, but which would mostly serve as a motivator and coordinating network for pragmatic action among all sorts of people, rather than as a theory of ethics which can and should be picked apart. I admit we in effective altruism don't tackle this issue well. This could be because the opinion that utilitarianism is overriding what could be the dynamic rationality of effective altruism is a minority one. I'm not confident that I and like-minded others can change that for the better.

Comment author: Calien 31 May 2015 09:56:12AM 0 points [-]

Evan - I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.

drethelin - What would be an example of a better alternative?

Comment author: skeptical_lurker 26 May 2015 08:48:40AM 8 points [-]

I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don't really think short-term EA makes sense. I'm surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.

And if the response is that future civilisation is 'far' in the overcoming bias sense, well, so are starving children in Africa.

Comment author: Calien 31 May 2015 09:48:32AM 0 points [-]

Proponents of both have the same attitude of "this is a thing that people occasionally give lip service to, that we're going to follow to a more logical conclusion and actually act on".

Comment author: Elo 25 May 2015 10:05:42AM 4 points [-]

Upvote for agreement.

I find the extent of my power should be my concern: my local community, those whom I can reach and touch. For the sake of drawing a number out of the air, anyone further than 100km from me does not deserve my attention; indeed anyone further than 50km probably doesn't either (except that I may one day cross paths with them).

I would rather spend $X towards the local homeless people of my city than the unknown suffering in a distant place. (In fact I would rather not spend $X at all, and would rather donate my time to the community nearby, which is exactly what I do.)

While this is my opinion I certainly don't mind the EA stuff I see; I just don't partake in it very much.

Comment author: Calien 31 May 2015 09:39:32AM 0 points [-]

Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with them? I'm assuming that you take it somewhat figuratively, e.g. if you have family in another country you're still invested in what happens to them.

Do you care whether the unknown people are suffering more? If donating $X does more than donating Y hours of your time, does that concern you?

Comment author: Eitan_Zohar 31 May 2015 03:04:22AM 1 point [-]

Is it unethical to have children pre-Singularity, for the risk of them dying?

Comment author: Calien 31 May 2015 09:13:56AM 0 points [-]

If everyone did that, there's a non-negligible chance the human race would die out before bringing about a Singularity. I care about a reasonably nice society with nebulous traits that I value existing, so I consider that a bad outcome. But I do worry about whether it's right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.

(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn't be too fazed by their own future deaths, I'd be fine with bringing them into the world.)

Comment author: [deleted] 19 January 2015 10:59:45AM *  1 point [-]

So nice that you two are able to enjoy LessWrong together. Given that this is an open thread, is there anything you (or Alex) would like to share about raising rationalists? My daughters are 3yo and 1yo, so I'm only beginning to think about this...

EDIT: I made a top-level post here.

In response to comment by [deleted] on Open thread, Jan. 19 - Jan. 25, 2015
Comment author: Calien 31 January 2015 11:20:36AM 1 point [-]

Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)

Comment author: savedpass 01 January 2015 02:24:08PM 5 points [-]

Ask yourself how much better or worse you are going to feel after completing a task you're procrastinating on. Write the answer down and put it to the test. After doing this several times I'm finding that I wrongly assume I'm going to feel worse after finishing the task, which makes me procrastinate.

Comment author: Calien 02 January 2015 11:54:43AM 2 points [-]

This reminds me of a CBT technique for reducing anxiety: when you're worried about what will happen in some situation, make a prediction, and then test it.

Comment author: Calien 24 October 2014 11:27:40AM 47 points [-]

In-group fuzzes acquired, for science!

Comment author: [deleted] 10 September 2014 08:11:45PM *  0 points [-]

You mean that you don't have an entire Parliament filled with models designed to represent aspects of your own psychology?

You're buggy software running on corrupted hardware. Fork redundant copies and vote.

I hope that it will one day. I would rather not have to rely on tricks like this. I hope I'll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it's in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I'm already doing everything right.

No. Your mind is never magically going to turn nonbuggy. AFAICT, managing the bugginess is one of the most important but most understated tasks we face in life: all our interactions with other people are supposed to be pre-filtered for non-bugginess.

EDIT: This apparently came off as way more harsh than intended. Retracting for tone but leaving in existence.

Comment author: Calien 13 September 2014 05:43:21AM 0 points [-]

I've also used the "think of yourself as multiple agents" trick at least since my first read of HPMOR, and noticed some parallels. In stressful situations it takes the form of rational!Calien telling me what to do, and I identify with her and know she's probably right so I go along with it. Although if I'm under too much pressure I end up paralysed as Brienne describes, and there may be hidden negative consequences as usual.

Comment author: Sniffnoy 21 June 2014 08:57:34PM 4 points [-]

Several of the links in this post point to Google redirects rather than directly to the actual website. Could you fix this please? Thank you!

Comment author: Calien 22 June 2014 04:36:36AM *  1 point [-]

Also two redundant sentences:

I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.

The aim of these techniques is to limit the influence of motivators have when we are deciding which actions to take, even if we allow or welcome then once we’re onto implementing our plans.
