Comment author: Curiouskid 28 December 2014 03:27:38AM *  2 points [-]

What is Ember Associates? I did a quick google search, and when I clicked on their site, I got a page that said "Website Expired". What other groups do you have in mind?

Comment author: ColonelMustard 28 December 2014 10:37:17PM 3 points [-]

It is, or was, an organisation to teach thinking skills. Please don't focus on the example; it was the first one that came to mind and I didn't realise the website had expired. The point is that a lot of groups claim to teach thinking skills. Do you consider all such groups to count as EA? If not, what distinguishes CFAR from those that don't?

Comment author: Error 26 December 2014 03:54:02PM 24 points [-]

we realized we had reached a local optimum and become stuck...So then we smashed everything with a hammer...and we think we're now out of the local optimum

Suggestion: A unit on identifying and escaping bad local optima, if you don't have one already. It seems to me that an awful lot of people-years are lost to situations that are sub-par but painful to get out of (e.g. crappy jobs).

Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.

I'd be curious to see a post-mortem on this and other failed efforts. I like that CFAR is willing to acknowledge when it's screwed up. That I don't find this willingness terribly surprising says some nice things about the LW-sphere it pulls from.

Comment author: ColonelMustard 26 December 2014 10:54:14PM 2 points [-]

Strongly agree with the last two sentences here.

Comment author: ColonelMustard 26 December 2014 10:35:41PM 5 points [-]

How does CFAR rank other thinking-skills organisations outside the EA/MIRI groups? For instance, is Ember Associates plausibly one of the most important organisations currently existing?

Comment author: ColonelMustard 10 May 2014 01:17:00PM *  0 points [-]

Any word on this? We submitted applications ~6 weeks ago and it would be useful to find out who will be offered a spot.

Comment author: ColonelMustard 11 April 2014 12:29:19PM 2 points [-]

Thought experiment. Imagine a machine that can create an identical set of atoms to the atoms that comprise a human's body. This machine is used to create a copy of you, and a copy of a second person, whom you have never met and know nothing about.

After the creation of the copy, 'you' will have no interaction with it. In fact, it's going to be placed into a space ship and fired into outer space, as is the copy of Person 2. Unfortunately, one spaceship is going to be very painful to be in. The other is going to be very pleasant. So a copy of you will experience pain or pleasure, and a copy of someone else will experience the other sensation.

To what extent do you care which copy receives which treatment? Zero? As much as you would care if it was you who was to be placed into the spaceship? Or something in between?

Comment author: radical_negative_one 09 December 2013 06:41:20AM 3 points [-]

The survey's exact wording is:

If multiple possible answers, please choose the one you most identify with.

So if, for example, you grew up in France and currently live in the USA, and you think of yourself primarily as being "from France", then France would be the correct answer. If you think of yourself mainly as American, then USA would be the correct answer.

In other words, neither answer would be "wrong".

Comment author: ColonelMustard 09 December 2013 12:24:16PM 0 points [-]

"Where are you from" and "where do you live now" are different questions. The first of these has multiple answers for a lot of people I know; the second probably doesn't. I would suggest both questions be asked next year.

Comment author: ColonelMustard 09 December 2013 05:04:45AM 9 points [-]

Took the survey. I assume from the phrasing that 'country' means where I'm "from" rather than where I currently reside (there is more room for uncertainty about the former than about the latter). Might be interesting to put both questions.

Comment author: benkuhn 02 December 2013 03:28:48AM *  1 point [-]

That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about them, and it's not clear to me that it is possible to do that.

Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.)

The problem is more that this motivation gets co-opted by social-reward-seeking systems and we aren't aware of that when it happens. One way to fix this is to fix incentives, it's true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since the reason EA started was to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that that route is likely to be a better solution, and involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).

I'm also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn't a comparative advantage of mine :P

(Responding to the meta-point separately because yay threading.)

Comment author: ColonelMustard 02 December 2013 04:57:57AM *  9 points [-]

EA doesn't want to take over countries

"Take over countries" is such an ugly phrase. I prefer "country optimisation".

Comment author: ChristianKl 29 October 2013 06:33:26PM 0 points [-]

You may be right. But, it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this),

For what value of "taking seriously" is that statement true?

In response to comment by ChristianKl on MIRI strategy
Comment author: ColonelMustard 30 October 2013 01:26:20AM 0 points [-]

"Hear ridiculous-sounding proposition, mark it as ridiculous, engage explanation, begin to accept arguments, begin to worry about this, agree to look at further reading"

In response to MIRI strategy
Comment author: Vladimir_Nesov 28 October 2013 06:07:59PM 14 points [-]

Facing the Intelligence Explosion is a nontechnical introduction.

Comment author: ColonelMustard 29 October 2013 12:50:24PM 0 points [-]

I agree and I like it. I think it could be further optimised for "convince intelligent non-LWers who have been sent one link from their rationalist friends and will read only that one link", but it could definitely serve as a great starting point.
