Comment author: topynate 12 December 2010 08:20:55PM 11 points [-]

"Do you want to know?" whispered the guide; a whisper nearly as loud as an ordinary voice, but not revealing the slightest hint of gender.

Brennan paused. The answer to the question seemed suspiciously, indeed extraordinarily obvious, even for ritual.

"Yes, provided that * * * ** * * * * * * * * ** * * ** * * * * ** * * * * **," Brennan said finally.

"Who told you to say that?" hissed the guide.

Comment author: FormallyknownasRoko 12 December 2010 08:44:26PM *  6 points [-]

Brennan is a fucking retard. No, you don't want to know. You want to signal affiliation with desirable groups, to send hard-to-fake signals of desirable personality traits such as loyalty, intelligence, power and the presence of informed allies. You want to say everything bad you possibly can about the outgroup and everything good about the ingroup. You want to preach altruism and then make a plausible but unlikely reasoning error which conveniently stops you from having to give away anything costly.

All the other humans do all of these things. This is the true way of our kind. You will be punished if you deviate from the way, or even if you try to overtly mention that this is the way.

Comment author: steven0461 12 December 2010 06:39:25PM *  1 point [-]

Establishing a norm of giving away prizes creates very bad incentives and will tend to decrease the degree to which prizes actually motivate people in the future.

On the other hand, it decreases the degree to which prizes are spent on ice-cream or movie tickets rather than charity. Evaluating a course of action means weighing the upsides against the downsides, not just listing a downside.

Comment author: FormallyknownasRoko 12 December 2010 07:10:11PM 1 point [-]

Yes, but in reality the amounts concerned are good value for what they get.

Comment author: multifoliaterose 12 December 2010 05:51:35PM 6 points [-]

I'm flattered :-). Thanks again for taking the initiative to put the contest together. I agree with the suggestion that prizes for this sort of thing not be given away (and will not give my share away).

I submitted my article to jsalvatier for suggestions and he made some. I'll edit my article in response to some of these shortly.

Does anybody have suggestions for websites/newspapers/magazines where we might submit these articles to publicize the points made therein more broadly?

Comment author: FormallyknownasRoko 12 December 2010 05:59:33PM 0 points [-]

Yes, I will message you with details.

Comment author: cousin_it 12 December 2010 05:49:28PM *  0 points [-]

Uh, spending effort on hurting people is negative-sum and most likely lose-lose, while teaching someone to hunt is positive-sum lose-win. Or maybe you see some deeper mystery here that I'm not seeing?
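The distinction being drawn here (the sign of the total payoff versus the sign of each party's individual payoff) can be made concrete with a small sketch. The payoff numbers below are invented purely for illustration; they are not taken from the discussion.

```python
# Hypothetical (invented) payoffs, in arbitrary utility units.
conflict = (-2, -3)   # both sides spend effort hurting each other
teaching = (-1, +4)   # teacher pays a time cost, learner gains a skill

def classify(payoffs):
    """Label an interaction by the sign of its total and of each party's payoff."""
    total = sum(payoffs)
    if total > 0:
        sum_label = "positive-sum"
    elif total < 0:
        sum_label = "negative-sum"
    else:
        sum_label = "zero-sum"
    sign = lambda x: "win" if x > 0 else "lose"
    return f"{sum_label}, {sign(payoffs[0])}-{sign(payoffs[1])}"

print(classify(conflict))  # negative-sum, lose-lose
print(classify(teaching))  # positive-sum, lose-win
```

The point of the example is just that "lose-win" and "positive-sum" are compatible: the teacher's small loss is outweighed by the learner's larger gain.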

Comment author: FormallyknownasRoko 12 December 2010 05:51:58PM 1 point [-]

The problem with "lose-lose" is that it relies upon there being a "default outcome given no interaction". Vladimir is trying to taboo this concept, at least in general. So I am going to focus on a relevant special case, namely specific interactions available in the ancestral environment.

$100 for the best article on efficient charity - the winner is ...

19 FormallyknownasRoko 12 December 2010 03:02PM

Part of the Efficient Charity Article competition. Several people have written articles on efficient charity. The entries were:

The original criteria for the competition are listed here, but basically the aim is to introduce the idea to a relatively smart newcomer without using jargon.

Various people gave opinions about which articles were best. For me, two articles in particular stood out as being excellent for a newcomer. Those articles were:

Throwawayaccount_1's

and

Multifoliaterose's

articles.

I therefore declare them joint winners, and implore our kind sponsor Jsalvatier to split the prize between them evenly. Throwawayaccount_1 should also unmask his/her identity.

[I would also ask the winners to kindly not offer to donate the money to charity, but to actually take the prize money and spend it on something that they selfishly-want, such as ice-cream or movie tickets or some other luxury item. Establishing a norm of giving away prizes creates very bad incentives and will tend to decrease the degree to which prizes actually motivate people in the future]

Comment author: Alicorn 10 December 2010 04:21:52PM 27 points [-]

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

I award you +1 sanity point.

(I note that the Langford Basilisk in question is the only information that I know and wish I did not know. People acquainted with me and my attitude towards secrecy and not-knowing-things in general may make all appropriate inferences about how unpleasant I must find it to know the information, to state that I would prefer not to.)

Comment author: FormallyknownasRoko 12 December 2010 12:51:34PM *  0 points [-]

the only information that I know and wish I did not know.

I don't think it's quite that extreme. For example, I wish I wasn't as intelligent as I am, wish I was more normal mentally and had more innate ability at socializing and less at math, wish I didn't suffer from smart sincere syndrome. I think these are all in roughly the same league as the banned material.

Comment author: timtyler 12 December 2010 12:18:01PM *  1 point [-]

The usual Singularity Institute line is that it is worth trying too, I believe. As to what p(success) is, the first thing to do would be to make sure that the parties involved mean the same thing by "success". Otherwise, comparing values would be rather pointless.

Comment author: FormallyknownasRoko 12 December 2010 12:19:33PM *  -1 points [-]

This all reminds me of the Dirac delta function. Its width is infinitesimal but its area is 1. Sure, it's worth trying in the "Dirac delta function" sense.
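For readers unfamiliar with the reference: the Dirac delta is the idealized "spike" that is zero everywhere except at a single point, yet integrates to 1. A minimal sketch:

```latex
\delta(x) = 0 \quad \text{for } x \neq 0,
\qquad
\int_{-\infty}^{\infty} \delta(x)\,dx = 1
```

It can be pictured as the limit of ever-narrower, ever-taller unit-area bumps, e.g. $\delta(x) = \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon\sqrt{\pi}} e^{-x^2/\varepsilon^2}$. The analogy being drawn: an event whose probability is vanishingly small but whose payoff is scaled so large that the expected value stays finite.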

Comment author: timtyler 12 December 2010 12:03:40PM *  2 points [-]

What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that - but they are not very likely to hit us soon.

Comment author: FormallyknownasRoko 12 December 2010 12:06:59PM -1 points [-]

Ok, I see. Well, that's just a big factual disagreement then.

Comment author: timtyler 12 December 2010 11:20:48AM *  0 points [-]

I think your pooh-pooh'ing such infantile and amateurish efforts as there are is silly when the reasoning is entirely bogus.

I hope I am not "pooh-pooh'ing". There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine - or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view - and so its views on the topic should be taken with multiple pinches of salt.

Why don't you refocus your criticism on the more legitimate weakness of existential-risk mitigation: that it is highly likely to be irrelevant (either futile or unnecessary), since by its own prediction the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely to either understand the issues or cooperate on them.

I am not sure I understand fully - but I think the short answer is because I don't agree with that. What risks there are, we can collectively do things about. I appreciate that it isn't easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.

Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet are just not good enough. If there's much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.

If our idea of an ethical corporation is one whose motto is "don't be evil", then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.

Comment author: FormallyknownasRoko 12 December 2010 11:56:10AM 0 points [-]

What risks there are, we can collectively do things about.

Not necessarily. The risk might be virtually unstoppable, like a huge oil tanker compared to the force of a single person swimming in the water trying to slow it down.

Comment author: FormallyknownasRoko 12 December 2010 11:51:25AM 0 points [-]

Agreed that there are vested interests potentially biasing reasoning.
