Less Wrong is a community blog devoted to refining the art of human rationality.

How To Lose 100 Karma In 6 Hours -- What Just Happened

-32 Post author: waitingforgodel 10 December 2010 08:27AM
As with all good posts, we begin with a hypothetical:
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
Here's the question: Is this blackmail? Is this terrorism?
Certainly it's a zero-sum interaction (at least in the short term). You either have to endure the inconvenience of putting on a seat belt, or risk a $100 fine.
You may also want to consider that complying with the seat-belt fine may teach lawmakers that you'll follow future laws as well.

If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
Does this count as blackmail? Does this count as terrorism?

What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
Is it blackmail yet? Terrorism? Will complying teach the music companies that throwing their weight around works?

Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
Imagine this: you're the moderator of an online forum and care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risk.
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?


(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)


It is my opinion that none of these are blackmail.
Blackmail is fundamentally a single-shot game.
Laws and rules are about the structure of the world's payoffs, and about changing them to incentivize behavior.
Now it's fair to say that there are just laws, and there are unjust laws... and perhaps we should refuse to follow unjust laws... but to call a law blackmail or terrorism seems incorrect.

Here's what happened:
  • 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
  • Earlier today, Yudkowsky censored a post on LessWrong.
  • 20 minutes later, existential risks increased 0.0001% (to the best of my estimation).

This will continue for the foreseeable future. I'm not happy about it either. Basically I think the sanest way to think about the situation is to assume that Yudkowsky's "delete" link also causes a 0.0001% increase in existential risk, and hope that he uses it appropriately.
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.

FAQ:
Q: Will you reconsider?
A: Sadly no. This situation is symmetric -- just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about "dangerous" ideas), he is not immune to my laws.

Q: How can you be sure that a post was censored rather than deleted by the owner?
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.

Q: How can you be sure that you haven't missed a deleted comment?
A: I use, and am improving, an automated solution.

Q: What is the nature of the existential risk increase?
A: Emails. (Yes, emails). Maybe some phone calls.
There is a simple law that I believe makes intuitive sense to the conservative right. A law that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being a FAI vs a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness about this law.

Q: Is there any way for me to gain access to the censored content?
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!

Q: Will you still post here under waitingforgodel?
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.

Thank you very much for your time and understanding,
-wfg

Edit: This post is describing what happened, not why. For a discussion about why I feel that the precommitment will result in an existential risk savings, please see the "precommitment" thread, where it is talked about extensively.

Comments (214)

Comment author: Snowyowl 10 December 2010 08:20:12AM *  26 points [-]

You're participating in a flamewar here, though it's a credit to you, EY, and LessWrong that nobody has yet posted in all caps. Tempers are running high all around; I recommend that one or all parties involved stop fighting before someone gets hurt. (read: is banned, has their reputation irrevocably damaged, or otherwise has their ability to argue compromised).

0.0001% is a huge amount of risk, enough that if one person in six thousand did what you just did, humanity would be doomed to certain extinction. Even murder doesn't have such a huge effect. I think you overestimate the impact of your actions. Sending a few emails to a blogger has an impact I would estimate at 10^(-15) or less.
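The arithmetic behind these figures can be sketched as a quick back-of-envelope check. The world-population figure is an assumption (roughly right for 2010), and it is where the ~6,790-deaths number used throughout this thread comes from:

```python
# Sanity-checking the figures quoted in this thread.
# Assumption (not from the post): a 2010 world population of ~6.79 billion.
world_population = 6.79e9

# "0.0001%" expressed as a probability:
risk_increase = 0.0001 / 100          # = 1e-6

# Expected deaths among people alive today, treating the risk increase
# as a probability of everyone dying:
expected_deaths = world_population * risk_increase
print(expected_deaths)                # ≈ 6790.0

# Number of such independent acts that would (naively, summing
# probabilities) add up to certain extinction:
acts_to_certain_doom = 1 / risk_increase          # ≈ 1,000,000
print(world_population / acts_to_certain_doom)    # ≈ 6790, i.e. roughly
# "one person in six thousand" of humanity, as estimated above
```

This also shows why later commenters call 6,790 a conservative lower bound: it counts only people currently alive, not future people.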

Certainly making this post has little purpose beyond inciting an argument. All you'll do is polarise LessWrong and turn us against each other.

Comment author: Will_Sawin 10 December 2010 05:20:37PM 6 points [-]

Mildly interesting fact: I would have used capital letters when I said "This doesn't require murdering 6,790 people," if not for this comment.

Is this type of praise, overall, effective in keeping the tone civil? Is it more effective than other methods?

Comment author: TheOtherDave 10 December 2010 07:59:58PM 3 points [-]

Well, a lot depends on what we mean by "effective" and "overall."

For example, it's a common observation by animal trainers that positive reinforcement training -- that is, rewarding the behavior that you want and ignoring the behavior that you don't want -- is a more effective form of behavior modification than many alternatives... in particular, than punishing the behavior you don't want.

That said, punishment is the fastest way of getting that behavior to stop in the environment you punish it in. And punishing it severely and consistently enough can also be very effective in getting it to stop in all environments.

The problem is knock-on effects. For example, if I beat my dog every time she barks around me, she'll quickly stop barking around me. She will also most likely stop choosing to be around me at all. Whether that was an effective form of behavioral modification depends a lot on my goals.

(There are other problems as well... for example, dispensing punishment can be rewarding for some people in some situations, which creates potential for escalations.)

The same principles apply to modifying human behavior, though it's generally counterproductive to call attention to it.

So, yes, praising civil behavior is an effective way of getting more of it, especially if you aren't seen as praising it in order to get more of it. More generally, rewarding civil behavior (e.g., by differentially attending to it, by awarding it karma, and so forth) is a way of getting more of it.

All of that said... there are more effective methods. Modeling the desired behavior can be way more cost-effective than rewarding it, for example, depending on the scale of the group and the number of modelers.

Comment author: Eliezer_Yudkowsky 10 December 2010 08:33:37AM 1 point [-]

I invite anyone who still sides with WaitingForGodel at this point to leave and find a site more suited to their intellects. I am sure it will only frustrate them and us to have them stick around.

Comment author: komponisto 10 December 2010 02:58:17PM 20 points [-]

Reversed stupidity not being intelligence, I'll point out that I "side with" waitingforgodel to the extent of disapproving of the censorship that occurred yesterday (though I haven't complained about the original censorship from July).

Needless to say, of course, I also think this post is silly.

Comment author: Aharon 10 December 2010 01:05:12PM 17 points [-]

I haven't followed the whole thing, because I couldn't. How can I decide whether he is right or not? I don't know what was censored, and why. Reading the thread on academic careers just had some big holes where, presumably, things were deleted, and I couldn't reconstruct why.

Other forums have some kind of policy, where they explicitly say what kind of post will be censored. I'm not against censoring stuff, but knowing what is worthy of being censored and what isn't would be nice.

With the knowledge I currently have about this whole thing, I still feel slightly sympathetic to WaitingForGodel's cause. The "Free Speech is important" heuristic that Vladimir Nesov mentioned in the other thread is pretty useful, in my opinion, and without knowing the reason for posts being deleted, I can't decide for myself whether it made sense or not.

I intend to stick around, anyway, because I don't feel very strongly about this issue, so I won't frustrate anybody, I hope. But an answer would still be nice.

Comment author: rwallace 10 December 2010 04:48:20PM 13 points [-]

I do know what was censored and why, and I think Eliezer was wrong to delete the material in question.

That's a separate issue from whether waitingforgodel's method of expressing his (correct) disagreement with the censorship is sane or reasonable -- of course it isn't.

Comment author: Vaniver 10 December 2010 06:06:40PM 4 points [-]

That's a separate issue from whether waitingforgodel's method of expressing his (correct) disagreement with the censorship is sane or reasonable -- of course it isn't.

Though, I can see a strong argument for "blow up whenever your rights are threatened," especially if you expect that you will only be able to raise awareness, not effect change. It also means those of us who internalized the sequences have our evaporative cooling alarms triggering. Is disagreeing with the existence of Langford basilisks, and caring enough to make a stink about it instead of just scoff, really enough to show someone the door?

Comment author: rwallace 10 December 2010 06:43:17PM 13 points [-]

It's true that the basilisk in question is a wild fantasy even by Singularitarian standards, and the fact that people took it seriously enough to get upset about it could well be considered cause for alarm.

But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!

And the pity is, it's not true he couldn't effect change. The right thing to do in a scenario like this is propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and if those fail then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.

This is a great example of the general rule that if you think you need to do something crazy or evil for the greater good, you are probably wrong -- keep looking for a better solution instead.

Comment author: Vaniver 10 December 2010 07:03:47PM 4 points [-]

But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!

I am not entirely clear on the timeline -- I haven't researched his precommitment and whether or not EY saw it -- but at some point EY commented in his Mod Voice that undeleting comments was subject to banning, and so that is the part where most people seem to agree that wfg went crazy.

So it's not "wow, you're murdering people to make a point?" that started people saying "maybe you ought not be here," but it certainly is what made that idea catch on.

And the pity is, it's not true he couldn't effect change. The right thing to do in a scenario like this is propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and if those fail then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.

I agree with the desirability of this hypothetical. I have no data on the probability of this hypothetical.

Comment author: Eliezer_Yudkowsky 10 December 2010 08:38:00PM 0 points [-]

No, WFG committed to that before I said anything in Mod Voice.

Comment author: Vaniver 11 December 2010 12:54:32AM 3 points [-]

Clarification: I meant that his response to the Mod Voice comment was where he started losing supporters. (For example, here.)

Comment author: WrongBot 10 December 2010 06:22:20PM 6 points [-]

Is disagreeing with the existence of Langford basilisks, and caring enough to make a stink about it instead of just scoff, really enough to show someone the door?

No. Threatening to kill 6,790 people and then claiming to have actually gone through with it, however, is.

Comment author: Unnamed 10 December 2010 08:03:13PM 22 points [-]

All four examples involve threats - one party threatening to punish another unless the other party obeys some rule - but the last threat (threatening to increase existential risk contingent on acts of forum moderation) sticks out as different from the others in several ways:

  1. Proportionality. The punishment in the other examples seems roughly proportional to the offense ($500 may seem a bit high for one album, but is in the ballpark given the low chance of being caught), but over 6,000 deaths (in expectation) plus preventing who-knows-how-many people from ever living is disproportionate to the offense of deleting forum comments.
  2. Narrow targeting. Most of the punishments are narrowly targeted at the offender - the offender is the one who suffers the negative consequences of the punishment, as much as possible (although there are some broader consequences - e.g., the rest of the forum is deprived of a banned poster's comments). But the existential risk threat is not targeted at all - it's aimed at the whole world. Threats to third parties are usually frowned upon - think of hostage taking, or threats to harm someone's family.
  3. Legitimate authority. There are laws & conventions regarding who has authority over what, and these limit what threats are seen as acceptable. Threats can be dangerous and destructive, because of the possibility that they will actually be carried out and because of the risk of escalating threats and counter-threats as people try to influence each other's behavior, and these conventions about domains of authority help limit the damage. It's widely accepted that the government is allowed to regulate driving and intellectual property, and to use fines as punishment. The law grants IP-holders rights to sue for money. Forum moderators are understood to have control over what gets posted on their forum, and who posts. But a single forum user does not have the authority to dictate what gets posted on a forum.
  4. Accountability. Those with legitimate authority are usually accountable to a broader public. If citizens oppose a law they can replace the legislators with ones who will change the law, and since legislators know this and want to keep their jobs they pay attention to the citizens' views when passing laws. Members of an online forum can leave en masse to another forum if they disagree strongly with the moderation policy, and forums take this into account when they set their moderation policy. But one person who threatens to increase existential risk if his preferred forum policy isn't put into place is not accountable to anyone - it doesn't matter how many people disagree with his preferred forum policy, or with his proposed punishment.

I'm not entirely in agreement with the first three threats, but they're at least within the bounds of the kinds of threats that are commonly acceptable. The fourth is not.

Comment author: David_Gerard 10 December 2010 09:19:51PM *  8 points [-]

And 5. Ridiculousness. "He threatened what? ... And they took it seriously?"

(Posted as an example of a way this is notably different to the typical example. Note that this is also my reaction, but I might well be wrong.)

Comment author: Manfred 11 December 2010 04:07:41AM 3 points [-]

My bet would be that he believes that it is proportional. From where I'm standing, this looks like assigning too much impact to LW and to censorship of posts. Note that 2 and 4 are particularly good arguments why something of this nature was dumb regardless of importance.

Comment author: waitingforgodel 11 December 2010 05:15:14AM *  -1 points [-]

Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen -- it is there to balance his motivation out.

Re #2: Letting Christians/Republicans know that they should be interested in passing a law is not the same as hostage taking or harming someone's family. I agree that narrow targeting is preferable.

Re #3 and #4: I have a right to tell Christians/Republicans about a law they're likely to feel should be passed -- it's a right granted to me by the country I live in. I can tell them about that law for whatever reason I want. That's also a right granted to me by the country I live in. By definition this is legitimate authority, because a legitimate authority granted me these rights.

Comment author: wedrifid 11 December 2010 05:16:07AM *  3 points [-]

Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen -- it is there to balance his motivation out.

Citation? That sounds like an insane thing for Eliezer to have said.

Comment author: waitingforgodel 11 December 2010 06:11:56AM *  4 points [-]

After reviewing my copies of the deleted post, I can say that he doesn't say this explicitly. I was remembering another commenter who was trying to work out the implications on x-risk of having viewed the basilisk.

EY does say things that directly imply he thinks the post is a basilisk because of an x-risk increase, but he does not say what he thinks that increase is.

Edit: can't reply, no karma. It means I don't know if it's proportional.

Comment author: wedrifid 11 December 2010 06:20:22AM *  7 points [-]

Nod. That makes more sense.

One thing that Eliezer takes care to avoid doing is giving his actual numbers regarding the existential possibilities. And that is an extremely wise decision. Not everyone has fully internalised the idea behind Shut Up and Do The Impossible! Even if Eliezer believed that all of the work he and the SIAI may do will only improve our existential expectation by the kind of tiny amount you mention it would most likely still be the right choice to go ahead and do exactly what he is trying to do. But not everyone is that good at multiplication.

Comment author: TheOtherDave 11 December 2010 06:19:55AM 3 points [-]

Does that mean you're backing away from your assertion of proportionality?

Or just that you're using a different argument to support it?

Comment author: ronnoch 10 December 2010 10:39:44PM *  17 points [-]

This is an excellent cautionary tale about being careful what you precommit to.

Comment author: Jack 10 December 2010 04:56:50PM 46 points [-]

Hear that sound beneath your feet? It's the high-ground falling out from under you.

I'm offended by the censorship as well, and was voting a number of your comments up previously. But as long as discussions of the censorship itself aren't being censored, peaceful advocacy for a policy change and skirting the censors are the best strategies. And when the discussions of censorship start being censored, the best strategy is for everyone to leave the site. This increasing-risk nonsense is insanely disproportionate. Traditionally, the way to get back at censors is to spread the censored material, not blow up 2 1/2 World Trade Centers.

Comment author: NihilCredo 10 December 2010 07:37:13PM *  12 points [-]

As someone who only now found out about this whole nonsense, and who believes that the maximum existential risk increase you can cause on a whim has a lot more decimal zeros in front of it, I'd like to thank you for providing a quarter-hour of genuine entertainment in the form of quality Internet drama.

With regards to Eliezer deleting what he regards as Langford Basilisks, I don't think he should do it *, but I also think their regular deletion does not cause perceptible harm to the LessWrong site as I care about it. Now, if he were to censor people who oppose his positions on various pet issues, even only if they brought particularly stupid reasons, that would be different (I could see him eventually degenerating into "a post saying that uFAI isn't dangerous increases existential risk"), but as far as I know that's currently not the case and he has stated so outright.

* (I read Roko's banned post, and while I wouldn't confidently state that I suffered zero damage, I am confident I suffered less damage than I did half an hour ago by eating some store-bought salmon without previously doing extensive research on its provenance.)

Comment author: katydee 10 December 2010 11:10:11AM *  34 points [-]

At this point I must conclude either that you have no grasp whatsoever of the math involved here or that you're completely insane. Assuming your claim is correct (which I sincerely doubt), you just killed ~6,790 people (on average) because someone deleted a blog post. If you believe that this is a commensurate and appropriate response, I'm not sure what to say to you.

Honestly, if you believe that attempting to increase the chance that mankind is destroyed is a good response to anything and are willing to brag about it in public, I think something is very clearly wrong.

Comment author: Oscar_Cunningham 10 December 2010 11:32:47AM 7 points [-]

Maybe they are of the belief that censorship on LessWrong is severely detrimental to the singularity. Then such a response might be justified.

Comment author: Lightwave 10 December 2010 02:46:18PM *  8 points [-]

In that case they should present their evidence and/or a strong argument for this, not attempt to blackmail moderators.

Comment author: waitingforgodel 10 December 2010 06:18:12PM 3 points [-]

I actually explicitly said what oscar said in the discussion of the precommitment.

I also posted my reasoning for it.

Those are both from the "precommitted" link in my article.

Comment author: Lightwave 10 December 2010 08:57:49PM *  8 points [-]

Not quite sure how to respond...

Do you really think you're completely out of options and you need to start acting in a way that increases existential risk with the purpose of reducing it, by attempting to blackmail a person who will very likely not respond to blackmail?

Comment author: waitingforgodel 12 December 2010 08:54:53AM 3 points [-]

Yes. If I didn't, none of this would make any sense...

Comment author: Will_Sawin 10 December 2010 12:15:20PM *  5 points [-]

Specifically, the argument against excessive punishment is this:

When dealing with humans, promising excessive punishment will not automatically move you to the "people do what you want" equilibrium. You need to prove you're serious. People will make mistakes. You will make mistakes.

This all requires punishing people.

This doesn't require murdering 6,790 people.

It seems like the sanest response would be to find some way of preventing waitingforgodel from viewing this site.

Comment author: David_Gerard 10 December 2010 12:43:49PM *  13 points [-]

It seems like the sanest response would be to find some way of preventing waitingforgodel from viewing this site.

No, because then you have to think of what a troll would do, i.e. whatever would upset people for great lulz. The correct answer is to ignore future silly persons, and hence the present silly person.

(Note that this does not require waitingforgodel to be trolling - I see no reason not to assume complete sincerity. This is about the example set by the reaction/response.)

Comment author: atucker 11 December 2010 11:30:29PM 2 points [-]

At the risk of sounding silly, I have a really minor question.

The 6,790 people figure comes from multiplying the world's population by 0.0001%, right? I feel like causing an existential catastrophe is worse than that: not only does everyone alive die, but every human who could have lived in this part of the universe in the future is kept out of existence. Thus, intentionally trying to increase existential risk is much more serious.

Is there some particular reason that everyone is only multiplying by the world's population that I'm missing?

Comment author: ata 11 December 2010 11:34:23PM *  2 points [-]

No, you're right — counting currently-living people is just a very conservative lower bound, since we don't have a good way of calculating how many people could exist in the future if existential risks are averted.

Comment author: Vladimir_Nesov 12 December 2010 01:39:32AM *  1 point [-]

If existential risks are averted, you shouldn't count people, you should count goodness (that won't necessarily take the form of people or be commensurately influenced by different people). So the number of people (ems) we can fill the future with is also a conservative lower bound for that goodness, which knowably underestimates it.

Comment author: atucker 11 December 2010 11:46:12PM 0 points [-]

Okay, thanks. Just making sure that I wasn't completely messing up expected utility calculations.

Not that murdering only 6790 people is okay or anything...

Comment author: katydee 12 December 2010 12:51:01AM 0 points [-]

Yes, I wanted to make the most conservative estimate possible. The actual figure is probably far, far higher, but since even the most conservative estimate involves killing thousands of people, it's bad enough as it is!

Comment author: Aleksei_Riikonen 11 December 2010 11:44:57PM 1 point [-]

At this point I must conclude either that you have no grasp whatsoever of the math involved here or that you're completely insane.

The good news is, this mentioned insanity that some LW posters have sunk to makes me think of this very entertaining Cthulhu fan video, which I will now share for the entertainment of all:

http://www.youtube.com/watch?v=XxScTbIUvoA

Comment author: nazgulnarsil 11 December 2010 03:24:28AM 25 points [-]

oh goody, lesswrong finally has its own super villain. is any community really complete without one?

Comment author: wedrifid 11 December 2010 04:47:36AM 10 points [-]

Dammit. Someone beat me to the punch on taking up the 'Clippy' role, and it looks like someone beat me to it on roleplaying the supervillain too. I have to work on my reaction time if I'm going to get in on any of the fun stuff!

Comment author: Will_Newsome 12 December 2010 12:30:42AM 14 points [-]

WHAT?! We need a much better supervillain. Ideally a sympathetic well-intentioned one so we can have some black and grey morality flying back and forth. Someone like.... Yvain.

Comment author: Kevin 14 December 2010 10:09:57AM 7 points [-]

I figure it'll end up being Craig Venter. I mean, the man is bald.

Comment author: Manfred 11 December 2010 03:58:14AM 4 points [-]

This is an unprofitable way to think about the problem. If it becomes a Moral Imperative not to come to any sort of resolution, well then, we'll never see any sort of resolution.

Comment author: nazgulnarsil 11 December 2010 07:31:25AM *  10 points [-]

next time gadget! next time!

I can't really imagine a resolution at this point that doesn't signal vulnerability to trolls in the future.

edit: How about a script that prefaces waitingforgodel's posts with "meanwhile, at the hall of doom..."

Comment author: cousin_it 10 December 2010 09:41:29AM *  23 points [-]

I agree with Eliezer's comment asking you to leave. Even if LW had heavy censorship, I'd still read it (and hopefully participate) because of the great signal to noise ratio, which is what you're hurting with all your posts and comments - they don't add anything to my understanding.

Comment author: Leonhart 10 December 2010 01:56:47PM *  34 points [-]

I'm curious.

I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).

Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.

(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)

Comment author: Alicorn 10 December 2010 04:21:52PM 27 points [-]

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

I award you +1 sanity point.

(I note that the Langford Basilisk in question is the only information that I know and wish I did not know. People acquainted with me and my attitude towards secrecy and not-knowing-things in general may make all appropriate inferences about how unpleasant I must find it to know the information, to state that I would prefer not to.)

Comment author: Normal_Anomaly 12 December 2010 04:40:46PM 3 points [-]

Upvoted both the parent and the grandparent because I was nervous having no clue what was going on, looked at the basilisk, and would rather I hadn't. I'm not clever/imaginative enough to be sure why I shouldn't have done it, but it was still a dumb move. I'm glad the thing was censored, and I applaud Leonhart for being sensible.

Comment author: Broggly 14 December 2010 07:18:21PM 0 points [-]

I'm not clever/imaginative enough that I shouldn't have done it, if people really shouldn't do it. On the other hand, if I somehow found out that people who have done it were taking drastic actions, that would worry me enough to make further investigations; but as far as I can tell I'm probably better off knowing if that's the case (I think, depending on how altruistic those people are, what EY and the SIAI can actually do, how many-worlds/"quantum immortality" works, etc.). Quite honestly it's far less of a worry to me than more mundane friendliness failures.

Comment author: FormallyknownasRoko 12 December 2010 12:51:34PM *  0 points [-]

the only information that I know and wish I did not know.

I don't think it's quite that extreme. For example, I wish I wasn't as intelligent as I am, wish I was more normal mentally and had more innate ability at socializing and less at math, wish I didn't suffer from smart sincere syndrome. I think these are all in roughly the same league as the banned material.

Comment author: Davorak 26 July 2011 08:01:42AM 8 points [-]

Why wish for:

I wish I wasn't as intelligent as I am, wish I was more normal mentally

and had less innate ability for math?

Why not just wish for being better at socializing/communicating?

Comment author: TraderJoe 06 November 2012 11:41:46AM *  0 points [-]

[comment deleted]

Comment author: Strange7 06 February 2012 11:27:02PM 0 points [-]

Are you sure it's the basilisk itself you'd prefer to expunge, rather than some earlier concept without which you would lack the metabolic pathways for self-petrification?

Comment author: Tesseract 11 December 2010 05:03:51AM *  0 points [-]

Though reading this comment and others like it has managed to convince me not to seek out the deleted post, I can't help but think that they would be aided by a reminder of what it means to be Schmuck Bait.

Comment author: Jonii 11 December 2010 09:41:10PM 4 points [-]

I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard. I'm still a bit unsure if I figured out why people think of the idea as dangerous, but to me it seems to be just plain silly.

I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I don't need to care about it, and it seems my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.

Comment author: Jonii 12 December 2010 02:10:08AM *  1 point [-]

Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.

Comment author: Oscar_Cunningham 10 December 2010 02:04:49PM 4 points [-]

I feel the same as you, even though I know what the banned topic was. I haven't thought about it too deeply, because, well, duh.

Comment author: PlaidX 10 December 2010 10:09:09PM *  10 points [-]

I also regret contact with the basilisk, but would not say it's the only information I wish I didn't know, nor am I entirely sure it was a good idea to censor it.

When it was originally posted I did not take it seriously; it only triggered "severe mental trauma", as others are saying, when I later read someone referring to its being censored, felt some curiosity regarding it, and updated on the fact that it was being taken that seriously by others here.

I do not think the idea holds water, and I feel I owe much of my severe mental trauma to an ongoing anxiety and depression stemming from a host of ordinary factors, isolation chief among them. I would STRONGLY advise everyone in this community to take their mental health more seriously, not so much in terms of basilisks as in terms of being human beings.

This community is, as it stands, ill-equipped to charge forth valiantly into the unknown. It is neurotic at best.

I would also like to apologize for whatever extent I was a player in the early formation of the cauldron of ideas which spawned the basilisk and I'm sure will spawn other basilisks in due time. I participated with a fairly callous abandon in the SL4 threads which prefigure these ideas.

Even at the time it was apparent to anyone paying attention that the general gist of these things was walking a worrisome path, and I basically thought "well, I can see my way clear through these brambles, if other people can't, that's their problem."

We have responsibilities, to ourselves as much as to each other, beyond simply being logical. I have lately been reexamining much of my life, and have taken to practicing meditation. I find it to be a significant aid in combating general anxiety.

Also helpful: clonazepam.

Comment author: XiXiDu 11 December 2010 10:29:33AM *  5 points [-]

...when I later read someone referring to it being censored, and some curiosity regarding it, and I updated on the fact that it was being taken that seriously by others here.

If you join a community concerned with decision theory, are you surprised by the fact that they take problems in decision theory seriously?

There is no expected payoff in harming me just because decision theory implies that doing so is rational, because I do not follow such procedures. If something wants to waste its resources on it, I win, because I weaken it. It has to waste resources on me that it could use in the dark ages of the universe to support a protégé. And it never receives any payoff for this, because I do not play along in any branch in which I exist. You see, any decision theory is useless if you deal with agents that don't care about it. Utility is completely subjective too; as Hume said, "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." The whole problem in question is just due to the fact that people think that if decision theory implies a strategy is favorable then you have to follow through on it. Well, no. You can always say, fuck you! The might of God and terrorists is in the mind of their victims.

Comment author: Broggly 14 December 2010 07:32:36PM *  2 points [-]

If you join a community concerned with decision theory, are you surprised by the fact that they take problems in decision theory seriously?

Are they? Are they really? What actual, concrete actions have been taken, or are planned, regarding the basilisk? If people actually make material sacrifices based on having seen the Basilisk then I'm willing to take it seriously, if only for its effects on the human mind. Then again, in the most worrying (or third-most-worrying, I guess) case, they would likely hide said activities to prevent anything from damaging their plans. They could also hide it out of altruism, to keep from disturbing halfway-smart basilisk seers like us, I guess.

Comment author: drethelin 26 January 2012 05:38:05AM 2 points [-]

I'm pretty sure no one firmly believes in the basilisk, simply because everyone who was convinced by it would be spreading it as much as they could.

Comment author: David_Gerard 10 December 2010 02:11:30PM *  14 points [-]

Not really :-) If you keep awareness of the cult attractor and can think of how thinking these things about an idea might trip you up, that's not a flawless defence but will help your defences against the dark arts.

What inspired you to the phrase "invincible mind fortresses"? I like it. Everyone thinks they live in one, that they're far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc to fall into cultishness. They are of course wrong, but try telling them that. (It's like being smart enough to be quite aware of at least some of your own blithering stupidities.)

(I read the forbidden idea. It appears I'm dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don't actually find it interesting enough to have remembered the details.)

Comment author: Leonhart 10 December 2010 05:26:45PM 4 points [-]

Thank you. I've found your comments very useful, not least because when younger I came uncomfortably close to being parted from a reasonable sum of money, by a group who understood the Dark Arts rather well. That was before I read Cialdini, but I'm not sure how well it would have sunk in without the object lesson.

I'm not good at thinking things are silly. That's great for getting suspension of disbelief and fun out of certain things (for example, I can enjoy JRPG plots :) but it's also a spot where one can be hit for massive damage.

As for the happy phrasing, I might have been thinking of this. (Warning: 4chan, albeit one of its nicer suburbs.)

Comment author: [deleted] 10 December 2010 03:19:52PM 4 points [-]

(I read the forbidden idea. It appears I'm dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don't actually find it interesting enough to have remembered the details.)

Same here. I think (though no one has given a definitive answer) that there is concern about the general case of the specific hypothetical incident discussed therein, not the specific incident itself.

Comment author: Broggly 14 December 2010 07:24:01PM 0 points [-]

Hmm. I only read it recently, so maybe I haven't thought through the general case enough, but I think my solution (assuming it's not totally absurd) of treating it as though it is really silly with the caveat that if it becomes non-silly I'm not exactly powerless would work for all such cases.

Comment author: Vladimir_Nesov 10 December 2010 03:25:19PM *  0 points [-]

Everyone thinks they live in [invincible mind fortresses], that they're far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc to fall into cultishness. They are of course wrong, but try telling them that.

Again you tell us. Some people who think that are right. They are NOT "of course" wrong. A random person isn't guaranteed to be vulnerable, and there are people for which you can say that they are most certainly invincible. That any person is "of course vulnerable" is of course wrong as a point of simple fact.

Comment author: TheOtherDave 10 December 2010 04:15:38PM 9 points [-]

I would be interested in hearing about your evidence for the existence of people who are "most certainly invincible" to cultishness, as I'm not sure how I would go about testing that.

Comment author: David_Gerard 10 December 2010 04:10:35PM 3 points [-]

I think a lot more people are vulnerable than consider themselves vulnerable. You can substitute "most" for "all" if you like.

Comment author: Vladimir_Nesov 10 December 2010 04:19:06PM *  1 point [-]

I think a lot more people are vulnerable than consider themselves vulnerable.

I mainly object to "of course", and your argument cited here (irrespective of its correctness) doesn't even try to support it. Please be more careful in what you use, you can't just throw an arbitrarily picked affective soldier, it has to actually argue for the conclusion it's supposed to support (i.e. be (inferential) evidence in its favor to an extent that warrants changing the conclusion).

Comment author: David_Gerard 10 December 2010 04:24:14PM *  1 point [-]

I think a lot more people are vulnerable than consider themselves vulnerable.

I mainly object to "of course", and your argument cited here (irrespective of its correctness) doesn't even try to support it.

I wasn't making an argument (a series of propositions intended to support a conclusion), I was talking about the subject in passing. These are different modes of communication, and I would have thought it reasonably clear which one was being used.

The "of course" is because it's a cognitive error: people are sure it could never happen to them. I observe them becoming, really quickly, really certain of that when they hear of someone else falling for cultishness - that's the "of course". In some cases this will be true, but it's far from universally true. I don't know which particular error or combination of errors it is, but it does seem to be a cognitive error. It is true that I do need to work out which ones it is, so that I can talk about it without those people who reply "aha, but you haven't proven right here that it's every single one, aha" and think they've added something useful to discussion of the topic.

Comment author: Vladimir_Nesov 10 December 2010 04:49:46PM *  1 point [-]

I see. So they can sometimes be accidentally correct in expecting that they are not vulnerable, as in fact they will not be vulnerable, but their level of certainty in that fact will almost certainly ("of course") be off in a systematic predictable way. This interpretation works.

I wasn't making an argument (a series of propositions intended to support a conclusion), I was talking about the subject in passing. These are different modes of communication, and I would have thought it reasonably clear which one was being used.

I think of the "talking about the subject in passing" mode as "making errors, because it's easier that way", which looks to me as a good argument for making errors, but they are still errors.

Comment author: JoshuaZ 10 December 2010 08:49:29PM *  9 points [-]

I saw the original post. I had trouble taking the problem that seriously in the general case. In particular, there seemed to be two obvious problems that arose from the post in question. One was a direct decision theoretic basilisk, the other was a closely associated problem that was empirically causing basilisk-like results to some people who knew about the problem in question. I consider the first problem (the obvious decision-theoretic basilisk) to be extremely unlikely. But since then I've talked to at least one person (not Eliezer) who knows a lot more about the idea who has asserted that there are more subtle aspects of the basilisk which could make it or related basilisks more likely. I don't know if that person has better understanding of decision theory than I do, but he's certainly thought about these issues a lot more than I do so it did move my estimate that there was a real threat here upwards. But even given that, I still consider the problems to be unlikely. I'm much more concerned about the pseudo-basilisk which empirically has struck some people. The pseudo-basilisk itself might justify the censorship. Overall, I'm unconvinced.

Comment author: TheOtherDave 10 December 2010 02:43:30PM 11 points [-]

It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses

In general, I treat attempts to focus my attention on any particular highly-unlikely-but-really-bad scenario as an invitation to inappropriately privilege the hypothesis, probably a motivated one, and I discount accordingly. So on balance, yeah, you can count me as "playing along" the way you mean it here.

I don't think my mind-fortress is invincible, and I am perfectly capable of being hurt by stuff on the Internet. I'm also perfectly capable of being hurt by a moving car, and yet I drive to work every morning.

And yes, if the dangerousness of the Dangerous Idea seems more relevant to you in this case than the politics of the community, I think you're miscalibrated. The odds of a power struggle in a community in which you have transient membership affecting your life negatively are very small, but I'd be astonished if they were anything short of astronomically higher than the odds of the Dangerous Idea itself affecting your life at all.

Comment author: Larks 10 December 2010 04:34:03PM *  9 points [-]

I agree.

Like Alicorn, I find this is the only thing I know that I wish I did not know.

On the plus side, it made me realise my utility function is not monotonic in knowledge.

Comment author: Vaniver 10 December 2010 05:51:17PM *  5 points [-]

I have read the idea. I am unscathed. It is not difficult to find, if you look.

There is some chance my mind fortress is better defended than other people's- I am known to be level-headed in situations with and without the presence of imminent physical harm- but I don't think that applies to this particular circumstance. It felt to me like something you would have to convince yourself to care about- and so for some people that may be easier than it is for others (or automatic).

Comment author: Hul-Gil 26 July 2011 01:19:09AM *  1 point [-]

Hi there, Vaniver. I figured I'd ask you about this, because others seem too disturbed by the idea for me to want to bring it up again. Anyway, I've been reading through old threads, and encountered mention of this "basilisk"... and now I'm extremely curious. What was this idea that made so many people uncomfortable?

Edit Update: On the advice of several people, I am leaving this alone for now. If I do go ahead and read it, I'll edit this post again with my thoughts.

Comment deleted 26 July 2011 02:05:53AM [-]
Comment author: Hul-Gil 26 July 2011 03:03:07AM 1 point [-]

Thanks! I had been wishing for a PM system... and here we had one all along.

Comment author: wedrifid 26 July 2011 05:21:43PM 2 points [-]

Thanks! I had been wishing for a PM system... and here we had one all along.

I know, it took me months to realize that the 'someone replied to me' envelope was actually a re-purposed indicator for a feature I had no idea existed.

Comment author: drethelin 26 January 2012 05:41:32AM 0 points [-]

I actually prefer conversations to be public if possible. It doesn't really harm anyone, and it helps understanding of long-dead threads to see more discussion of them.

Comment author: wedrifid 26 January 2012 06:31:53AM 0 points [-]

I actually prefer conversations to be public if possible. It doesn't really harm anyone, and it helps understanding of long-dead threads to see more discussion of them.

Some conversations are more personal and wouldn't be appropriate if public.

Comment author: drethelin 26 January 2012 06:44:49AM 0 points [-]

we don't disagree

Comment author: Alicorn 26 July 2011 01:37:36AM 5 points [-]

Please abandon this project, for your safety and comfort, that of people you might tell, and that of others who your "benefactor" might be disposed to tell if you succeed in weakening someone's resolve to keep it safely secret.

Comment author: Hul-Gil 26 July 2011 01:41:29AM *  1 point [-]

Since several posters reported that they were not affected by the basilisk, I am thinking my mental safety and comfort might not be affected. (I'm assuming you're referring to the possibility of anxiety, etc? I do suffer from anxiety, but I've had to learn to deal with fairly horrific things, so I am not easily disturbed any more.) I certainly won't tell anyone, even if I had someone to tell, and if someone has resolved to keep it secret I doubt they will tell me in the first place.

I'm not too worried about finding out, though; if no one wants to say, I won't pressure anyone to. That's why I have asked someone who wasn't affected: they will surely be able to judge without fear making them irrational. If they still don't want to say, I'll just live with being curious.

Comment author: wedrifid 26 July 2011 05:26:35PM 2 points [-]

Since several posters reported that they were not affected by the basilisk, I am thinking my mental safety and comfort might not be affected.

I encourage you to accede to the tribal wishes and not tell anyone about the idea, at least within the tribe and the scope of wherever lesswrong can claim any influence whatsoever (as you've already agreed). As you say, you don't sound like the sort of person who could personally be harmed by reading it, so you need not be concerned for your own sake.

Comment author: lessdazed 16 August 2011 05:31:56AM *  0 points [-]

It seems like it would be easy to predict an individual's reaction to the thing by looking for correlated reactions between that and some other things from people who have seen it all, and then seeing how a given innocent reacts to those other things.

I bet some pretty strong patterns would emerge, and we could predict reactions to the thing. I do not think that protecting people from harm now is a true objection, for it could be dealt with by identifying vulnerable people and not making the whole topic such forbidden fruit.

Comment author: prase 11 December 2010 12:33:09AM 0 points [-]

It depends on how strongly you believe in singularity. It is easy to ignore the whole thing as silly (which is essentially what I do), but if you have slightly different priors (or reasoning), it may be harmful.

Comment author: Vaniver 11 December 2010 01:01:08AM 1 point [-]

It depends on how strongly you believe in singularity.

While part of it, that doesn't appear to be all of it. It seems like it only applies for a narrow range of possible singularities. I keep coming back to visibility bias when I think about this.

Comment author: Vladimir_Nesov 10 December 2010 03:21:31PM *  5 points [-]

It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses

I'm certain that the forbidden topic couldn't possibly hurt me (probability of that is zilch). Still, I agree that from what we know, considering it should be discouraged, based on an expected utility argument (it either changes nothing or hurts tremendously with tiny probability, but can't correspondingly help tremendously because human value is a narrow target). Don't confuse these two arguments.

(I think this is my best summary of the shape of the argument so far.)

Comment author: Psy-Kosh 10 December 2010 05:01:44PM *  14 points [-]

(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)

The thing that's been bugging me about this whole issue is: even given that a certain piece of information MAY (with really tiny probability) be highly (for lack of a better word) toxic... should we as humans really be in the habit of "this seems like a dangerous idea, don't think about it"?

I can't help but think this must violate something analogous (though not identical) to an ethical injunction. I.e., the chances of a human encountering an inherently toxic idea are so small vs. the cost of smothering one's own curiosity/allowing censorship - not due to trollishness, or even the revelation of technical details that could be used to do a really dangerous thing, but simply because something is judged dangerous to even think about...

I get why this was perhaps a very particular special circumstance, but I am still of several minds about this one. "Don't think about the deliciously forbidden dangerous idea, just don't", even if perhaps actually indicated in certain very unusual special cases, seems like the sort of thing that one would, as a human, want injunctions against.

Again, I'm of several minds on this however.

(EDIT: Just to clarify, that does not mean that I in any way approve of "existential threat blackmail" or that I'm even of two minds about that. That's just epically stupid)

Comment author: David_Gerard 11 December 2010 03:19:11PM 1 point [-]

(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)

Yeah, that was the reason that convinced me its removal from here was a good enough idea to bother enacting. I wouldn't try removing it from the net, but due warning is appropriate. Such things attract curious monkeys to test the wet paint - but! I still haven't seen 2 Girls 1 Cup and have no plans to! So it's not assured.

Comment author: Strange7 06 February 2012 06:19:34AM 0 points [-]

I've seen it. It's not really as interesting as the hype would suggest.

Comment deleted 11 December 2010 03:03:21AM *  [-]
Comment author: Broggly 14 December 2010 07:42:12PM 2 points [-]

Really? That seems odd. It would be pretty silly for it to affect those who don't know about it. That would just be pointless.

Comment deleted 15 December 2010 05:36:25AM [-]
Comment author: JoshuaZ 15 December 2010 05:58:11AM 2 points [-]

Wow, that's even more impressive than the claim made by some Christian theologians that part of the enjoyment in heaven is getting to watch the damned be tormented. If any AI thinks anything even close to this then we have failed Friendliness even more than if we made a simple object maximizer.

Comment author: Eugine_Nier 15 December 2010 06:28:25AM 4 points [-]

Next thing you're going to tell me that an FAI shouldn't push fat people in front of trolleys.

Note: A sufficiently powerful FAI shouldn't need to, but that is different from saying it wouldn't.

Comment author: Bongo 10 December 2010 05:17:27PM 6 points [-]

I too regret knowing the idea.

Comment author: benelliott 11 December 2010 06:07:30PM 2 points [-]

I've never seen the basilisk (and I have just about resisted the very powerful urge to seek it out), but if one of us came up with a dangerous idea, is it not likely that an AI would do the same? Taking into account the vastly greater capacity of an AI to cause harm if 'infected', might we not gain more from looking at the problem now, in case we can find a resolution (perhaps a better decision theory) and use that to avert a genuinely catastrophic outcome? Even if our hopes of solving the problem are not high, the probabilities and utilities may still advise it.

Of course, since I haven't seen it, I might be totally misunderstanding the situation, or maybe there is an excellent reason why the above is wrong that I can't understand without exposing myself to the basilisk. Even if this isn't the case, it might still be best for a few people who have already seen it to work on the problem, rather than informing someone like me who probably wouldn't be much help anyway.

If it's not too much trouble, could you at least sate my burning curiosity by telling me which of the three options above, if any, is correct?

Comment author: Eliezer_Yudkowsky 11 December 2010 08:24:58PM 6 points [-]

You're totally misunderstanding the situation.

Comment author: benelliott 11 December 2010 09:57:09PM 6 points [-]

Thanks.

Comment author: Eliezer_Yudkowsky 10 December 2010 08:42:07PM -3 points [-]

Aw, look, it's someone sane.

Comment author: cousin_it 13 December 2010 04:31:14PM *  11 points [-]

Hi Eliezer. It took me way too long to figure out the right question to ask about this mess, but here it is: do you regret knowing about the basilisk?

Comment author: Eliezer_Yudkowsky 13 December 2010 10:13:33PM 11 points [-]

I regret that I work in a job which will, at some future point, require me to be one of maybe 2 or 3 people who have to think about this matter in order to confirm whether any damage has probably been done and maximize the chances of repairing the damage after the fact. No one who is not directly working on the exact code of a foomgoing AI has any legitimate reason to think about this, and from my perspective the thoughts involved are not even that interesting or complicated.

The existence of this class of basilisks was obvious to me in 2004-2005 or thereabouts. At the time I did not believe that anyone could possibly be smart enough to see the possibility of such a basilisk and stupid enough to talk about it publicly, or at all for that matter. As a result of this affair I have updated in the direction of "people are genuinely that stupid and that incapable of shutting up".

This is not a difficult research problem on which I require assistance. This is other people being stupid and me getting stuck cleaning up the mess, in what will be a fairly straightforward fashion if it can be done at all.

Comment author: Alicorn 10 December 2010 04:17:20PM 17 points [-]

Is there anything I, as an individual you have chosen to hold hostage to Eliezer's compliance via your attempts at increasing existential risk, can do to placate you? Or are you simply notifying us that resistance is futile, we will be put at risk until you get the moderation policy you want?

Comment author: Emile 10 December 2010 09:40:23AM *  16 points [-]

Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.

Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Have you considered that not everyone feels as strongly as you do about moderators deleting posts in online communities?

To those of us who think that moderators deleting stupid or dangerous content can be an essential ingredient to maintaining the level of quality, your post comes off as silly as threatening to kill a kitten unless LessWrong.com is made W3C compliant by 2011.

(That isn't to say moderation can't have problems - after all, lesswrong's voting system is a mechanism to improve on it. But it's a far cry from "can be improved" to "must be punished".)

Comment author: David_Gerard 10 December 2010 12:33:59PM 10 points [-]

Please, please fix "loose" in the title.

Comment author: Oscar_Cunningham 10 December 2010 02:06:07PM 4 points [-]

This will cause it to end up in the RSS readers twice, thus being twice as annoying as before.

Comment author: DSimon 10 December 2010 04:55:42PM 4 points [-]

This is a bug in the RSS feed populating mechanism. :-\

Comment author: JoshuaZ 10 December 2010 06:48:35PM 14 points [-]

I'm curious, would you object if similar censorship occurred of instructions on how to make a nuclear weapon? What if someone posted code that they thought would likely lead to a very unFriendly AI if it were run? What if there were some close-to-nonsense phrase in English that caused permanent mental damage to people who read it?

I'm incidentally curious if you are familiar with the notion that there's a distinction between censorship by governments as opposed to private organizations. In general, most people who are against censorship agree that private organizations can decide what content they do and do not allow. Thus for example, you probably don't object to Less Wrong moderators removing spam. And we've had a few people posting who simply damaged the signal to noise ratio (like the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z). Is there any difference between those cases and the case you are objecting to? As far as I can tell, the primary difference is that the probability of very bad things happening if the comments are around is much higher in the case you object to. It seems that that's causing some sort of cognitive bias where you regard everything related to those remarks (including censorship of those remarks) as more serious issues than you might otherwise claim.

Incidentally, as a matter of instrumental rationality, using a title that complains about the karma system is likely making people less likely to take your remarks seriously.

Comment author: NihilCredo 10 December 2010 10:24:44PM *  10 points [-]

the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z

what

Can you link me to this? Please? S/N ratio be damned, I need to read it.

Comment author: [deleted] 10 December 2010 11:15:14PM 8 points [-]
Comment author: NihilCredo 11 December 2010 07:00:19AM 7 points [-]

Thank you. It's fantastic.

I went to school at my family's Kingdom of Oyotunji Royal Academy where we learn about the ancient science of astral physics.

This was even more hilarious after I found out that Oyotunji is in North Carolina.

Comment author: Sniffnoy 11 December 2010 07:24:21AM 1 point [-]

Note that his earliest posts (in particular the ones where he mentioned Jay-Z) seem to have been deleted as spam...

Comment author: lsparrish 10 December 2010 03:40:26PM 14 points [-]

Would the comment have been deleted if the author had ROT13'd it?

Would the anti-censors have been incensed by the moderator ROT13-ing the content instead of deleting it?

Comment author: wedrifid 11 December 2010 04:49:48AM 6 points [-]

Would the comment have been deleted if the author had ROT13'd it?

Yes.

Comment author: rwallace 10 December 2010 04:10:31PM 8 points [-]

Upvoted - this is an eminently sensible suggestion on how to deal with comments that some people would rather not view because they find the topic upsetting.

waitingforgodel: see, there usually are at least somewhat reasonable ways to deal with this sort of conflict. If you'd reacted to "I can't think of a reasonable way yet" with "I'll keep thinking about it" instead of "I'm going to go off and do something completely loony like pretending to destroy the world", you might have been the one to make this suggestion, or something even better, and you wouldn't be shooting for a record number of (deserved) downvotes.

Comment author: Lightwave 11 December 2010 01:26:16AM *  4 points [-]

I think I'd prefer that discussions of 'toxic' topics happen off-site. If it's all mixed-in with the rest of the comments/discussions it might be too hard to resist reading them. Temptation and curiosity would be too strong when you face threads/comments with a label "Warning. Dangerous ideas ahead. Read at your own risk."

Comment author: Mass_Driver 10 December 2010 08:12:41AM 14 points [-]

Don't you have better things to do than fight a turf war over a blog? Start your own if you think your rules make more sense -- the code is mostly open source.

Comment author: Psychohistorian 10 December 2010 01:28:14PM 5 points [-]

Laws are not comparable to blackmail because they have process behind them. If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail. It might even qualify as terrorism, since he is trying to constrain my actions by threat of illegitimate force.

A lone individual making a threat against the main moderator of a site if he uses his discretion in a certain way is indeed blackmail/terrorism, particularly when the threat is over a thing substantially outside the purview of the site, and the act threatened is on its own clearly immoral (by contrast, it would be legitimate to threaten to leave the site, or to repost censored material on a separate site). As it stands, it's an attempt to force another's will without any semblance of legitimate authority, which seems to qualify as "clearly wrong."

Comment author: waitingforgodel 10 December 2010 06:02:37PM *  -2 points [-]

If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail.

I think this is closer to: if one lone individual said that every time he saw you not wear a seatbelt (which, for some reason, no law could be passed against), he'd nudge gun control legislation closer to being enacted (assuming he knew you'd hate gun control legislation).

Comment author: Psychohistorian 10 December 2010 09:25:17PM 6 points [-]

No, it's not. You can't just pretend that the threat is trivial when it's not. "You'd hate gun control legislation" is not an appropriate comparison. The utility hit of nudging up the odds of something I'd hate happening is not directly comparable. Given the circumstances and EY's obvious beliefs, the negative utility of a uFAI is vastly worse.

Comparable would be this: every time he sees me not wear a seatbelt, he rolls 8 dice. If they all come up sixes, he'd hunt down, torture, and murder everyone I know and love. The odds are actually slightly lower, and the negative payoff is vastly smaller in this example, so if anything it's an understatement (though failing to wear a seatbelt is a much less bad thing to do than censoring someone, so perhaps it balances). I think this is pretty clearly improper.

Comment author: prase 10 December 2010 10:22:09AM *  6 points [-]

Not commenting on the content (which others have done satisfactorily): the formatting of this post is horrible. There are more blank lines than lines of text. I would downvote for that alone.

Comment author: [deleted] 10 December 2010 03:13:05PM 5 points [-]

You actually lost me before you even got to the main point, since record companies have good reasons to try to protect their intellectual property and governments have good reasons to institute seat belt laws. By the time I read the angry part I was already in disagreement; everything after that only made it worse.

Comment author: Eliezer_Yudkowsky 10 December 2010 08:29:08AM 4 points [-]

Moved post to Discussion section. Note that user's karma has now dropped below what's necessary to submit to the main site.

Comment author: waitingforgodel 10 December 2010 08:33:05AM 1 point [-]

Also note that it wasn't when I submitted to the main site...

Comment author: Snowyowl 10 December 2010 11:27:20AM 4 points [-]

Good thing too. At the time of writing you'd have lost 110 points of karma for this post, instead of only 11.

Comment author: taw 10 December 2010 11:55:22AM 5 points [-]

Outside view question for anyone with relevant expertise:

It seems to me that lesswrong has some features of an early cult (belief that the rest of the world is totally wrong about a wide range of subjects, a messiah figure, a secretive inner circle, a mission to save the world, etc.). Are ridiculous challenges to a group's leadership, met with similarly ridiculous responses from it, a typical feature of a group's gradual transformation into a fully developed cult?

My intuitive guess is yes, but I'm no expert on cults. Does anyone have relevant knowledge?

This is an outside view question about similar groups, not an inside view question about lesswrong itself and why it is/isn't a cult.

In my estimate lesswrong isn't close to the point where such questions would get deleted, but as I said, I'm no expert.

Comment author: David_Gerard 10 December 2010 12:25:56PM *  17 points [-]

I know a thing or two (expert on Scientology, knowledgeable about lesser nasty memetic infections). In my opinion as someone who knows a thing or two about the subject, LW really isn't in danger or the source of danger. It has plenty of weird bits, which set off people's "this person appears to be suffering a damaging memetic infection" alarms ("has Bob joined a cult?"), but it's really not off on crack.

SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.

I was chatting with ciphergoth about this last night, while he worked at chipping away my disinterest in signing up for cryonics. I'm actually excessively cautious about new ideas and extremely conservative about changing my mind. I think I've turned myself into Mad Eye Moody when it comes to infectious memes. (At least in paranoia; I'm not bragging about my defences.) On the other hand, this doesn't feel like it's actually hampered my life. On the other other hand, I would not of course know.

Comment author: ata 10 December 2010 07:46:15PM *  13 points [-]

SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.

I don't have extensive personal experience with SIAI (spent two weekends at their Visiting Fellows house, attended two meetups there, and talked to plenty of SIAI-affiliated people), but the following have been my impressions:

  • People there are generally expected to have read most of the Sequences... which could be a point for cultishness in some sense, but at least they've all read the Death Spirals & Cult Attractor sequence. :P

  • There's a whole lot of disagreement there. They don't consider that a good thing, of course, but any attempts to resolve disagreement are done by debating, looking at evidence, etc., not by adjusting toward any kind of "party line". I don't know of any beliefs that people there are required or expected to profess (other than basic things like taking seriously the ideas of technological singularity, existential risk, FAI, etc., not because it's an official dogma, but just because if someone doesn't take those seriously it just raises the question of why they're interested in SIAI in the first place).

  • On one occasion, there were some notes on a whiteboard comparing and contrasting Singularitarians and Marxists. Similarities included "[expectation/goal of] big future happy event", "Jews", "atheists", "smart folks". Differences included "popularly popular vs. popularly unpopular". (I'm not sure which was supposed to be the more popular one.) And there was a bit noting that both groups are at risk of fully general counterarguments — Marxists dismissed arguments they didn't like by calling their advocates "counterrevolutionary", and LW-type Singularitarians could do the same with categorical dismissals such as "irrational", "hasn't overcome their biases", etc. Note that I haven't actually observed SIAI people doing that, so I just read that as a precaution.

    (And I don't know who wrote that, or what the context was, so take that as you will; but I don't think it's anything that was supposed to be a secret, because (IIRC) it was still up during one of the meetups, and even if I'm mistaken about that, people come and go pretty freely.)

  • People are pretty critical of Eliezer. Of course, most people there have a great deal of respect and admiration for him, and to some degree, the criticism (which is usually on relatively minor things) is probably partly because people there are making a conscious effort to keep in mind that he's not automatically right, and to keep themselves in "evaluate arguments individually" mode rather than "agree with everything" mode. (See also this comment.)

So yeah, my overall impression is that people there are very mindful that they're near the cult attractor, and intentionally and successfully act so as to resist that.

Comment author: David_Gerard 10 December 2010 08:17:06PM 4 points [-]

So yeah, my overall impression is that people there are very mindful that they're near the cult attractor, and intentionally and successfully act so as to resist that.

Sounds like it more so than any other small group I know of!

Comment author: taw 10 December 2010 12:56:08PM 3 points [-]

I would be surprised if less wrong itself ever developed fully into a cult. I'm not so sure about SIAI, but I guess it will probably just collapse at some point. LW doesn't look like a cult now. But what was Scientology like in its earliest stages?

Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?

My intuition was more about Ayn Rand and the objectivists than Scientology, but I don't really know much here. Does anybody know what the early objectivists were like?

I didn't put much thought into this, it's just some impressions.

Comment author: jimrandomh 10 December 2010 01:24:06PM *  9 points [-]

Is there mostly a single way in which groups gradually turn into cults?

Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what's happening and pull them out if necessary. Sci*****ogy was doing that from the very beginning, and Less Wrong has never done anything like that.

Comment author: David_Gerard 10 December 2010 02:15:12PM *  10 points [-]

Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that's just detail, you've nailed the biggie. Good one.

and Less Wrong has never done anything like that.

SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they're working on a project they think is really important. Also, they have information they can't tell everyone (e.g. things they consider decision-theoretic basilisks). So there's a few untoward forces there. As I said, hope they all have their wits about them.

/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. more than me.

Comment author: Anonymous6004 10 December 2010 03:25:21PM 5 points [-]

Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point.

No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely. They also have a ritualized training procedure in which an examiner repeatedly tries to provoke them, and they have to avoid producing a detectable response on an "e-meter" (which measures stress response). After doing this for a while, they learn to remain calm under the most extreme circumstances and not react. And so when Scientology's leaders abuse them in terrible ways and commit horrible crimes, they continue to remain calm and not react.

Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.

Comment author: Vaniver 10 December 2010 05:57:14PM *  4 points [-]

Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.

What defense against EY does EY strengthen? Because I'm somewhat surprised by the amount I hear Aumann's Agreement Theorem bandied around with regards to what is clearly a mistake on EY's part.

Comment author: David_Gerard 10 December 2010 03:58:26PM *  4 points [-]

I was talking generally, not about Scientology in particular.

As I noted, Scientology is such a toweringly bad idea that it makes other bad ideas seem relatively benign. There are lots of cultish groups that are nowhere near as bad as Scientology, but that doesn't make them just fine. Beware of this error. (Useful way to avoid it: don't use Scientology as a comparison in your reasoning.)

Comment author: TheOtherDave 10 December 2010 04:07:42PM 3 points [-]

But that error isn't nearly as bad as accidentally violating containment procedures when handling virulent pathogens, so really, what is there to worry about?

(ducks)

Comment author: David_Gerard 10 December 2010 04:08:57PM 1 point [-]

The forbidden topic, obviously.

Comment author: taw 10 December 2010 04:44:53PM 3 points [-]

No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely.

I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.

If the claim is merely of weakening these ties, then this is definitely happening. I especially mean commitment by signing up for cryonics. It will definitely increase the mental distance between the affected person and their formerly close friends and family, I'd guess about as much as signing up for a weird religion that is mostly perceived as benign would. I doubt anyone has much evidence about this demographic?

Comment author: David_Gerard 10 December 2010 05:06:35PM *  5 points [-]

I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.

I don't think they necessarily make them - all that's needed is for the person to loosen the ties in their head, and strengthen them to the group.

An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea grips them enough that their social calibrator goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven't worked out precisely how to get from the first to the second.)

There are small Christian churches that also go cultish in the same way. Note that in this case the religious ideas are apparently mainstream - but there's enough weird stuff in the Bible to justify all manner of strangeness.

At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I'm not sure how to measure that.)

I need to ask some people about this. Unfortunately, the real experts on cult thinking include several of the people currently going wildly idiotic about cryonics on the Rick Ross boards - an example of overtraining on a bad experience and seeing a pattern where there isn't one.

Comment author: taw 10 December 2010 07:42:57PM 6 points [-]

Regardless of the actual chances of it working, and considering the issue from a purely sociological perspective - signing up for cryonics seems to me to be a lot like "accepting Jesus" / being born again / joining some far-more-religious-than-average subgroup of a mainstream religion.

In both situations there's some underlying reasonably mainstream meme soup that is more or less accepted (Christianity / strict mind-brain correspondence) but which most people who accept it compartmentalize away. Then some groups decide not to compartmentalize it but accept consequences of their beliefs. It really doesn't take much more than that.

Disclaimers:

I'm probably among the top 25 posters by karma, but I tend to feel like an outsider here a lot.

The only "rationalist" idea from LW canon I take more or less seriously is the outside view, and the outside view says taking ideas too seriously tends to have horrible consequences most of the time. So I cannot even take outside view too seriously, by outside view - and indeed I have totally violated outside view's conclusions on several occasions, after careful consideration and fully aware of what I'm doing. Maybe I should write about it someday.

In my estimate all FAI / AI foom / nonstandard decision theories stuff is nothing but severe compartmentalization failure.

In my estimate cryonics will probably be feasible in some remote future, but right now costs of cryonics (very rarely honestly stated by proponents, backed by serious economic simulations instead of wishful thinking) are far too high and chances of it working now are far too slim to bother. I wouldn't even take it for free, as it would interfere with me being an organ donor, and that has non-negligible value for me. And even without that personal cost of added weirdness would probably be too high relative to my estimate of it working.

I can imagine alternative universes where cryonics makes sense, and I don't think people who take cryonics seriously are insane, I just think wishful thinking biases them. In non-zero but as far as I can tell very very tiny portion of possible future universes where cryonics turned out to work, well, enjoy your second life.

By the way, is there any reason for me to write articles expanding my points, or not really?

Comment author: multifoliaterose 10 December 2010 09:03:30PM *  3 points [-]

I'm probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.

My own situation is not so different although

(a) I have lower karma than you and

(b) There are some LW posters with whom I feel strong affinity

By the way, is there any reason for me to write articles expanding my points, or not really?

I myself am curious and would read what you had to say with interest, and this is a weak indication that others would too; but of course it's for you to say whether it would be worth the opportunity cost. The community would probably be more receptive to such pieces if they were cautious and carefully argued than if not; but this takes still more time and effort.

Comment author: [deleted] 10 December 2010 10:21:40PM 1 point [-]

By the way, is there any reason for me to write articles expanding my points, or not really?

I'm just some random lurker, but I'd be very interested in these articles. I share your view on cryonics and would like to read some more clarification on what you mean by "compartmentalization failure" and some examples of a rejection of the outside view.

Comment author: taw 10 December 2010 04:36:57PM *  1 point [-]

Scientology was doing that from the very beginning

Quick reading suggests that Hubbard founded "dianetics" in late 1949/early 1950, and it became "scientology" only in late 1953/early 1954. As far as I can tell it took them many years until they became the Scientology we know. There's some evidence of evaporative cooling at that stage.

And just as David Gerard says, modern Scientology is an extreme case. By "cult" I meant something more like the objectivists.

Comment author: David_Gerard 10 December 2010 04:55:27PM *  6 points [-]

The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself. I started WikiProject Scientology :-) Mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff - before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics' sites which were highly informative but frequently so bitter as to be all but unreadable.

(Despite the key rule of NPOV - write for your opponent - I doubt the CoS is a fan of WP's Scientology articles. Ah well!)

Comment author: David_Gerard 10 December 2010 01:18:25PM *  15 points [-]

I don't have a quick comment-length intro to how cults work. Every Cause Wants To Be A Cult will give you some idea.

Humans have a natural tendency to form close-knit ingroups. This can turn into the cult attractor. If the group starts going a bit weird, evaporative cooling makes it weirder. edit: jimrandomh nailed it: it's isolation from outside social calibration that lets a group go weird.

Predatory infectious memes are mostly not constructed, they evolve. Hence the cult attractor.

Scientology was actually constructed - Hubbard had a keen understanding of human psychology (and no moral compass and no concern as to the difference between truth and falsity, but anyway) and stitched it together entirely from existing components. He started with Dianetics and then he bolted more stuff onto it as he went.

But talking about Scientology is actually not helpful for the question you're asking, because Scientology is the Godwin example of bad infectious memes - it's so bad (one of the most damaging, in terms of how long it takes ex-members to recover - I couldn't quickly find the cite) that it makes lesser nasty cults look really quite benign by comparison. It is literally as if your only example of authoritarianism was Hitler or Pol Pot and casual authoritarianism didn't look that damaging at all compared to that.

Ayn Rand's group turned cultish by evaporative cooling. These days, it's in practice more a case of individual sufferers of memetic infection - someone reads Atlas Shrugged and turns into an annoying crank. It's an example of how impossible it is to talk someone out of a memetic infection that turns them into a crank - they have to get themselves out of it.

Is this helpful?

Comment author: waitingforgodel 11 December 2010 05:57:34AM 3 points [-]

At karma 0 I can't reply to each of you one at a time (rate limited - 10 min per post), so here are my replies in a single large comment:


@JoshuaZ

I would feel differently about nuke designs. As I said in the "why" links, I believe that EY has a bug when it comes to tail risks. This is an attempt to fix that bug.

Basically non-nuke censorship isn't necessary when you use a reddit engine... and Roko's post isn't a nuke.


@rwallace

Yes, though you'd have to say more.


@jaimeastorga2000

Incredible, thanks for the link


@shokwave

Incredible. Where were you two days ago!

After Roko's post on the question of enduring torture to reduce existential risks, I was sure there must be an SIAI/LWer who was willing to kill for the cause, but no one spoke up. Thanks :p


@Jack

In this case my estimate is a 5% chance that EY wants to spread the censored material, and used censoring for publicity. Therefore spreading the censored material is questionable as a tactic.


@rwallace

Great! Get EY to rot13 posts instead of censoring them.


@Psychohistorian

You can't just pretend that the threat is trivial when it's not.

Fair enough. But you can't pretend that it's illegal when it's not (unlike the torture/murder example you gave).


@katydee

Actually, I just sent an email. Christians/Republicans are killing ??? people for the same reason they blocked stem cell research: stupidity. Also, why you're not including EY in that causal chain is beyond me.


@Lightwave

I think his blackmail declarations either don't cover my precommitment, or they also require him to not obey US laws (which are also threats).

Comment author: Manfred 11 December 2010 06:05:08AM *  10 points [-]

In this case my estimate is a 5% chance that EY wants to spread the censored material, and used censoring for publicity. Therefore spreading the censored material is questionable as a tactic.

Be careful to keep your eye on the ball. This isn't some zero-sum contest of wills, where if EY gets what he wants that's bad. The ball is human welfare, or should be.

Comment author: shokwave 11 December 2010 03:07:01PM 0 points [-]

I was only as serious as you were :P

Comment deleted 10 December 2010 09:04:32PM [-]
Comment deleted 11 December 2010 05:11:17AM *  [-]
Comment author: TheOtherDave 11 December 2010 05:33:20AM 5 points [-]

Of course actual religious believers who accept that doctrine don't usually bite the bullet

I know a number of believers in various "homegrown" faiths who conclude essentially this, actually. That is, they assert that being aware of the spiritual organizing principle of existence without acting on it leaves one worse off than being ignorant of it, and they assert that consequently they refuse to share their knowledge of that principle.

Comment author: waitingforgodel 10 December 2010 06:36:50PM 0 points [-]

The common misunderstanding in these comments comes from not clicking on the "precommitment" link and reading the reasons why the precommitment reduced existential risk.

If I ever do this again, I'll make the reasoning more explicit. In the meantime I'm not sure what to do except add this comment, and the edit at the bottom of the article, for new readers.

Comment author: rwallace 10 December 2010 09:22:42PM 10 points [-]

If I observe that I did read the thread to which you refer, and I still think your current course of action is stupid and crazy (and that's coming from someone who agrees with you about the censorship in question being wrong!) will that change your opinion even slightly?

Comment author: HughRistik 11 December 2010 12:01:57AM 6 points [-]

I did read the original precommitment discussions. I thought your original threat was non-serious, and presented as an interesting thought experiment. I was with you on the subject of anti-censorship. When I discovered that your precommitment was serious, you lost the moral high-ground in my eyes, and entered territory where I will not follow.

Comment author: Will_Sawin 12 December 2010 03:24:16AM -1 points [-]

Instead of trying to convince right wingers to ban FAI, how about trying to convince Peter Thiel to defund SIAI in proportion to the number of comments in a certain period of time.

Advantages:

  1. Better [incentive to Eliezer]/[increase in existential risk as estimated by waitingforgodel] ratio

  2. Reversible if an equitable agreement is reached.

  3. Smaller risk increase, as the problem warrants.

Comment author: waitingforgodel 12 December 2010 08:40:39AM *  2 points [-]

It's interesting, but I don't see any similarly high-effectiveness ways to influence Peter Thiel... Republicans already want to do high x-risk things, Thiel doesn't already want to decrease funding.