FormallyknownasRoko comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM

Comment author: FormallyknownasRoko 10 December 2010 05:34:09PM *  0 points [-]

Whatever man, go ahead and make your excuses, you have been warned.

Comment author: Vaniver 10 December 2010 05:41:37PM 8 points [-]

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it. In fact, I have only cared enough to do so because it was banned, and I wanted the information on how dangerous it was to judge the wisdom of the censorship.

On a more general note, being terrified of very unlikely terrible events is a known human failure mode. Perhaps it would be more effective at improving human rationality to expose people to ideas like this with the sole purpose of overcoming that sort of terror?

Comment author: Jack 10 December 2010 06:31:19PM *  5 points [-]

I'll just second that I also read it a while back (though after it was censored) and thought that it was quite interesting but wrong on multiple levels. Not 'probably wrong' but wrong like an invalid logic proof is wrong (though of course I am not 100% certain of anything). My main concern about the censorship is that not talking about what was wrong with the argument will allow the proliferation of the reasoning errors that left people thinking the conclusion was plausible. There is a kind of self-fulfilling prophecy involved in not recognizing these errors which is particularly worrying.

Comment author: JGWeissman 11 December 2010 01:58:40AM 7 points [-]

Consider this invalid proof that 1 = 2:

1. Let x = y
2. x^2 = x*y
3. x^2 - y^2 = x*y - y^2
4. (x - y)*(x + y) = y*(x - y)
5. x + y = y
6. y + y = y (substitute using 1)
7. 2y = y
8. 2 = 1

You could refute this by pointing out that step (5) involved division by (x - y) = (y - y) = 0, and you can't divide by 0.
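For anyone who wants the failure point made concrete, here is a minimal sketch in plain Python (the concrete value 3 is arbitrary and purely for illustration) showing that the factor cancelled in step (5) is zero:

    x = y = 3                    # step 1: let x = y (any value works)
    lhs = (x - y) * (x + y)      # step 4, left-hand side:  0 * 6 = 0
    rhs = y * (x - y)            # step 4, right-hand side: 3 * 0 = 0
    print(lhs, rhs, x - y)       # prints 0 0 0 -- step (5) cancels (x - y), i.e. divides by zero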

But imagine if someone claimed that the proof is invalid because "you can't represent numbers with letters like 'x' and 'y'". You would think that they don't understand what is actually wrong with it, or why someone might mistakenly believe it. This is basically my reaction to everyone I have seen oppose the censorship because of some argument they present that the idea is wrong and no one would believe it.

Comment author: Jack 11 December 2010 03:11:11AM *  2 points [-]

I'm actually not sure if I understand your point. Either it is a roundabout way of making it or I'm totally dense and the idea really is dangerous (or some third option).

It's not that the idea is wrong and no one would believe it, it's that the idea is wrong and, when presented with the explanation for why it's wrong, no one should believe it. In addition, it's kind of important that people understand why it's wrong. I'm sympathetic to people with different minds that might have adverse reactions to things I don't, but the solution to that is to warn them off, not censor the topics entirely.

Comment author: JGWeissman 11 December 2010 03:26:39AM 1 point [-]

Yes, the idea really is dangerous.

it's that the idea is wrong and, when presented with the explanation for why it's wrong, no one should believe it.

And for those who understand the idea, but not why it is wrong, nor the explanation of why it is wrong?

the solution to that is to warn them off, not censor the topics entirely.

This is a politically reinforced heuristic that does not work for this problem.

Comment author: XiXiDu 11 December 2010 12:12:35PM *  6 points [-]

This is a politically reinforced heuristic that does not work for this problem.

Transparency is very important regarding people and organisations in powerful and unique positions. The way they act and what they claim in public is only weak evidence in support of their honesty. To claim that they have to censor certain information in the name of the greater public good, and to fortify the decision based on their public reputation, provides no evidence about their true objectives. The only way to solve this issue is by means of transparency.

Surely transparency might have negative consequences, but these mustn't, and don't, outweigh the potential risks of just believing that certain people are telling the truth and do not engage in deception to follow through on their true objectives.

There is also nothing that Yudkowsky has ever achieved that sufficiently proves a superior intellect, such that people would be justified in just believing him about some extraordinary claim.

Comment author: JGWeissman 11 December 2010 05:49:15PM 1 point [-]

When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.

Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.

Comment author: XiXiDu 11 December 2010 07:08:52PM 3 points [-]

When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.

I already had Anna Salamon telling me something about politics. You sound just as incomprehensible to me. Sorry, not meant as an attack.

Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.

I stated several times in the past that I am completely in favor of censorship, I have no idea why you are telling me this.

Comment author: jimrandomh 11 December 2010 09:13:06PM 3 points [-]

Our rules and intuitions about free speech and censorship are based on the types of censorship we usually see in practice. Ordinarily, if someone is trying to censor a piece of information, then that information falls into one of two categories: either it's information that would weaken them politically, by making others less likely to support them and more likely to support their opponents, or it's information that would enable people to do something that they don't want done.

People often try to censor information that makes people less likely to support them, and more likely to support their opponents. For example, many governments try to censor embarrassing facts ("the Purple Party takes bribes and kicks puppies!"), the fact that opposition exists ("the Pink Party will stop the puppy-kicking!") and its strength ("you can join the Pink Party, there are 10^4 of us already!"), and organization of opposition ("the Pink Party rally is tomorrow!"). This is most obvious with political parties, but it happens anywhere people feel like there are "sides" - with religions (censorship of "blasphemy") and with public policies (censoring climate change studies, reports from the Iraq and Afghan wars). Allowing censorship in this category is bad because it enables corruption, and leaves less-worthy groups in charge.

The second common instance of censorship is encouragement and instructions for doing things that certain people don't want done. Examples include cryptography, how to break DRM, pornography, and bomb-making recipes. Banning these is bad if the capability is suppressed for a bad reason (cryptography enables dissent), if it's entangled with other things (general-purpose chemistry applies to explosives), or if it requires infrastructure that can also be used for the first type of censorship (porn filters have been caught blocking politicians' campaign sites).

These two cases cover 99.99% of the things we call "censorship", and within these two categories, censorship is definitely bad, and usually worth opposing. It is normally safe to assume that if something is being censored, it is for one of these two reasons. There are gray areas - slander (when the speaker knows he's lying and has malicious intent), and bomb-making recipes (when they're advertised as such and not general-purpose chemistry), for example - but the law has the exceptions mapped out pretty accurately. (Slander gets you sued, bomb-making recipes get you surveilled.) This makes a solid foundation for the principle that censorship should be opposed.

However, that principle and the analysis supporting it apply only to censorship that falls within these two domains. When things fall outside these categories, we usually don't call them censorship; for example, there is a widespread conspiracy among email and web site administrators to suppress ads for Viagra, but we don't call that censorship, even though it meets every aspect of the definition except motive. If you happen to find a weird instance of censorship which doesn't fall into either category, then you have to start over and derive an answer to whether censorship in that particular case is good or bad, from scratch, without resorting to generalities about censorship-in-general. Some of the arguments may still apply - for example, building a censorship-technology infrastructure is bad even if it's only meant to be used on spam - but not all of them, and not with the same force.

If the usual arguments against censorship don't apply, and we're trying to figure out whether to censor it, the next two things to test are whether it's true, and whether an informed reader would want to see it. If both of these conditions hold, then it should not be censored. However, if either condition fails to hold, then it's okay to censor.

Either the forbidden post is false, in which case it does not deserve protection because it's false, or it's true, in which case it should be censored because no informed person should want to see it. In either case, people spreading it are doing a bad thing.

Comment author: JGWeissman 11 December 2010 07:21:23PM -2 points [-]

I stated several times in the past that I am completely in favor of censorship, I have no idea why you are telling me this.

Your comment that I am replying to is often way more salient than things you have said in the past that I may or may not have observed.

Comment deleted 11 December 2010 06:04:46AM [-]
Comment author: Vaniver 11 December 2010 05:12:29PM 2 points [-]

For those curious: we do agree, but he went to quite a bit more effort in showing that than I did (and is similarly more convincing).

Comment author: Vladimir_Nesov 10 December 2010 05:53:19PM 2 points [-]

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it.

This isn't evidence about that hypothesis; it's expected that almost certainly nothing happens. Yet you write for rhetorical purposes as if it's supposed to be evidence against the hypothesis. This constitutes either lying or confusion (I expect it's unintentional lying, with phrases produced without conscious reflection about their meaning, so a little of both lying and confusion).

Comment author: Jack 10 December 2010 06:05:56PM 5 points [-]

The sentence of Vaniver's you quote seems like a straightforward case of responding to hyperbole with hyperbole in kind.

Comment author: Vladimir_Nesov 10 December 2010 06:11:23PM 1 point [-]

That would not be as bad-intentioned, but it would still be as wrong and deceptive.

Comment author: shokwave 10 December 2010 06:10:49PM 2 points [-]

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it.

The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.

Comment author: Vaniver 10 December 2010 06:32:13PM 8 points [-]

The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.

I get that. But I think it's important to consider both positive and negative evidence- if someone's testimony that they got turned to stone is important, so are the testimonies of people who didn't get turned to stone.

The question to me is whether the basilisk turns people to stone or people turn themselves into stone. I prefer the second because it requires no magic powers on the part of the basilisk. It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.

Indeed, that makes it somewhat useful to know what sort of things shock other people. Calling this idea 'dangerous' instead of 'dangerous to EY' strikes me as mind projection.

Comment author: shokwave 10 December 2010 07:14:37PM 1 point [-]

But I think it's important to consider both positive and negative evidence- if someone's testimony that they got turned to stone is important, so are the testimonies of people who didn't get turned to stone.

I am considering both.

It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.

I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.

Comment author: Vaniver 11 December 2010 12:35:39AM 3 points [-]

I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.

I'm not sure how to evaluate this statement. What do you mean by "keeping people from seeing Goatse"? Banning? Voluntarily choosing not to spread it? A filter like the one proposed in Australia that checks every request to the outside world?

Comment author: shokwave 11 December 2010 07:15:13AM 1 point [-]

Censoring posts that display Goatse on LessWrong.

Generally, censoring posts that display Goatse on non-Goatse websites.

Comment author: Vaniver 11 December 2010 04:28:26PM 7 points [-]

I am much more sympathetic to "keeping goatse off of site X" than "keeping people from seeing goatse," and so that's a reasonable policy. If your site is about posting pictures of cute kittens, then goatse is not a picture of a cute kitten.

However, it seems to me that suspected Langford basilisks are part of the material of LessWrong. Imagine someone posted in the discussion "hey guys, I really want to be an atheist but I can't stop worrying about whether or not the Rapture will happen, and if it does life will suck." It seems to me that we would have a lot to say to them about how they could approach the situation more rationally.

And, if Langford basilisks exist, religion has found them. Someone got a nightmare because of Roko's idea, but people fainted upon hearing Sinners in the Hands of an Angry God. Why are we not looking for the Perseus for this Medusa? If rationality is like an immune system, and we're interested in refining our rationality, we ought to be looking for antibodies.

Comment author: shokwave 11 December 2010 04:56:26PM 1 point [-]

However, it seems to me that suspected Langford basilisks are part of the material of LessWrong.

It seems to me that Eliezer's response as moderator of LessWrong strongly implies that he does not believe this is the case. Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were. Cialdini's Influence and other texts would probably advise you to work within his restrictions and conform to his desires as much as practical - on a site like LessWrong, though, I am not sure how applicable the advice would be, and in any case I don't mean to be prescriptive about it.

Comment author: Vaniver 11 December 2010 05:10:40PM 1 point [-]

Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were.

Right. I see a few paths to do that that may work (and no, holding the future hostage is not one of them).

Comment author: katydee 11 December 2010 08:02:22AM 2 points [-]

Is Goatse supposed to be a big deal? Someone showed it to me and I literally said "who cares?"

Comment author: wedrifid 11 December 2010 08:25:25AM 1 point [-]

Is Goatse supposed to be a big deal? Someone showed it to me and I literally said "who cares?"

I totally agree. There are far more important internet requests that my (Australian) government should be trying to filter. Priorities people!

Comment author: shokwave 11 December 2010 12:00:00PM 0 points [-]
Comment author: katydee 11 December 2010 06:28:07PM 1 point [-]

I feel like reaction videos are biased towards people who have funny or dramatic reactions, but point taken.

Comment author: Vladimir_Nesov 10 December 2010 06:14:13PM 1 point [-]

I don't understand this. (Play on conservation of expected evidence? In what way?)

Comment author: shokwave 10 December 2010 06:30:56PM 4 points [-]

Normal updating.

  • Original prior for basilisk-danger.
  • Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.
  • FormallyknownasRoko stares at basilisk, turns to stone (read: appears to truly wish he had never thought it). Revise pr(basilisk-danger) upwards.
  • Vladimir_Nesov stares at basilisk, turns to stone (read: engages idea, decides it is dangerous). Revise pr(basilisk-danger) upwards.
  • Vaniver stares at basilisk, is unharmed (read: engages idea, decides it is not dangerous). Revise pr(basilisk-danger) downwards.
  • Posterior is higher than original prior.

For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.
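A toy, odds-form version of this kind of update, with a made-up prior and likelihood ratios chosen purely for illustration (none of these numbers come from the discussion):

    odds = 0.1 / 0.9                   # hypothetical prior odds that the idea is dangerous
    for lr in (3.0, 3.0, 3.0, 1/3.0):  # three "turned to stone" reports, one "unharmed" (assumed weights)
        odds *= lr                     # odds-form Bayesian update
    print(odds / (1 + odds))           # posterior is about 0.5, higher than the 0.1 prior

Under these assumed weights the single negative report only partially offsets the three positive ones.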

Comment author: Jack 10 December 2010 07:16:46PM 7 points [-]

Okay, but more than four people have engaged with the idea. Should we take a poll?

The problem of course is that majorities often believe stupid things. That is why a free marketplace of ideas free from censorship is a really good thing! The obvious thing to do is exchange information until agreement but we can't do that, at least not here.

Also, the people who think it should be censored all seem to disagree about how dangerous the idea really is, suggesting it isn't clear how it is dangerous. It also seems plausible that some people have influenced the thinking of other people - for example, it looks like Roko regretted posting after talking to Eliezer. While Roko's regret is evidence that Eliezer is right, it isn't the same as independent/blind confirmation that the idea is dangerous.

Comment author: shokwave 10 December 2010 07:36:23PM -2 points [-]

The problem of course is that majorities often believe stupid things.

When you give all agents equal weight, sure. Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for. Others are more sidelined than supporting a particular side.

The obvious thing to do is exchange information until agreement but we can't do that, at least not here.

Aumann agreement works in the case of hidden information - all you need are posteriors and common knowledge of the event alone.

While Roko's regret is evidence that Eliezer is right, it isn't the same as independent/blind confirmation that the idea is dangerous.

Roko increased his estimation and Eliezer decreased his estimation - and the amounts they did so are balanced according to the strength of their private signals. Looking at two Aumann-agreed conclusions gives you the same evidence as looking at the pre-Aumann (differing) conclusions - the same way that 10, 10 gives you the same average as 5, 15.

Comment author: TheOtherDave 10 December 2010 08:33:58PM 7 points [-]

Others are more sidelined than supporting a particular side.

I would prefer you not treat people avoiding a discussion as evidence that people don't differentially evaluate the assertions made in that discussion.

Doing so creates a perverse incentive whereby chiming in to say "me too!" starts to feel like a valuable service, which would likely chase me off the site altogether. (Similar concerns apply to upvoting comments I agree with but don't want to see more of.)

If you are seriously interested in data about how many people believe or disbelieve certain propositions, there exist techniques for gathering that data that are more reliable than speculating.

If you aren't interested, you could just not bring it up.

Comment author: shokwave 10 December 2010 08:45:29PM 0 points [-]

I would prefer you not treat people avoiding a discussion as evidence that people don't differentially evaluate the assertions made in that discussion.

I treat them as not having given me evidence either way. I honestly don't know how I could treat them otherwise.

Comment author: wedrifid 11 December 2010 08:34:43AM 1 point [-]

I treat them as not having given me evidence either way. I honestly don't know how I could treat them otherwise.

It is extremely hard to give no evidence by making a decision, even a decision to do nothing.

Comment author: TheOtherDave 10 December 2010 09:03:21PM 0 points [-]

The sentence I quoted sounded to me as though you were treating those of us who've remained "sidelined" as evidence of something. But if you were instead just bringing us up as an example of something that provides no evidence of anything, and if that was clear to everyone else, then I'm content.

Comment author: Jack 10 December 2010 08:00:24PM *  3 points [-]

Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for.

I'm for. I believe Tim Tyler is for.

Aumann agreement works in the case of hidden information - all you need are posteriors and common knowledge of the event alone.

Humans have this unfortunate feature of not being logically omniscient. In such cases where people don't see all the logical implications of an argument, we can treat those implications as hidden information. If this wasn't the case then the censorship would be totally unnecessary, as Roko's argument didn't actually include new information. We would have all turned to stone already.

Roko increased his estimation and Eliezer decreased his estimation - and the amounts they did so are balanced according to the strength of their private signals.

There is no way for you to have accurately assessed this. Roko and Eliezer aren't idealized Bayesian agents, it is extremely unlikely they performed a perfect Aumann agreement. If one is more persuasive than the other for reasons other than the evidence they share, then their combined support for the proposition may not be worth the same as two people who independently came to support the proposition. Besides which, according to you, what information did they share exactly?

Comment author: FormallyknownasRoko 10 December 2010 08:05:01PM *  2 points [-]

I had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl.

Also, when I posted the material, I hadn't thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.

Comment author: shokwave 10 December 2010 08:35:09PM *  0 points [-]

David_Gerard, Jack, timtyler, waitingforgodel, and Vaniver do not currently outweigh Eliezer_Yudkowsky, FormallyknownasRoko, Vladimir_Nesov, and Alicorn, as of now, in my mind.

It does not need to be a perfect Aumann agreement; a merely good one will still reduce the chances of overcounting or undercounting either side's evidence well below the acceptable limits.

There is no way for you to have accurately assessed this. Roko and Eliezer aren't idealized Bayesian agents, it is extremely unlikely they performed a perfect Aumann agreement.

They are approximations of Bayesian agents, and it is extremely likely they performed an approximate Aumann agreement.

To settle this particular question, however, I will pay money. I promise to donate 50 dollars to the Singularity Institute for Artificial Intelligence, independent of other plans to donate, if Eliezer confirms that he did revise his estimate down; or if he confirms that he did not revise his estimate down. Payable within two weeks of Eliezer's comment.

Comment author: TheOtherDave 10 December 2010 08:42:45PM *  1 point [-]

I'm curious: if he confirms instead that the change in his estimate, if there was one, was small enough relative to his estimate that he can't reliably detect it or detect its absence, although he infers that he updated using more or less the same reasoning you use above, will you donate or not?

Comment author: Vaniver 10 December 2010 07:00:45PM 4 points [-]

For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.

How many of me would there have to be for that to work?

Also, why is rationalism the risk factor for this basilisk? Maybe the basilisk only turns to stone people with brown eyes (or the appropriate mental analog).

Comment author: shokwave 10 December 2010 07:25:11PM *  0 points [-]

How many of me would there have to be for that to work?

Only one; I meant 'you' in that line to refer to Vlad. It does raise the question "how many people disagree before I side with them instead of Eliezer/Roko/Vlad". And the answer to that is ... complicated. Each person's rationality, modified by how much it was applied in this particular case, is the weight I give to their evidence; then the full calculation of evidence for and against should bring my posterior to within epsilon of, and preferably below, my original prior for me to decide the idea is safe.

Also, why is rationalism the risk factor for this basilisk?

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Comment author: Vaniver 11 December 2010 01:05:23AM 2 points [-]

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Generally, if your immune system is fighting something, you're already sick. Most pathogens are benign or don't have the keys to your locks. This might be a similar situation - the idea is only troubling if your lock fits it - and it seems like there would then be rational methods to erode that fear (just as the immune system mobs an infection).

Comment author: shokwave 11 December 2010 07:26:33AM 0 points [-]

The analogy definitely breaks down, doesn't it? What I had in mind was Eliezer, Roko, and Vlad saying "I got sick from this infection" and you saying "I did not get sick from this infection" - I would look at how strong each person's immune system is.

So if Eliezer, Roko, and Vlad all had weak immune systems and yours was quite robust, I would conclude that the bacterium in question is not particularly virulent. But if three robust immune systems all fell sick, and one robust immune system did not, I would be forced to decide between some hypotheses:

  • the first three are actually weak immune systems
  • the fourth was not properly exposed to the bacterium
  • the fourth has a condition that makes it immune
  • the bacterium is not virulent, the first three got unlucky

On the evidence I have, the middle two seem more likely than the first and last hypotheses.

Comment author: Vaniver 11 December 2010 04:18:31PM 1 point [-]

I agree - my money is on #3 (but I'm not sure whether I would structure it as "fourth is immune" or "first three are vulnerable" - both are correct, but which is the more natural word to use depends on the demographic response).

Comment author: David_Gerard 10 December 2010 07:58:43PM 2 points [-]

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Er, are you describing rationalism (I note you say that and not "rationality") as susceptible to autoimmune disorders? More so than in this post?

Comment deleted 10 December 2010 06:54:55PM [-]
Comment author: shokwave 10 December 2010 07:03:59PM -1 points [-]

Ensuring that is part of being a rationalist; if EY, Roko, and Vlad (apparently Alicorn as well?) were bad at error-checking and Vaniver was good at it, that would be sufficient to say that Vaniver is a better rationalist than E R V (A?) put together.

Comment author: David_Gerard 10 December 2010 07:36:55PM *  7 points [-]

Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.

"For the computer security community, the moral is obvious: if you are designing a system whose functions include providing evidence, it had better be able to withstand hostile review." - Ross Anderson, RISKS Digest vol 18 no 25

Until a clever new thing has had decent outside review, it just doesn't count as knowledge yet.

Comment author: shokwave 10 December 2010 07:46:44PM -2 points [-]

Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.

That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.

Comment author: David_Gerard 10 December 2010 08:02:24PM *  3 points [-]

That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.

That's pretty much a circular argument. How's the third-party verifiable evidence look?

Comment author: Manfred 10 December 2010 07:30:36PM 3 points [-]

I haven't read fluffy (I have named it fluffy), but I'd guess it's an equivalent of a virus in a monoculture: every mode of thought has its blind spots, and so to trick respectable people on LW, you only need an idea that sits in the right blind spots. No need for general properties like "only infectious to stupid people."

Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention, but it's still entirely possible. This also explains the apparent resistance of outsiders, without need for Eliezer to be lying when he says he thinks fluffy was wrong.

Comment author: shokwave 10 December 2010 07:48:13PM 0 points [-]

Could also be that outsiders are resistant because they have blind spots where the idea is infectious, and respectable people on LW are respected because they do not have the blind spots - and so are infected.

I think these two views are actually the same, stated as inverses of each other. The term blind spot is problematic.

Comment author: Manfred 10 December 2010 08:02:34PM 0 points [-]

I think the term blind spot is accurate, unless (and I doubt it) Eliezer was lying when he later said fluffy was wrong. What fits the bill isn't a correct scary idea, but merely a scary idea that fits into what the reader already thinks.

Maybe fluffy is a correct scary idea, and your allocation of blind spots (or discouraging of the use of the term) is correct, but secondhand evidence points towards fluffy being incorrect but scary to some people.

Comment author: Alicorn 10 December 2010 07:34:38PM 0 points [-]

Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention

I'm curious about why you think this.

Comment author: Manfred 10 December 2010 07:50:47PM 2 points [-]

Honestly? Doesn't like to argue about quantum mechanics. That I've seen :D Your posts seem to be about noticing where things fit into narratives, or introspection, or things other than esoteric decision theory speculations. If I had to come up with an idea that would trick Eliezer and Vladimir N into thinking it was dangerous, it would probably be barely plausible decision theory with a dash of many worlds.

Comment author: Jack 10 December 2010 08:34:28PM 0 points [-]

I was also surprised by your reaction to the argument. In my case this was due to the opinions you've expressed on normative ethics.

Comment author: Vladimir_Nesov 10 December 2010 06:58:25PM *  2 points [-]

Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.

This equivocates the intended meaning of turning to stone in the original discussion you replied to. Fail. (But I understand what you meant now.)

Comment author: shokwave 10 December 2010 07:06:26PM 1 point [-]

Sorry, I should not have included censoring specifically. Change the "read:"s to 'engages, reacts negatively', 'engages, does not react negatively' and the argument still functions.

Comment author: Vladimir_Nesov 10 December 2010 07:09:29PM *  2 points [-]

The argument does seem to function, but you shouldn't have used the term in a sense conflicting with the intended one.

Comment author: TheOtherDave 10 December 2010 05:46:01PM 1 point [-]

Perhaps it would be more effective at improving human rationality to expose people to ideas like this with the sole purpose of overcoming that sort of terror?

You would need a mechanism for actually encouraging them to "overcome" the terror, rather than reinforce it. Otherwise you might find that your subjects are less rational after this process than they were before.

Comment author: Vaniver 10 December 2010 06:09:40PM 0 points [-]

Right- and current methodologies when it comes to that sort of therapy are better done in person than over the internet.

Comment author: FormallyknownasRoko 10 December 2010 05:58:02PM 0 points [-]

being terrified of very unlikely terrible events is a known human failure mode

one wonders how something like that might have evolved, doesn't one? What happened to all the humans who came with the mutation that made them want to find out whether the sabre-toothed tiger was friendly?

Comment author: Kingreaper 10 December 2010 06:29:21PM *  7 points [-]

one wonders how something like that might have evolved, doesn't one? What happened to all the humans who came with the mutation that made them want to find out whether the sabre-toothed tiger was friendly?

I don't see how very unlikely events that people knew the probability of would have been part of the evolutionary environment at all.

In fact, I would posit that the bias is most likely due to having a very high floor for probability. In the evolutionary environment things with probability you knew to be <1% would be unlikely to ever be brought to your attention. So not having any good method for intuitively handling probabilities between 1% and zero would be expected.

In fact, I don't think I have an innate handle on probability to any finer grain than ~10% increments. Anything more than that seems to require mathematical thought.

Comment author: FormallyknownasRoko 10 December 2010 06:32:22PM 0 points [-]

Probably less than 1% of cave-men died by actively seeking out the sabre-toothed tiger to see if it was friendly. But I digress.

Comment author: Kingreaper 10 December 2010 06:34:48PM *  8 points [-]

But probably far more than 1% of cave-men who chose to seek out a sabre-tooth tiger to see if they were friendly died due to doing so.

The relevant question on an issue of personal safety isn't "What % of the population die due to trying this?"

The relevant question is: "What % of the people who try this will die?"

In the first case, rollerskating downhill, while on fire, after having taken arsenic would seem safe (as I suspect no-one has ever done precisely that).

Comment author: Vaniver 10 December 2010 06:08:32PM 7 points [-]

one wonders how something like that might have evolved, doesn't one?

No, really, one doesn't wonder. It's pretty obvious. But if we've gotten to the point where "this bias paid off in the evolutionary environment!" is actually used as an argument, then we are off the rails of refining human rationality.

Comment author: FormallyknownasRoko 10 December 2010 06:17:43PM *  2 points [-]

What's wrong with using "this bias paid off in the evolutionary environment!" as an argument? I think people who paid more attention to this might make fewer mistakes, especially in domains where there isn't a systematic, exploitable difference between EEA and now.

The evolutionary environment contained entities capable of dishing out severe punishments, uncertainty, etc.

If anything, I think that the heuristic that an idea "obviously" can't be dangerous is the problem, not the heuristic that one should take care around possibilities of strong penalties.

Comment author: timtyler 10 December 2010 06:25:01PM *  4 points [-]

It is a fine argument for explaining the widespread occurrence of fear. However, today humans are in an environment where their primitive paranoia is frequently triggered by inappropriate stimuli.

Dan Gardner goes into this in some detail in his book: Risk: The Science and Politics of Fear

Video of Dan discussing the topic: Author Daniel Gardner says Americans are the healthiest and safest humans in the world, but are irrationally plagued by fear. He talks with Maggie Rodriguez about his book 'The Science Of Fear.'

Comment author: Desrtopa 10 December 2010 06:34:41PM 0 points [-]

He says "we" are the healthiest and safest humans ever to live, but I'm very skeptical that this refers specifically to Americans rather than present day first world nation citizens in general.

Comment author: FormallyknownasRoko 10 December 2010 06:29:00PM *  0 points [-]

Yes, we are, in fact, safer than in the EEA, in contemporary USA.

But still, there are some real places where danger is real, like the Bronx or Scientology or organized crime or walking across a freeway. So, don't go rubbishing the heuristic of being frightened of potentially real danger.

I think it would only be legitimate to criticize fear itself on "outside view" grounds if we lived in a world with very little actual danger, which is not at all the case.

Comment author: Vaniver 10 December 2010 06:51:53PM 3 points [-]

But still, there are some real places where danger is real, like the Bronx or Scientology or organized crime or walking across a freeway.

So, this may be a good way to approach the issue: loss to individual humans is, roughly speaking, finite. Thus, the correct approach to fear is to gauge risks by their chance of loss, and then discount if it's not fatal.

So, we should be much less worried by a 1e-6 risk than a 1e-4 risk, and a 1e-4 risk than a 1e-2 risk. If you are more scared by a 1e-6 risk than a 1e-2 risk, you're reasoning fallaciously.
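To make that concrete with hypothetical numbers (individual loss normalized to 1, since it is taken to be finite):

    loss = 1.0                    # worst-case individual loss, normalized; finite by assumption
    for p in (1e-2, 1e-4, 1e-6):
        print(p, p * loss)        # expected loss falls in step with the probability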

Now, one might respond- "but wait! This 1e-6 risk is 1e5 times worse than the 1e-2 risk!". But that seems to fall into the traps of visibility bias and privileging the hypothesis. If you're considering a 1e-6 risk, have you worked out not just all the higher order risks, but also all of the lower order risks that might have higher order impact? And so when you have an idea like the one in question, which I would give a risk of 1e-20 for discussion's sake, and you consider it without also bringing into your calculus essentially every other risk possible, you're not doing it rigorously. And, of course, humans can't do that computation.

Now, the kicker here is that we're talking about fear. I might fear the loss of every person I know just as strongly as I fear the loss of every person that exists, but be willing to do more to prevent the loss of everyone that exists (because that loss is actually larger). Fear has psychological ramifications, not decision-theoretic ones. If this idea has 1e-20 chances of coming to pass, you can ignore it on a fear level, and if you aren't, then I'm willing to consider that evidence you need help coping with fear.

Comment author: timtyler 10 December 2010 06:36:36PM *  1 point [-]

I have a healthy respect for the adaptive aspects of fear. However, we do need an explanation for the scale and prevalence of irrational paranoia.

The picture of an ancestral water hole surrounded by predators helps us to understand the origins of the phenomenon. The ancestral environment was a dangerous and nasty place where people led short, brutish lives. There, living in constant fear made sense.

Comment author: Emile 10 December 2010 07:08:18PM *  3 points [-]

Someone's been reading Terry Pratchett.

He always held that panic was the best means of survival. Back in the old days, his theory went, people faced with hungry sabre-toothed tigers could be divided into those who panicked and those who stood there saying, "What a magnificent brute!" or "Here pussy".