shokwave comments on Best career models for doing research? - Less Wrong

Post author: Kaj_Sotala | 07 December 2010 04:25PM | 27 points

Comment author: shokwave 10 December 2010 06:10:49PM 2 points [-]

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it.

The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.

Comment author: Vaniver 10 December 2010 06:32:13PM 8 points [-]

The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.

I get that. But I think it's important to consider both positive and negative evidence- if someone's testimony that they got turned to stone is important, so are the testimonies of people who didn't get turned to stone.

The question to me is whether the basilisk turns people to stone or people turn themselves into stone. I prefer the second because it requires no magic powers on the part of the basilisk. It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.

Indeed, that makes it somewhat useful to know what sort of things shock other people. Calling this idea 'dangerous' instead of 'dangerous to EY' strikes me as mind projection.

Comment author: shokwave 10 December 2010 07:14:37PM 1 point [-]

But I think it's important to consider both positive and negative evidence- if someone's testimony that they got turned to stone is important, so are the testimonies of people who didn't get turned to stone.

I am considering both.

It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.

I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.

Comment author: Vaniver 11 December 2010 12:35:39AM 3 points [-]

I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.

I'm not sure how to evaluate this statement. What do you mean by "keeping people from seeing Goatse"? Banning? Voluntarily choosing not to spread it? A filter like the one proposed in Australia that checks every request to the outside world?

Comment author: shokwave 11 December 2010 07:15:13AM 1 point [-]

Censoring posts that display Goatse on LessWrong.

Generally, censoring posts that display Goatse on non-Goatse websites.

Comment author: Vaniver 11 December 2010 04:28:26PM 7 points [-]

I am much more sympathetic to "keeping goatse off of site X" than "keeping people from seeing goatse," and so that's a reasonable policy. If your site is about posting pictures of cute kittens, then goatse is not a picture of a cute kitten.

However, it seems to me that suspected Langford basilisks are part of the material of LessWrong. Imagine someone posted in the discussion "hey guys, I really want to be an atheist but I can't stop worrying about whether or not the Rapture will happen, and if it does life will suck." It seems to me that we would have a lot to say to them about how they could approach the situation more rationally.

And, if Langford basilisks exist, religion has found them. Someone had a nightmare because of Roko's idea, but people fainted upon hearing Sinners in the Hands of an Angry God. Why are we not looking for the Perseus for this Medusa? If rationality is like an immune system, and we're interested in refining our rationality, we ought to be looking for antibodies.

Comment author: shokwave 11 December 2010 04:56:26PM 1 point [-]

However, it seems to me that suspected Langford basilisks are part of the material of LessWrong.

It seems to me that Eliezer's response as moderator of LessWrong strongly implies that he does not believe this is the case. Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were. Cialdini's Influence and other texts would probably advise you to work within his restrictions and conform to his desires as much as practical - on a site like LessWrong, though, I am not sure how applicable the advice would be, and in any case I don't mean to be prescriptive about it.

Comment author: Vaniver 11 December 2010 05:10:40PM 1 point [-]

Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were.

Right. I see a few paths to do that that may work (and no, holding the future hostage is not one of them).

Comment author: katydee 11 December 2010 08:02:22AM 2 points [-]

Is Goatse supposed to be a big deal? Someone showed it to me and I literally said "who cares?"

Comment author: wedrifid 11 December 2010 08:25:25AM 1 point [-]

Is Goatse supposed to be a big deal? Someone showed it to me and I literally said "who cares?"

I totally agree. There are far more important internet requests that my (Australian) government should be trying to filter. Priorities people!

Comment author: shokwave 11 December 2010 12:00:00PM 0 points [-]

Comment author: katydee 11 December 2010 06:28:07PM 1 point [-]

I feel like reaction videos are biased towards people who have funny or dramatic reactions, but point taken.

Comment author: Vladimir_Nesov 10 December 2010 06:14:13PM 1 point [-]

I don't understand this. (Play on conservation of expected evidence? In what way?)

Comment author: shokwave 10 December 2010 06:30:56PM 4 points [-]

Normal updating.

  • Original prior for basilisk-danger.
  • Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.
  • FormallyknownasRoko stares at basilisk, turns to stone (read: appears to truly wish he had never thought it). Revise pr(basilisk-danger) upwards.
  • Vladimir_Nesov stares at basilisk, turns to stone (read: engages idea, decides it is dangerous). Revise pr(basilisk-danger) upwards.
  • Vaniver stares at basilisk, is unharmed (read: engages idea, decides it is not dangerous). Revise pr(basilisk-danger) downwards.
  • Posterior is higher than original prior.

For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.
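
A minimal sketch of this update in log-odds form (the prior and the per-person weights below are illustrative assumptions, not values anyone in the thread gave):

```python
import math

def update(prior, reactions):
    # Each reaction is (weight, turned_to_stone): weight is how much that person's
    # judgement is trusted as evidence; True means they engaged and reacted negatively,
    # False means they engaged and came away unharmed.
    log_odds = math.log(prior / (1 - prior))
    for weight, turned_to_stone in reactions:
        log_odds += weight if turned_to_stone else -weight
    return 1 / (1 + math.exp(-log_odds))

prior = 0.05  # assumed starting pr(basilisk-danger)
reactions = [
    (1.0, True),   # Eliezer_Yudkowsky: engages, reacts negatively
    (1.0, True),   # FormallyknownasRoko: engages, regrets having thought it
    (1.0, True),   # Vladimir_Nesov: engages, judges it dangerous
    (1.0, False),  # Vaniver: engages, judges it harmless
]
posterior = update(prior, reactions)
print(posterior > prior)  # True unless the lone dissenter's weight exceeds the other three combined
```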

Comment author: Jack 10 December 2010 07:16:46PM 7 points [-]

Okay, but more than four people have engaged with the idea. Should we take a poll?

The problem of course is that majorities often believe stupid things. That is why a free marketplace of ideas free from censorship is a really good thing! The obvious thing to do is exchange information until agreement but we can't do that, at least not here.

Also, the people who think it should be censored all seem to disagree about how dangerous the idea really is, suggesting it isn't clear how it is dangerous. It also seems plausible that some people have influenced the thinking of other people- for example it looks like Roko regretted posting after talking to Eliezer. While Roko's regret is evidence that Eliezer is right, it isn't the same as independent/blind confirmation that the idea is dangerous.

Comment author: shokwave 10 December 2010 07:36:23PM -2 points [-]

The problem of course is that majorities often believe stupid things.

When you give all agents equal weight, sure. Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for. Others are more sidelined than supporting a particular side.

The obvious thing to do is exchange information until agreement but we can't do that, at least not here.

Aumann agreement works in the case of hidden information - all you need are posteriors and common knowledge of the event alone.

While Roko's regret is evidence that Eliezer is right, it isn't the same as independent/blind confirmation that the idea is dangerous.

Roko increased his estimation and Eliezer decreased his estimation - and the amounts they did so are balanced according to the strength of their private signals. Looking at two Aumann-agreed conclusions gives you the same evidence as looking at the pre-Aumann (differing) conclusions - the same way that 10, 10 gives you the same average as 5, 15.
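
To illustrate only the arithmetic behind the "10, 10 gives the same average as 5, 15" analogy, here is a toy sketch that idealizes agreement as both parties moving to the midpoint of their estimates (real Aumann agreement is an iterated exchange of posteriors, not simple averaging):

```python
def idealized_agreement(a, b):
    # Equal-weight idealization: both parties adopt the midpoint of their estimates.
    consensus = (a + b) / 2
    return consensus, consensus

pre = (5, 15)                          # differing pre-agreement estimates
post = idealized_agreement(*pre)       # (10.0, 10.0)
assert sum(post) / 2 == sum(pre) / 2   # the pair carries the same average either way
```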

Comment author: TheOtherDave 10 December 2010 08:33:58PM 7 points [-]

Others are more sidelined than supporting a particular side.

I would prefer you not treat people avoiding a discussion as evidence that people don't differentially evaluate the assertions made in that discussion.

Doing so creates a perverse incentive whereby chiming in to say "me too!" starts to feel like a valuable service, which would likely chase me off the site altogether. (Similar concerns apply to upvoting comments I agree with but don't want to see more of.)

If you are seriously interested in data about how many people believe or disbelieve certain propositions, there exist techniques for gathering that data that are more reliable than speculating.

If you aren't interested, you could just not bring it up.

Comment author: shokwave 10 December 2010 08:45:29PM 0 points [-]

I would prefer you not treat people avoiding a discussion as evidence that people don't differentially evaluate the assertions made in that discussion.

I treat them as not having given me evidence either way. I honestly don't know how I could treat them otherwise.

Comment author: wedrifid 11 December 2010 08:34:43AM 1 point [-]

I treat them as not having given me evidence either way. I honestly don't know how I could treat them otherwise.

It is extremely hard to give no evidence by making a decision, even a decision to do nothing.

Comment author: shokwave 11 December 2010 12:16:53PM 0 points [-]

Okay. It is not that they give no evidence by remaining out of the discussion - it is that the evidence they give is spread equally over all possibilities. I don't know enough about these people to say that discussion-abstainers are uniformly in support or in opposition to the idea. The best I can do is assume they are equally distributed between support and opposition, and not incorrectly constrain my anticipations.
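
In likelihood-ratio terms this is the claim that abstaining is judged equally probable whether or not the idea is dangerous, so it moves the odds by a factor of one (all numbers below are illustrative assumptions):

```python
prior = 0.05                    # assumed pr(basilisk-danger)
p_abstain_if_dangerous = 0.5    # assumption: abstaining is equally likely either way
p_abstain_if_safe = 0.5
likelihood_ratio = p_abstain_if_dangerous / p_abstain_if_safe   # = 1.0

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio   # unchanged: abstainers shift nothing
```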

Comment author: TheOtherDave 11 December 2010 04:24:21PM 0 points [-]

the best I can do is assume they are equally distributed between support and opposition

You can do better than that along a number of different dimensions.

But even before getting there, it seems important to ask whether our unexpressed beliefs are relevant.

That is, if it turned out that instead of "equally distributed between support and opposition", we are 70% on one side, or 90%, or 99%, or that there are third options with significant membership, would that information significantly affect your current confidence levels about what you believe?

If our unexpressed opinions aren't relevant, you can just not talk about them at all, just like you don't talk about millions of other things that you don't know and don't matter to you.

If they are relevant, one thing you could do is, y'know, research. That is, set up a poll clearly articulating the question and the answers that would affect your beliefs and let people vote for their preferred answers. That would be significantly better than assuming equal distribution.

Another thing you could do, if gathering data is unpalatable, is look at the differential characteristics of groups that express one opinion or another and try to estimate what percentage of the site shares which characteristics.

Comment author: TheOtherDave 10 December 2010 09:03:21PM 0 points [-]

The sentence I quoted sounded to me as though you were treating those of us who've remained "sidelined" as evidence of something. But if you were instead just bringing us up as an example of something that provides no evidence of anything, and if that was clear to everyone else, then I'm content.

Comment author: shokwave 11 December 2010 08:24:13AM 0 points [-]

I think I had a weird concept of what 'sidelined' meant in my head when I was writing that. Certainly, it seems out of place to me now.

Comment author: Jack 10 December 2010 08:00:24PM *  3 points [-]

Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for.

I'm for. I believe Tim Tyler is for.

Aumann agreement works in the case of hidden information - all you need are posteriors and common knowledge of the event alone.

Humans have this unfortunate feature of not being logically omniscient. In such cases where people don't see all the logical implications of an argument, we can treat those implications as hidden information. If this weren't the case then the censorship would be totally unnecessary, as Roko's argument didn't actually include new information. We would have all turned to stone already.

Roko increased his estimation and Eliezer decreased his estimation - and the amounts they did so are balanced according to the strength of their private signals.

There is no way for you to have accurately assessed this. Roko and Eliezer aren't idealized Bayesian agents; it is extremely unlikely they performed a perfect Aumann agreement. If one is more persuasive than the other for reasons other than the evidence they share, then their combined support for the proposition may not be worth the same as that of two people who independently came to support it. Besides which, according to you, what information did they share exactly?

Comment author: FormallyknownasRoko 10 December 2010 08:05:01PM *  2 points [-]

I had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl.

Also, when I posted the material, I hadn't thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.

Comment author: shokwave 10 December 2010 08:35:09PM *  0 points [-]

David_Gerard, Jack, timtyler, waitingforgodel, and Vaniver do not currently outweigh Eliezer_Yudkowsky, FormallyknownasRoko, Vladimir_Nesov, and Alicorn in my mind.

It does not need to be a perfect Aumann agreement; a merely good one will still reduce the chances of overcounting or undercounting either side's evidence well below the acceptable limits.

There is no way for you to have accurately assessed this. Roko and Eliezer aren't idealized Bayesian agents; it is extremely unlikely they performed a perfect Aumann agreement.

They are approximations of Bayesian agents, and it is extremely likely they performed an approximate Aumann agreement.

To settle this particular question, however, I will pay money. I promise to donate 50 dollars to the Singularity Institute for Artificial Intelligence, independent of other plans to donate, if Eliezer confirms that he did revise his estimate down; or if he confirms that he did not revise his estimate down. Payable within two weeks of Eliezer's comment.

Comment author: TheOtherDave 10 December 2010 08:42:45PM *  1 point [-]

I'm curious: if he confirms instead that the change in his estimate, if there was one, was small enough relative to his estimate that he can't reliably detect it or detect its absence, although he infers that he updated using more or less the same reasoning you use above, will you donate or not?

Comment author: shokwave 10 December 2010 08:47:11PM *  0 points [-]

I will donate.

I would donate even if he said that he revised his estimate upwards.

I would then seriously reconsider my evaluation of him, but as it stands the offer is for him to weigh in at all, not weigh in on my side.

edit: I misparsed your comment. That particular answer would dance very close to 'no comment', but unless it seemed constructed that way on purpose, I would still donate.

Comment author: TheOtherDave 10 December 2010 09:11:44PM 0 points [-]

Yeah, that's fair. One of the things I was curious about was, in fact, whether you would take that answer as a hedge, but "it depends" is a perfectly legitimate answer to that question.

Comment author: Vaniver 10 December 2010 07:00:45PM 4 points [-]

For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.

How many of me would there have to be for that to work?

Also, why is rationalism the risk factor for this basilisk? Maybe the basilisk only turns people with brown eyes (or the appropriate mental analog) to stone.

Comment author: shokwave 10 December 2010 07:25:11PM *  0 points [-]

How many of me would there have to be for that to work?

Only one; I meant 'you' in that line to refer to Vlad. It does raise the question "how many people disagree before I side with them instead of Eliezer/Roko/Vlad". And the answer to that is ... complicated. Each person's rationality, modified by how much it was applied in this particular case, is the weight I give to their evidence; then the full calculation of evidence for and against should bring my posterior to within epsilon of my original prior, but preferably below it, for me to decide the idea is safe.
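
A rough sketch of that decision rule, with made-up numbers: weight each person's reaction by estimated rationality times how fully it was applied here, and call the idea safe only if the posterior lands within epsilon of (preferably below) the original prior:

```python
import math

def idea_seems_safe(prior, reactions, epsilon=0.01):
    # reactions: (rationality, how_much_applied, reacted_negatively); all values assumed.
    log_odds = math.log(prior / (1 - prior))
    for rationality, applied, reacted_negatively in reactions:
        weight = rationality * applied
        log_odds += weight if reacted_negatively else -weight
    posterior = 1 / (1 + math.exp(-log_odds))
    return posterior <= prior + epsilon, posterior

safe, p = idea_seems_safe(0.05, [(0.9, 1.0, True), (0.9, 0.8, True),
                                 (0.8, 1.0, True), (0.7, 1.0, False)])
print(safe, round(p, 3))  # False, 0.227 with these illustrative numbers
```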

Also, why is rationalism the risk factor for this basilisk?

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Comment author: Vaniver 11 December 2010 01:05:23AM 2 points [-]

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Generally, if your immune system is fighting something, you're already sick. Most pathogens are benign or don't have the keys to your locks. This might be a similar situation - the idea is only troubling if your lock fits it - and it seems like there would then be rational methods to erode that fear (the way the immune system mobs an infection).

Comment author: shokwave 11 December 2010 07:26:33AM 0 points [-]

The analogy definitely breaks down, doesn't it? What I had in mind was Eliezer, Roko, and Vlad saying "I got sick from this infection" and you saying "I did not get sick from this infection" - I would look at how strong each person's immune system is.

So if Eliezer, Roko, and Vlad all had weak immune systems and yours was quite robust, I would conclude that the bacterium in question is not particularly virulent. But if three robust immune systems all fell sick, and one robust immune system did not, I would be forced to decide between some hypotheses:

  • the first three are actually weak immune systems
  • the fourth was not properly exposed to the bacterium
  • the fourth has a condition that makes it immune
  • the bacterium is not virulent, the first three got unlucky

On the evidence I have, the middle two seem more likely than the first and last hypotheses.

Comment author: Vaniver 11 December 2010 04:18:31PM 1 point [-]

I agree - my money is on #3 (but I'm not sure whether I would structure it as "fourth is immune" or "first three are vulnerable" - both are correct, but which is the more natural wording depends on the demographic response).

Comment author: David_Gerard 10 December 2010 07:58:43PM 2 points [-]

Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium then the immune system would be the risk factor.

Er, are you describing rationalism (I note you say that and not "rationality") as susceptible to autoimmune disorders? More so than in this post?

Comment deleted 10 December 2010 06:54:55PM [-]

Comment author: shokwave 10 December 2010 07:03:59PM -1 points [-]

Ensuring that is part of being a rationalist; if EY, Roko, and Vlad (apparently Alicorn as well?) were bad at error-checking and Vaniver was good at it, that would be sufficient to say that Vaniver is a better rationalist than E R V (A?) put together.

Comment author: David_Gerard 10 December 2010 07:36:55PM *  7 points [-]

Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.

"For the computer security community, the moral is obvious: if you are designing a system whose functions include providing evidence, it had better be able to withstand hostile review." - Ross Anderson, RISKS Digest vol 18 no 25

Until a clever new thing has had decent outside review, it just doesn't count as knowledge yet.

Comment author: shokwave 10 December 2010 07:46:44PM -2 points [-]

Certainly. However, error-checking oneself is notoriously less effective than having outsiders do so.

That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.

Comment author: David_Gerard 10 December 2010 08:02:24PM *  3 points [-]

That Eliezer wrote the Sequences and appears to think according to their rules and is aware of Löb's Theorem is strong evidence that he is good at error-checking himself.

That's pretty much a circular argument. How's the third-party verifiable evidence look?

Comment author: shokwave 10 December 2010 08:37:18PM *  0 points [-]

How's the third-party verifiable evidence look?

I dunno. Do the Sequences smell like bullshit to you?

edit: this is needlessly antagonistic. Sorry.

Comment author: David_Gerard 10 December 2010 08:43:08PM *  5 points [-]

Mostly not - but then I am a human full of cognitive biases. Has anyone else in the field paid them any attention? Do they have any third-party notice at all? We're talking here about somewhere north of a million words of closely-reasoned philosophy with direct relevance to that field's big questions, for example. It's quite plausible that it could be good and have no notice, because there's not that much attention to go around; but if you want me to assume it's as good as it would be with decent third-party tyre-kicking, I think I can reasonably ask for more than "the guy that wrote it and the people working at the institute he founded agree, and hey, do they look good to you?" That's really not much of an argument in favour.

Put it this way: I'd be foolish to accept cryptography with that little outside testing as good, here you're talking about operating system software for the human mind. It needs more than "the guy who wrote it and the people who work for him think it's good" for me to assume that.

Comment author: TheOtherDave 10 December 2010 08:45:22PM 0 points [-]

Upvoted to zero because of the edit.

Comment author: Manfred 10 December 2010 07:30:36PM 3 points [-]

I haven't read fluffy (I have named it fluffy), but I'd guess it's an equivalent of a virus in a monoculture: every mode of thought has its blind spots, and so to trick respectable people on LW, you only need an idea that sits in the right blind spots. No need for general properties like "only infectious to stupid people."

Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention, but it's still entirely possible. This also explains the apparent resistance of outsiders, without need for Eliezer to be lying when he says he thinks fluffy was wrong.

Comment author: shokwave 10 December 2010 07:48:13PM 0 points [-]

Could also be that outsiders are resistant because they have blind spots where the idea is infectious, and respectable people on LW are respected because they do not have the blind spots - and so are infected.

I think these two views are actually the same, stated as inverses of each other. The term blind spot is problematic.

Comment author: Manfred 10 December 2010 08:02:34PM 0 points [-]

I think the term blind spot is accurate, unless (and I doubt it) Eliezer was lying when he later said fluffy was wrong. What fits the bill isn't a correct scary idea, but merely a scary idea that fits into what the reader already thinks.

Maybe fluffy is a correct scary idea, and your allocation of blind spots (or discouraging of the use of the term) is correct, but secondhand evidence points towards fluffy being incorrect but scary to some people.

Comment author: Alicorn 10 December 2010 07:34:38PM 0 points [-]

Alicorn throws a bit of a wrench in this, as I don't think she shares as many blind spots with the others you mention

I'm curious about why you think this.

Comment author: Manfred 10 December 2010 07:50:47PM 2 points [-]

Honestly? Doesn't like to argue about quantum mechanics. That I've seen :D Your posts seem to be about noticing where things fit into narratives, or introspection, or things other than esoteric decision theory speculations. If I had to come up with an idea that would trick Eliezer and Vladimir N into thinking it was dangerous, it would probably be barely plausible decision theory with a dash of many worlds.

Comment author: Jack 10 December 2010 08:34:28PM 0 points [-]

I was also surprised by your reaction to the argument. In my case this was due to the opinions you've expressed on normative ethics.

Comment author: Alicorn 10 December 2010 08:35:09PM 0 points [-]

How are my ethical beliefs related?

Comment author: Jack 10 December 2010 08:49:22PM 0 points [-]

Answered by PM

Comment author: Vladimir_Nesov 10 December 2010 06:58:25PM *  2 points [-]

Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.

This equivocates on the intended meaning of turning to stone in the original discussion you replied to. Fail. (But I understand what you meant now.)

Comment author: shokwave 10 December 2010 07:06:26PM 1 point [-]

Sorry, I should not have included censoring specifically. Change the "read:"s to 'engages, reacts negatively', 'engages, does not react negatively' and the argument still functions.

Comment author: Vladimir_Nesov 10 December 2010 07:09:29PM *  2 points [-]

The argument does seem to function, but you shouldn't have used the term in a sense conflicting with the intended one.