Viliam_Bur comments on Our Phyg Is Not Exclusive Enough - Less Wrong

25 [deleted] 14 April 2012 09:08PM


Comment author: Viliam_Bur 15 April 2012 05:19:08PM *  15 points [-]

Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step. Minus the discussion about definitions of the word phyg, what exactly are we talking about?

When a tree falls down in a LessWrong forest, why there is a "sound":

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

Because people on LW pretend they know some things better than everyone else, and that's an open challenge that someone should go and kick their butts, preferably literally. Only strong or popular people are allowed to appear better. What's worse, people on LW have the courage to disagree even with some popular people, and that's pretty much insane.

When a tree falls down in a LessWrong forest, why there isn't a "sound":

There are no known examples of families broken when a family member refuses to submit to eternal knowledge of the Scriptures. (Unless such stories are censored here, of course.)

There are no known examples of violence or blackmail towards a former LW participant who decided to stop reading LW. (Unless such stories are censored here, of course.)

Minus the typical internet procrastination, there are no known examples of people who have lost years of their time and thousands of dollars, ruined their social and professional lives in their blind following of the empty promises LW gave them. (Unless such stories are censored here, of course.)

What next? Any other specific accusations? If no, why in the name of the mighty Cthulhu are we even worrying about the phyg-stuff? Just because someone may find throwing such accusations funny? Are we that prone to trolling?

Let's talk about a more fruitful topic, such as: "is there a way to make the Sequences more accessible to a newcomer?"

Comment author: Anatoly_Vorobey 16 April 2012 06:44:23AM 27 points [-]

Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.

No, that isn't it. LW isn't at all special in that respect - a huge number of specialized communities exist on the net which talk about "crazy stuff", but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that's not really warranted.

Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.

No, that isn't it. Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird. Khan Academy is challenging the school system, and nobody thinks they're phyggish. Attempts to change the way people think are utterly commonplace, both small-scale and large-scale. And the part about books and public libraries is just weird (what?).

Because people on LW pretend they know some things better than everyone else, and that's an open challenge that someone should go and kick their butts, preferably literally.

Unwarranted applause lights again. Everybody pretends they know some things better than everyone else. Certainly any community that rallies around experts on some particular topic does. With "preferably literally" you cross over into whining-victimhood territory.

What's worse, people on LW have the courage to disagree even with some popular people, and that's pretty much insane.

The self-pandering here is particularly strong, almost middle-school grade stuff.

You've done a very poor job trying to explain why LW is accused of being phyggish.

There are no known examples of families broken when a family member refuses to submit to eternal knowledge of the Scriptures. [...] There are no known examples of violence or blackmail towards a former LW participant who decided to stop reading LW. [...] Minus the typical internet procrastination, there are no known examples of people who have lost years of their time and thousands of dollars, ruined their social and professional lives in their blind following of the empty promises LW gave them.

This, on the other hand, is a great, very strong point that everyone who finds themselves wary of (perceived or actual) phyggishness on LW should remind themselves of. I'm thinking of myself in particular, and thank you for this strong reminder, so forcefully phrased. I have to be doing something wrong, since I frequently ponder this or that comment on LW that seems to exemplify phyggish thinking to me, but I never counter it with something like what I just quoted.

Comment author: Viliam_Bur 16 April 2012 09:33:48AM 18 points [-]

Thanks for the comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match... and then those which don't.

My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only "weird" is pretty normal. Being "weird + thorough", for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. The majority will make fun of such individuals; they are seen as being at the bottom of the pecking order, and they kind of accept it.

The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying "yeah, we read sci-fi about parallel universes, and we know it's not real, ha-ha silly us" and saying "actually, our interpretation of quantum physics is right, and you are wrong, that's a fact, no excuses". This is the part that makes people angry. You are allowed to take the position of authority only if you are a socially accepted authority. (A university professor is allowed to speak about quantum physics in this manner, a CEO is allowed to speak about money this way, a football champion is allowed to speak about football this way, etc.) This is breaking a social rule, and it has consequences.

Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird.

A self-help book is safe. A self-help organization, not so much. (I mean an organization of people trying to change themselves, such as Alcoholics Anonymous, not a self-help publishing/selling company.)

The Khan academy is challenging the school system, and nobody thinks they're phyggish.

They are supplementing the school system, not criticizing it. The schools can safely ignore them. Khan Academy is admired by some people, but generally it remains at the bottom of the pecking order. This would change, for example, if they started openly criticizing the school system and telling people to take their children out of schools.

Generally I think that when people talk about phygs, the reason is that their instinct is saying: "inside your group, a strong subgroup is forming". A survival reaction is to call the attention of the remaining group members, so that they can destroy this subgroup together before it becomes strong enough. You can avoid this reaction if the subgroup signals weakness, or if it signals loyalty to the current group leadership; in both cases, the subgroup does not threaten the existing order.

Assuming this instinct is real, we can't change it; we can only avoid triggering the reaction. How exactly? One way is to signal harmlessness; but this seems incompatible with our commitment to truth and the spirit of tsuyoku naritai. Another way is to fly below the radar by using obscure technical speech; but this seems incompatible with our goal of raising the sanity waterline (we must be comprehensible to the public). Yet another way is to signal loyalty to the regime, such as the Singularity Institute publishing in peer-reviewed journals. Even this is difficult, because irrationality is very popular, so by attacking irrationality we inevitably attack many popular things. We should choose our battles wisely. But this is the way I would prefer. Perhaps there is yet another way that I forgot.

Comment author: Pentashagon 04 February 2013 07:20:14PM -2 points [-]

Thanks for the comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match... and then those which don't.

If the phyg-meme gets really bad we can just rename the site "lessharmful.com".

Comment author: David_Gerard 16 April 2012 11:16:32AM *  21 points [-]

It's not the Googleability of "phyg". One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong's potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, as exemplified by this detailed exploration of how to destroy semiconductor manufacture to head off uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I'm not sure this succeeded in reassuring them.)

But, y'know, if you don't want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.

Edit: Technically just sabotage, not "terrorism" per se. Not that that would assuage qualms non-negligibly.

Comment author: loup-vaillant 16 April 2012 05:05:34PM 13 points [-]

On your last point, I have to cite our all-*cough*-wise Professor Quirrell:

"Such dangers," said Professor Quirrell coldly, "are to be discussed in offices like this one, not in speeches. The fools […] are not interested in complications and caution. Present them with anything more nuanced than a rousing cheer, and you will face your war alone."

Comment author: [deleted] 16 April 2012 11:50:26AM *  3 points [-]

Nevermind that there were no actual plans for destroying fabs, and that the whole "terrorist plot" seems to be a collective hallucination.

Nevermind that the author in question has exhaustively argued that terrorism is ineffective.

Comment author: David_Gerard 16 April 2012 12:02:23PM 9 points [-]

Yeah, but he didn't do it right there in that essay. And saying "AI is dangerous, stopping Moore's Law might help, here's how fragile semiconductor manufacture is, just saying" still read to someone (including several commenters on the post itself) as bloody obviously implying terrorism.

You're pointing out it doesn't technically say that, but multiple people coming to that essay have taken it that way. You can say "ha! They're wrong", but I nevertheless submit that, if PR is a consideration, the damage done by the essay is unlikely to be outweighed by using rot13 for SEO.

Comment author: [deleted] 16 April 2012 12:25:07PM *  0 points [-]

Yes, I accept that it's a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I'm not saying anything like "Ha! They're wrong!" I'm lamenting the lamentable state of affairs that led so many people to jump to a false conclusion.

Comment author: Nick_Tarleton 17 April 2012 03:38:31AM 4 points [-]

Meaning does not excuse impact, and on some level you appear to still be making excuses. If you're going to reason about impressions (I'm not saying that you should, it's very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.

Comment author: [deleted] 17 April 2012 10:18:11AM 0 points [-]

I'm not making excuses.

Talking about one aspect of a problem does not imply that other aspects of the problem are not important. But honestly, that debate is stale and appears to have had little impact on the author. So what's the point in rehashing all of that?

Comment author: David_Gerard 16 April 2012 01:51:35PM *  7 points [-]

"Just saying" is really not a disclaimer at all. c.f. publishing lists of abortion doctors and saying you didn't intend lunatics to kill them - if you say "we were just saying", the courts say "no you really weren't."

We don't have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko's Forbidden Thread), but "just saying" in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.

Comment author: [deleted] 16 April 2012 02:34:30PM *  1 point [-]

As far as I can tell, "just saying" is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don't recall saying anything about disclaimers, either.

So what are you really trying to say here?

Comment author: TheOtherDave 16 April 2012 03:24:48PM 4 points [-]

I understood "just saying" as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience's inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn't something you can take for granted.

Comment author: David_Gerard 16 April 2012 03:25:58PM *  7 points [-]

It's a name for the style of argument: that it's not advocating people do these things, it's just saying that uFAI is a problem, slowing Moore's Law might help and by the way here's the vulnerabilities of Intel's setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.

Note that I am not intending to claim that the implication was Gwern's original intention (as I note way up there, I don't think it is); I'm saying it's a property of the text as rendered. And that me saying it's a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they're adding 2 and 2 and getting 666.

Comment author: [deleted] 16 April 2012 03:42:05PM 0 points [-]

It's completely orthogonal to the point that I'm making.

If somebody reads something and comes to a strange conclusion, there's got to be some sort of five-second level trigger that stops them and says, "Wait, is this really what they're saying?" The responses to the essay made it evident that there's a lot of people that failed to have that reaction in that case.

That point is completely independent from any aesthetic/ethical judgments regarding the essay itself. If you want to debate that, I suggest talking to the author, and not me.

Comment author: khafra 16 April 2012 03:16:50PM 1 point [-]

I agree that it's not fair to blame LW posters for the problem. However, I can't think of any route to patching the problem that doesn't involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.

Comment author: Viliam_Bur 16 April 2012 01:52:42PM 1 point [-]

Anyway, we shouldn't make it too easy for people to reach the false conclusion, and we should err on the side of caution.

Having said this, I join your lamentations.

Comment author: jacoblyles 14 October 2012 07:03:27PM *  4 points [-]

Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don't know where they got that from. Certainly not these pages.

Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!")

LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.

Comment author: gwern 14 October 2012 07:54:41PM 0 points [-]

LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.

Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.

Comment author: jacoblyles 14 October 2012 08:07:30PM *  0 points [-]

Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.

One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.

Comment author: gwern 14 October 2012 08:38:22PM 7 points [-]

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.

I don't know how you could read LW and not realize that we certainly do accept precautionary principles ("running on corrupted hardware" has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal's mugging in the last week, neither of which says 'you should just bite the bullet'!), and that libertarianism is heavily overrepresented compared to the general population.

One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.

No, one of the 'big black marks' on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There's nothing particular to SIAI/LW there.

Comment author: jacoblyles 15 October 2012 05:02:11PM *  2 points [-]

It's true that lots of Utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.

The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and he convinces them that he's not a crackpot. The fate of not just human civilization, but all of galactic civilization is at stake.

So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.

I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with.

And yes, there are the occasional bits of cautionary principles on LW. But they are contradicted and overwhelmed by "shut up and calculate", which says trust your arithmetic utilitarian calculus and not your ugh fields.

Comment author: TheOtherDave 15 October 2012 05:20:13PM 4 points [-]

if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more preventative strategy may be justified as well. [..] I'm pretty sure this is going to evolve into an evil terrorist organization

I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization.

The question is whether we believe L3, and whether we ought to believe L3. Many of us don't seem to believe this.
Do you believe it?
If so, why?

Comment author: gwern 18 October 2012 03:14:18PM *  5 points [-]

I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with.

'Pretty sure', eh? Would you care to take a bet on this?

I'd be happy to go with a few sorts of bets, ranging from "an organization that used to be SIAI or CFAR is put on the 'Individuals and Entities Designated by the State Department Under E.O. 13224' or 'US Department of State Terrorist Designation Lists' within 30 years" to ">=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years" etc. I'd be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give.

If you're worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I'd have to reduce my bet substantially).

Comment author: gwern 16 April 2012 02:44:47PM 6 points [-]

No, that isn't it. Every self-help book (of which there's a huge industry, and most of which are complete crap) is "trying to change the way people think", and nobody sees that as weird.

Seriously?

Comment author: Anatoly_Vorobey 16 April 2012 06:46:03PM 3 points [-]

Which part of my comment are you incredulous about?

Comment author: gwern 16 April 2012 07:00:51PM 13 points [-]

That nobody sees self-help books as weird or cultlike.

Comment author: John_Maxwell_IV 16 April 2012 08:32:19PM *  1 point [-]

redacted

Comment author: whowhowho 30 January 2013 06:44:08PM *  0 points [-]

Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step.

That is one of the central fallacies of LW. The Sequences generally don't settle issues in a step-by-step way. They are made up of postings, each of which is followed by a discussion often containing a lot of "I don't see what you mean" and "I think that is wrong because". The stepwise model may be attractive, but that doesn't make it feasible. Science isn't that linear, and most of the topics dealt with are philosophy...nuff said.