wedrifid comments on Open thread, February 15-28, 2013 - Less Wrong Discussion

Post author: David_Gerard 15 February 2013 11:17PM (5 points)

Comment author: wedrifid 26 February 2013 02:42:48PM * 11 points

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

Answering the rhetorical question, because the obvious answer is not the one you imply [EDIT: I notice that J Taylor has made a far superior reply already]: Yes, it limits the ongoing reputational damage.

I'm not arguing with the moderation policy, but I will argue with bad arguments. Continue to implement the policy: you have the authority to do so, Eliezer has the power on this particular website to grant that authority, most people don't care enough to argue against that behavior (I certainly don't), and you can always delete the objections with only minimal consequences. But once you choose to make arguments that appeal to reason rather than to the preferences of the person with legal power, you can be wrong.

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

I've had people come to me who are traumatised by basilisk considerations. From what I can tell, almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (i.e., direct self-reports that are coherent) that a significant reason they "take the basilisk seriously" is that Eliezer considers it a sufficiently big deal to take such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game-theory question to which there are multiple practical answers.

So please, just go back to deleting basilisk talk. That would be way less harmful than trying to persuade people with reason.

Comment author: David_Gerard 27 February 2013 02:06:09PM * 6 points

I've had people come to me who are traumatised by basilisk considerations. From what I can tell, almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (i.e., direct self-reports that are coherent) that a significant reason they "take the basilisk seriously" is that Eliezer considers it a sufficiently big deal to take such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game-theory question to which there are multiple practical answers.

I get the people who've been frightened by it because EY seems to take it seriously, too. (Dmytry also gets them, which is part of why he's so perpetually pissed off at LW. He does his best to help, as a decent person would.) More generally, people distressed by it feel they can't talk about it on LW, so they come to RW contributors; addressing this was why it was made a separate article. (I have no idea why Warren Ellis and then Charlie Stross happened to latch onto it. I wish they hadn't, because it was totally not ready, so I had to spend the past few days desperately fixing it up, and it's still terrible.) That EY does not in fact think it's feasible or important is a point I need to address in the last section of the RW article, to calm this concern.

Comment author: jbeshir 27 February 2013 07:06:11PM * 3 points

It would be nice if you'd also address the extent to which it misrepresents other LessWrong contributors as thinking it is feasible or important (sometimes to the point of mocking them based on its own misrepresentation). People around LessWrong engage in hypothetical what-if discussions a lot; it doesn't mean that they're seriously concerned.

Lines like "Though it must be noted that LessWrong does not believe in or advocate the basilisk ... just in almost all of the pieces that add up to it" are also pretty terrible, given that we know only a fairly small percentage of "LessWrong" as a whole even considers unfriendly AI to be the biggest current existential risk. Really, this kind of misrepresentation of alleged, and dubiously actually held, extreme views as the perspective of the entire community is the bigger problem with both the LessWrong article and this one.

Comment author: David_Gerard 01 March 2013 05:25:28PM * 5 points

The article is still terrible, but it's better than it was when Stross linked to it. The greatest difficulty is describing the thing and the fuss accurately, while explaining it to normal intelligent people, without them pattern-matching it to "serve the AI God or go to Hell"; this is proving the hardest part. (Let's assume for a moment that 0% of them will sit down with 500K words of sequences.) I'm trying to leave it alone for a bit, having other things to do.

Comment deleted 02 March 2013 07:59:58AM *
Comment author: wedrifid 02 March 2013 08:38:04AM 0 points

I don't see why the "pattern matching" is invalid.

It is the things that tend to go with it that are the problem, such as failing to understand which facets are similar and which are different, and missing the most important part of the particular case because of distraction by thoughts relevant to a different scenario.

Comment author: jbeshir 06 March 2013 05:12:10PM 0 points

The pattern matching's conclusions are wrong because the information it is matching on is misleading. The article implied that there was widespread belief that the future AI should be assisted, and this was wrong. Last I looked, it still incorrectly implied widespread support for other beliefs.

This isn't an indictment of pattern matching so much as a need for the information to be corrected.

Comment deleted 07 March 2013 05:53:33AM *
Comment author: jbeshir 07 March 2013 02:34:26PM 0 points

Assuming that by "it" you refer to the decision-theory work, the claim that UFAI is a threat, and the Many Worlds Interpretation (things they actually have endorsed in some fashion), it would be fair enough to talk about how the administrators have posted those things and described them as conclusions of the content, but it should accurately convey that this was the extent of the "pushing". Written from a neutral point of view, with the beliefs accurately represented, informing people that the community's "leaders" have posted arguments for some unusual beliefs (which readers are entitled to judge as they wish) as part of the content would be perfectly reasonable.

It would also be reasonable to talk about the extent to which atheism is implicitly pushed in a stronger fashion; theism is treated as assumed wrong in examples around the place, not constantly, but to a much greater degree. I vaguely recall that the community has non-theists as a strong majority.

The problem is that this is simply not what the articles say. The articles strongly imply that the more unusual beliefs noted above are widely accepted: not merely that they are posted in the content, but that they are believed by Less Wrong members, part of the identity of someone who is a Less Wrong user. This is simply wrong. And the difference is significant; it amounts to accusing everyone interested in the works of a writer of being a proponent of that writer's most unusual beliefs, discussed only in a small portion of their total writings. The articles should be fixed so that they convey an accurate impression.

The Scientology comparison is misleading in that Scientology attempts to use cult practices to achieve homogeneity of beliefs, whereas Less Wrong does not; the poll solidly demonstrates that homogeneity of beliefs is not happening. A better analogy would be a community of fans of the works of a philosopher who wrote a great deal and came to some outlandish conclusions in parts, where the fans largely don't believe the outlandish parts. Yes, the outlandish conclusions are worth discussing, but presenting them as the beliefs of the community is wrong even if the philosopher alleges it all fits together. Having an accurate belief here matters, because it has greatly different consequences: there are major practical differences in how useful you'd expect the rest of the content to be, and in how you'd perceive members of the community.

At present, the articles largely read as "smear pieces" against Less Wrong's community. As a clear and egregious example, they allege that the community is "libertarian" (clearly a shot at LW, given RW's readerbase), when the surveys tell us that the most common political affiliation is "liberalism", with "libertarianism" second and "socialism" third; one of those surveys is cited in the article itself. Many of the problems here are not subtle.

If by "it" you meant the evil AI from the future thing, it most certainly is not "the belief pushed by the organization running this place"; any reasonable definition of "pushing" something would have to meancommunicating it to people and attempting to convince them of it, and if anything they're credibly trying to stop people from learning about it. There are no secret "higher levels" of Less Wrong content only shown to the "prepared", no private venues conveying it to members as they become ready, so we can be fairly certain given publicly visible evidence that they aren't communicating it or endorsing it as a belief to even 'selected' members.

It doesn't obviously follow from anything posted on Less Wrong; it requires putting a whole bunch of parts together and assuming the result is true.

Comment author: Kawoomba 07 March 2013 09:19:20AM 0 points

I just watched Neil Tyson, revered by many, on The Daily Show answer the question "What do you think is the biggest threat to mankind's existence?" with "The threat of asteroids."

Now, if there is a much better case to be made for future AI as an x-risk than for asteroids, then by the transitive property Neil Tyson's beliefs would pattern-match to Xenu even better than MIRI's beliefs do: a fiery death from the sky.

Yet we give NdGT the benefit of the doubt as to how he came to his conclusion; why not do the same with MIRI?

Comment deleted 11 March 2013 01:56:51PM *
Comment author: Kawoomba 11 March 2013 02:33:28PM 0 points

Of course, you're awesome and extremely rational and everything

Awww thanks!

Comment author: IlyaShpitser 07 March 2013 10:00:02AM * 2 points

Because the asteroid threat is real, and has caused mass extinction events before, probably more than once. AI takeoff may or may not be a real threat, and likely isn't even possible. There is a qualitative difference between these two.

Also: MIRI has a financial incentive to lie and/or exaggerate the threat; Tyson does not. Someone might think the AI threat is just a scam MIRI folks use to pump impressionable youngsters for cash.

Comment author: Kawoomba 07 March 2013 10:09:16AM 0 points

Time-scales involved, in a nutshell. What is the chance of an extinction-level event from asteroids while we still have all our eggs in one basket (on Earth), compared to, e.g., threats from AI, bioengineering, et cetera?

X-risk-scale asteroid impacts every few tens of millions of years come out to a low probability per century, especially considering that, e.g., the impact that caused the demise of the dinosaurs wouldn't even be a true x-risk for humans.

I'd agree that the error bars on the estimated asteroid x-risk probabilities are smaller than those on the estimated x-risk from, e.g., AI, but even a small chance of the AI x-risk would beat out the minuscule one from asteroids, don't you think? (A rough back-of-envelope comparison is sketched below.)
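
A minimal back-of-envelope sketch of the time-scale argument above. The numbers are illustrative assumptions, not estimates from this thread: a ~100-million-year mean gap between extinction-level impacts, and a deliberately small 1% per-century AI x-risk chosen just to show the shape of the comparison.

```python
# Back-of-envelope comparison of per-century extinction probabilities.
# All figures are illustrative assumptions, not established estimates.

ASTEROID_INTERVAL_YEARS = 100_000_000  # assumed mean gap between extinction-level impacts
CENTURY_YEARS = 100

# Crude per-century probability, treating impacts as a roughly uniform process.
p_asteroid = CENTURY_YEARS / ASTEROID_INTERVAL_YEARS  # 1e-6

p_ai = 0.01  # deliberately small, purely illustrative per-century AI x-risk

print(f"Asteroid x-risk per century: {p_asteroid:.0e}")
print(f"Assumed AI x-risk per century: {p_ai:.0e}")
print(f"Ratio (AI / asteroid): {p_ai / p_asteroid:,.0f}x")  # 10,000x
```

Under these assumptions the AI term dominates by four orders of magnitude, which is the shape of the argument even when the exact numbers are disputed; only the error bars differ.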