
Viliam_Bur comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion

63 points · Post author: lukeprog · 18 January 2012 10:30PM




Comment author: XiXiDu · 20 January 2012 10:46:56AM · 34 points

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I am the wrong person to ask whether "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That is of course true of most humans, but I think there are quite a few people out there who are at least at the same level. I also find it quite funny to criticize the very people on whose work your arguments for risks from AI depend.

But that is beside the point. Whatever their truth, those statements are clearly damaging when it comes to public relations.

If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.

Are you able to solve Friendly AI without much more money and without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then at some point you will either need much more money or you will have to convince actual academics to work for you for free. And, most importantly, if you don't think you will be the first to invent AGI, then you need to talk to a lot of academics, companies, and probably politicians to convince them that there is a real risk and that they need to implement your Friendly AI theory.

It is of the utmost importance to have an academic degree and a reputation that make people listen to you. Because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases, and you are not remotely close to our rationality standards." Because the moment you utter the word "Singularity", you have already lost. The very name of your charity shows that you underestimate the importance of signaling.

Do you think IBM, Apple, or DARPA care about a blog and a popular fanfic? Do you think you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky, they might ask someone on their staff about you. And if you are really lucky, they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their adolescent science-fiction addiction (I didn't write that line myself; it is from an email conversation with a top-notch person who didn't give me permission to publish it). In any case, you won't make them listen to you, let alone do what you want.

Compare the following:

Eliezer Yudkowsky, research fellow of the Singularity Institute.

Education: -

Professional Experience: -

Awards and Honors: A lot of karma on Less Wrong, and many people like his Harry Potter fanfiction.

vs.

Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.

Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D. in mathematics, a B.S. in electrical engineering and computer science, and an M.S. in physics and computer science.

Professional Experience: He has worked on various projects with renowned people, making genuine contributions. He is the author of numerous studies and papers.

Awards and Honors: He has received various awards and is listed in Who's Who in Computer Science.

Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on Less Wrong; the other doesn't have enough time for that.

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point blow up in your face. I have chatted with a lot of people who left Less Wrong and now portray Less Wrong and SI negatively, and their number is growing. Many won't even participate here because members are unwilling to engage with them charitably. That kind of behavior causes them to band together against you. Well-kept gardens may die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.

Why does that RationalWiki entry about Less Wrong exist? You are just lucky that they are the only people who really care about Less Wrong and SI. What do you think will happen if you keep acting as you do and real experts start to feel uncomfortable about your statements, or even threatened by them? It only takes one top-notch person who becomes seriously bothered to damage your reputation permanently.

Comment author: Viliam_Bur · 20 January 2012 03:17:40PM · 7 points

I mostly agree with the first 3/4 of your post. However...

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point blow up in your face. I have chatted with a lot of people who left Less Wrong and now portray Less Wrong and SI negatively, and their number is growing. Many won't even participate here because members are unwilling to engage with them charitably. That kind of behavior causes them to band together against you. Well-kept gardens may die by pacifism; others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

You can't make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites with a "no censorship, except in extreme cases" policy, because the typical consequence of such a policy is some users attacking other users (calibrating each attack carefully to avoid moderator action) and other users producing huge amounts of noise. And that just wastes my time.

People leaving LW should be considered on a case-by-case basis. They are not all in the same category.

Why does that RationalWiki entry about Less Wrong exist?

To express the opinions of RationalWiki's authors about Less Wrong, probably. And that opinion seems to be that "belief in many worlds + criticism of science = pseudoscience".

I agree with them that "nonstandard belief + criticism of science = high probability of pseudoscience". Except that: (1) among quantum physicists, belief in many worlds is not completely foreign; (2) the criticism of science seems rational to me, and, to be fair, scholarship is an officially recognized virtue on LW; (3) the criticism of naive Friendly AI approaches is correct, though I doubt SI's ability to produce something better (so this part really may be crankery), but the rest of LW again seems rational to me.

Now, how rational are the arguments on the talk page of that RationalWiki entry? See: "the [HP:MoR link] is to a bunch of crap", "he explicitly wrote [HP:MoR] as propaganda and LessWrong readers are pretty much expected to have read it", "The stuff about 'luminosity' and self-help is definitely highly questionable", "they casually throw physics and chemistry out the window and talk about nanobots as if they can exist", "I have seen lots of examples of 'smart' writing, but have yet to encounter one of 'intelligent' writing", "bunch of scholastic idiots who think they matter somehow", "Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences", "Poor writing (in terms of clarity)", "[the word 'emergence'] is treated as disallowed vocabulary", "I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day", etc. To be fair, there are also some positive voices, such as: "Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking" and "I believe we have a wiki here about people who pursue ideas past the point of actual wrongness".

Seems to me like someone has a hammer (a wiki for criticizing pseudoscience) and suddenly everything unusual becomes a nail.

You are just lucky that they are the only people who really care about Less Wrong and SI.

Frankly, most people don't care about Less Wrong, SI, or RationalWiki at all.