Emile comments on Reply to Yvain on 'The Futility of Intelligence' - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (15)
I still think you should just read the sequences before posting dozens of top-level posts like this one.
I agree that telling newcomers to "go read the sequences" is rude and counter-productive, but according to Yvain you've written more than 80 top-level posts!
I sometimes consider LessWrong to be "a discussion forum for people who have read the sequences": not that having read the sequences turns one into an übermensch, but it provides a common vocabulary and a baseline of expected knowledge that keeps the discussion from rehashing the basics over and over. Just as there could be a discussion forum for people who have read Marx: whether or not the members agree with each other, at least they wouldn't be going over basic stuff or talking right past each other because of radically different interpretations of some terms. And in such a forum, arguing against Marxism without bothering to read any Marx would indeed be kinda rude.
That being said, I greatly respect your work emailing AI researchers!
XiXiDu hasn't read the sequences? How is he able to quote-mine Eliezer so effectively?
To be fair, he doesn't quote-mine Eliezer particularly effectively. Just often. I could quote-mine to make Eliezer look like a dick far more effectively (having actually read Eliezer's posts and understood what he says well enough to pick out the ones that can be twisted the worst).
See this recent discussion with Yvain:
XiXiDu:
Vladimir (not Yvain, my mistake):
I was going to upvote this comment until I got to the last line. XiXiDu's email campaign is almost certainly doing more harm than good.
That's surprising; I found having a set of views from outside the LW/SIAI cluster quite refreshing. What do you think was bad about them? My only quibble is that I found some of the questions awkward/leading/irrelevant; I would have preferred a better set of questions. But XiXiDu improved them over time.
What WrongBot says. I approve of the project of getting outside views, and I approve of the idea of making more AI researchers aware of AI as a possible existential risk. (From what I've heard, SIAI is quietly doing this themselves with some of the more influential groups.) But XiXiDu doesn't understand SIAI's actual object-level claims, let alone the arguments that link them, and he writes AI researchers in a style that looks crankish.
I can hardly think of a better way to prejudice researchers against genuinely examining their intuitions than a wacky letter asking their opinions on an array of absurd-sounding claims with no coherent structure to them, which is exactly what he presents them with.
Agreed - some of his questions were cringe-inducing, but overall, I appreciated that series of posts because it's interesting to hear what a broad range of AI researchers have to say about the topic; some of the answers were insightful and well-argued.
I agree that sounding crankish could be a problem, but I don't think XiXiDu was presenting himself as writing in LW/SIAI's name. Crankiness from some LessWrongers tarring the reputation of Eliezer's writings is hard to avoid anyway: the main problem is that there's no clear way to refer to Eliezer's writings. "The Sequences" is obscure and covers too much stuff, some of which isn't Eliezer's; "Overcoming Bias" worked at the time; and "Less Wrong" is a name that wasn't even in use when most of the core Sequences were written, and now mostly refers to the community.
I understand them well enough for the purpose of asking researchers a few questions. My karma score was over 5700 at one point. Do you think that would have been possible without a basic understanding of some of the underlying ideas?
I think this is just unfair. I do not think my email or the questions I asked were wrong. There is also no way to ask a lot of researchers about this topic without sounding a bit wacky.
The alternatives are to 1) tell them to read the Sequences, or 2) not ask them at all and just trust Eliezer Yudkowsky. 1) won't work, since they have no reason to suspect that Eliezer Yudkowsky possesses some incredible secret knowledge they lack. 2) is not an option for me: he could tell me anything about AI and I would have no way to tell whether he knows what he is talking about.
Yes. I attribute my 18k karma to excessive participation. If I didn't have a clue what I was talking about, it would have taken longer, but I would still have collected thousands of karma just by writing many comments with correct grammar.
Karma (that is, a user's total karma) means very little.
I'd kinda like to see it expressed as total karma divided by total posts; that might help a little bit...
Those questionnaires are not a particularly good introduction to the LW/SI memespace. I worry that he is therefore making a poor first impression on our behalf, reducing the odds that these people will end up contributing to existential risk reduction and/or friendliness research.