WrongBot comments on Existential Risk and Public Relations - Less Wrong
The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn't even close.
I disagree with your assessment. Let's just look at LW, for starters.
Eileen Barker:
Based on that, I think Eileen Barker's list would have us believe LW is a likely cult.
Shirley Harrison:
Based on that, I think Shirley Harrison's list would have us believe LW is a likely cult.
Similar analysis using the other lists is left as an exercise for the reader.
That was... surprisingly surprising. Thank you.
For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously about whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.
What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. MathOverflow looks wonderful, and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place, one so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I'd like to do (I've been there, thanks).
Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.
My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.
(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)
Wow.
Hello.
I didn't expect that. It feels like summoning Gauss, or something.
Thank you a lot for TWF (This Week's Finds)!
Link to John Baez's blog
The markup syntax here is a bit unusual and annoying - click the "Help" button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!
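For example, here is a minimal sketch assuming the comment box accepts standard Markdown-style links (the URL below is just a placeholder):

[John Baez's blog](http://example.com/azimuth)

That renders as a hyperlink whose visible text is "John Baez's blog": text first, URL second, exactly as the help describes.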
Make a top level post about the kind of thing you want to talk about. It doesn't have to be an essay, it could just be a question ("Ask Less Wrong") or a suggested topic of conversation.
I love your posts, so having seen this comment I'm going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)
Thanks!
Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.
"Leaving" LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?
I've been trying to decide how best to pump new life into The Octagon, a section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer-theory enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it's been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.
Not that I claim any ownership over it, but:
I'm going to try to more clearly brand it as "A friendly place to analytically discuss fantastic, strange or bizarre ideas."
Of course, MathOverflow isn't really a place for discussion...
At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I'd actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the participants.
About Polymath: thanks! (blushes)
I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of "thank yous" here on LW for clearing up mathy stuff, but it feels like I could be more useful... somewhere.
I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I'm not a cult leader. Although that does sound kind of neat. Observe:
Eileen Barker:

1. When events are close we spend a lot of time socially separate from others, so as to develop and protect our research. On occasion 'Magic colonies' form for a few weeks. It's not substantially less isolating than what SIAI does. Check.
2. I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial check.
3. I make reasonably important decisions for my teammates (on the level of the cryonics decision, if cryonics isn't worthwhile) and do what I need to do to make sure they follow them, far more than they would without me. Check.
4. We identify other teams as 'them' reasonably often, and certain other groups are certainly viewed as the enemy. Check.
5. Nope; an even fainter argument than in Eliezer's case.
6. Again, yes, obviously.
Shirley Harrison:

1. I claim a special mission that I am uniquely qualified to fulfill. Not as important a mission, but still. Check.
2. My writings count at least as much as the sequences. Check.
3. Not intentionally, but often new recruits have little idea what to expect. Check plus.
4. A totalitarian rule structure, and those who game too much often alienate friends and family. I've seen it many times, and it's far less of a cheat than saying that you'll be alienated from them when they are all dead and you're not, because you got frozen. Check.
5. I make people believe what I want with the exact same techniques we use here. If anything, I'm willing to use slightly darker arts. Check.
6. We make the lower-level people do the grunt work, sure. Check.
7. Based on some of the deals I've made, one looking to demonize could make a weak claim. Check plus.
8. Exclusivity. In spades. Check.
I'd also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.
On Eileen Barker:
I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.
Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.
Of Barker's six criteria, between two and four apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.
On Shirley Harrison:
I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.
What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."
Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.
So that's between two and six of Harrison's eight checklist items for LessWrong, none of them particularly strong.
My filters would drop LessWrong into the "probably not a cult" category, based on those two standards.
Eliezer was compensated $88,610 in 2008, according to the Form 990 filed with the IRS, which I downloaded from GuideStar.
Wikipedia tells me that the median 2009 income in Redwood City, where Eliezer lives, is $69,000.
(If you are curious: Tyler Emerson in Sunnyvale (median income $88.2k) makes $60k; Susan Fonseca-Klein, also in Redwood City, was paid $37k. Total employee expenses are $200k, but the three salaries come to about $185k; I don't know what accounts for the difference. The form doesn't seem to say.)
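Spelling out the arithmetic from the figures above (taking the filed numbers at face value):

$88,610 + $60,000 + $37,000 = $185,610
$200,000 - $185,610 = $14,390

So roughly $14k of the reported employee expenses is unaccounted for by the three listed salaries, which matches the unexplained difference noted above.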
In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.
What exactly are Eliezer's qualifications supposed to be?
You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
I'm definitely not trying to attack anyone (and you're right, my comment could be read that way). But I'm also not just curious; I figured this was the answer. Lots of time spent thinking, writing, and producing influential publications on FAI is about all the qualification one can reasonably expect (producing a provable mathematical formalization of Friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

But the effort to address the Friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but (a) he never did, and (b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem and have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another ten years.
I may have an incorrect impression, but SIAI, or at least Eliezer's department, seems to have a self-image comparable to the Manhattan Project rather than to the early pioneers of a scientific field.
Eliezer's past remarks seem to have pointed to a self-image comparable to the Manhattan Project. However, according to the new SIAI Overview:
Eliezer has said: "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me." Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)
That said, "self-image comparable to the Manhattan Project" is an unusually generous ascription of humility to SIAI and Eliezer. :P
They want to become comparable to the Manhattan Project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.
I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.
Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven't said anything in this comment that I disagree with, so I don't understand what we're disputing.
Great comment.
How influential are his publications if they could not convince Ben Goertzel (an SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but then who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work, or there is another problem. How are we, who are far below his level, supposed to evaluate whether we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.
Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first, but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and if, after reading the publications, they don't see why AGI research is as dangerous as the SIAI portrays it, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all experts claim there is no risk?
You talked about Yudkowsky's influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don't think they influenced the right people.
Downvoted for this:
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they're easily construed as personal attacks, which tend to spawn an emotional reaction in one's conversation partners, pushing them into an "arguments as soldiers" mode that is detrimental to rational discourse.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
On this blog, any person should definitely be resisting this push.
I did not say that one should avoid telling people when and where they're going wrong. I was objecting to the practice of questioning people's motivations. For the most part I don't think that questioning somebody's motivations is helpful to him or her.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn't mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.
For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".
Indeed. I was just trying to figure out what someone with money or power, who wants to know the right thing to do but does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use his power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
Interesting. When did he come up with the concept of "Seed AI"? I ask because it is mentioned in Karl Schroeder's Ventus (Tor Books, 2000), ISBN 978-0312871970.
I didn't find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it's something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method of getting to AGI from not having one, not just an AI that grows from a seed-like thing. I don't remember recursive self-improvement being mentioned in connection with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing's 1950 paper on machine intelligence.
Here is a quote from Ventus:
[...]
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.
I don't think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
Ever is a strong word. If a competent expert in a field, one with a known tendency to err slightly on the side of too much openness to the cutting edge, fails to be convinced by a new finding within his field, that says an awful lot.
That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomena" cannot be represented as "I disagree with person x".
I'm being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot's point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to "I disagree with Goertzel about psi".
Fair point re: "ever".
The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.
Any argument of this nature needs to include some explanation of why someone's ability to think about y is linked to their ability to think about z. But even with that (which wasn't included in the comment), you can only conclude that y and z imply each other. You can't just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)
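To make "less likely" concrete, here is a toy Bayesian update; all the numbers are purely illustrative assumptions, not estimates from this thread. Let "sound" mean "reliably evaluates evidence" and "psi" mean "endorses psychic phenomena", and suppose P(sound) = 0.5, P(psi | sound) = 0.05, and P(psi | not sound) = 0.3. Then:

P(sound | psi) = P(psi | sound) P(sound) / [P(psi | sound) P(sound) + P(psi | not sound) P(not sound)]
             = (0.05)(0.5) / [(0.05)(0.5) + (0.3)(0.5)]
             = 0.025 / 0.175
             ≈ 0.14

Under those assumptions, observing an endorsement of psi drops a 50/50 prior on someone's general soundness to about 14%. That is the shape of the inference being made here, whatever one thinks the actual conditional probabilities are.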
I'd be interested to hear more about that.
From what I've seen, the people who comment here who have read Broderick's book have come away, if not convinced that psi describes some real physical phenomena, then convinced that the case isn't at all open-and-shut the way young-earth creationism is. When an issue is such that smart, sane people can disagree, you have to actually resolve the object-level disagreement before you can use someone's beliefs on the issue in a general argument about their rationality. You can't just assume it as you do here.
I have to disagree that this "smugness" even remotely reaches the level that is characteristic of a cult.
As someone who has frequently expressed disagreement with the "doctrine" here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism: any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic attempts a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.
Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I'm objecting to. So the moderation system - "vote up things you want more of" - works really well, and I like the comments here.
This has also helped me control my unfortunate case of asshole personality disorder elsewhere, when I see someone being wrong on the Internet. It's amazing what you can get away with if you show your references.
This would be easier to parse if you quoted the individual criteria you are evaluating right before each evaluation, e.g.:
I've not seen this happening - examples?
I think it would be more accurate to say that anyone who, after reading the sequences, still disagrees but is unable to explain where they believe the sequences have gone wrong is not worth arguing with.
With this qualification, it no longer seems like evidence of being a cult.
That's the pejorative usage. There is also:
"Cult also commonly refers to highly devoted groups, as in:
Cult, a cohesive group of people devoted to beliefs or practices that the surrounding culture or society considers to be outside the mainstream
http://en.wikipedia.org/wiki/Cults_of_personality
http://en.wikipedia.org/wiki/Cult_following
http://en.wikipedia.org/wiki/Cult_%28religious_practice%29