thomblake comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM

Comment author: WrongBot 17 August 2010 06:37:37PM 2 points [-]

The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn't even close.

Comment author: thomblake 17 August 2010 07:04:24PM *  12 points [-]

I disagree with your assessment. Let's just look at Lw for starters.

Eileen Barker:

  1. It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.
  2. Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer's posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer's conclusions nonetheless.
  3. Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
  4. Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
  5. Nope. Though some would credit Eliezer with trying to become or create God.
  6. Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather... driven in his own overarching goal.

Based on that, I think Eileen Barker's list would have us believe Lw is a likely cult.

Shirley Harrison:

  1. I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
  2. While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
  3. Nope
  4. Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
  5. This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
  6. There is volunteer effort at Lw, and posts on Lw are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
  7. No sign of this
  8. "Exclusivity - 'we are right and everyone else is wrong'". Very yes.

Based on that, I think Shirley Harrison's list would have us believe Lw is a likely cult.

Similar analysis using the other lists is left as an exercise for the reader.

Comment author: cousin_it 17 August 2010 07:55:41PM *  12 points [-]

That was... surprisingly surprising. Thank you.

For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I'd like to do (I've been there, thanks).

Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.

Comment author: John_Baez 19 August 2010 07:58:44AM *  15 points [-]

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

Comment author: cousin_it 19 August 2010 08:38:29AM *  3 points [-]

Wow.

Hello.

I didn't expect that. It feels like summoning Gauss, or something.

Thank you a lot for twf!

Comment author: Vladimir_Nesov 19 August 2010 04:23:32PM 2 points [-]
Comment author: ciphergoth 19 August 2010 08:02:22AM 0 points [-]

The markup syntax here is a bit unusual and annoying - click the "Help" button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!

Comment author: Kevin 19 August 2010 08:08:18AM 4 points [-]

Make a top level post about the kind of thing you want to talk about. It doesn't have to be an essay, it could just be a question ("Ask Less Wrong") or a suggested topic of conversation.

Comment author: David_Gerard 18 November 2010 09:20:45PM 1 point [-]

I love your posts, so having seen this comment I'm going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)

Comment author: cousin_it 18 November 2010 11:24:41PM 1 point [-]

Thanks!

Comment author: DanielVarga 21 August 2010 08:22:55PM 1 point [-]

I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.

Comment author: [deleted] 21 August 2010 06:59:24PM 1 point [-]

"Leaving" LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?

I've been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer-theory enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it's been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.

Not that I claim any ownership over it, but:

I'm going to try to more clearly brand it as "A friendly place to analytically discuss fantastic, strange or bizarre ideas."

Comment author: Sniffnoy 18 August 2010 12:05:26AM 0 points [-]

Of course, MathOverflow isn't really a place for discussion...

Comment author: JoshuaZ 17 August 2010 08:05:19PM 0 points [-]

At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I'd actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the people there.

Comment author: cousin_it 17 August 2010 08:14:36PM *  2 points [-]

About Polymath: thanks! (blushes)

I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of "thank yous" here on LW for clearing up mathy stuff, but it feels like I could be more useful... somewhere.

Comment author: Zvi 31 August 2010 09:01:09PM 5 points [-]

I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I'm not a cult leader. Although that does sound kind of neat. Observe:

Eileen Barker:

  1. When events are close we spend a lot of time socially separate from others so as to develop and protect our research. On occasion 'Magic colonies' form for a few weeks. It's not substantially less isolating than what SIAI does. Check.
  2. I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial check.
  3. I make reasonably important decisions for my teammates (on the level of the cryonics decision, if cryonics isn't worthwhile) and do what I need to do to make sure they follow them far more than they would without me. Check.
  4. We identify other teams as 'them' reasonably often, and certain other groups are certainly viewed as the enemy. Check.
  5. Nope; an even fainter argument than for Eliezer.
  6. Again, yes, obviously.

Shirley Harrison:

  1. I claim a special mission that I am uniquely qualified to fulfill. Not as important a one, but still. Check.
  2. My writings count at least as much as the Sequences. Check.
  3. Not intentionally, but often new recruits have little idea what to expect. Check plus.
  4. Totalitarian rules structure, and those who game too much often alienate friends and family. I've seen it many times, and it's far less of a cheat than saying that you'll be alienated from family and friends when they are all dead and you're not because you got frozen. Check.
  5. I make people believe what I want with the exact same techniques we use here. If anything, I'm willing to use slightly darker arts. Check.
  6. We make the lower-level people do the grunt work, sure. Check.
  7. Based on some of the deals I've made, one looking to demonize could make a weak claim. Check plus.
  8. Exclusivity. In spades. Check.

I'd also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.

Comment author: WrongBot 17 August 2010 08:25:15PM *  14 points [-]

On Eileen Barker:

Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.

I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.

Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.

Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.

Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.


On Shirley Harrison:

I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.

What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."

There is volunteer effort at Lw, and posts on Lw are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.

Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.

So that's 2-6 of Harrison's checklist items for LessWrong, none of them particularly strong.

My filters would drop LessWrong in the "probably not a cult" category, based on those two standards.

Comment author: gwern 18 November 2010 06:29:41PM *  6 points [-]

Eliezer was compensated $88,610 in 2008, according to the Form 990 filed with the IRS, which I downloaded from GuideStar.

Wikipedia tells me that the median 2009 income in Redwood City, where Eliezer lives, is $69,000.

(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein, also in Redwood, was paid 37k. Total employee expenses are 200k, but the three salaries sum to 185k; I don't know what accounts for the difference. The form doesn't seem to say.)

Comment author: Sniffnoy 18 August 2010 12:04:10AM 3 points [-]

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.

Comment author: Jack 18 November 2010 08:23:06PM 3 points [-]

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

What exactly are Eliezer's qualifications supposed to be?

Comment author: jimrandomh 18 November 2010 08:38:20PM 2 points [-]

What exactly are Eliezer's qualifications supposed to be?

You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

Comment author: Jack 18 November 2010 09:44:05PM *  7 points [-]

I'm definitely not trying to attack anyone (and you're right my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.

No one looks at open problems in other fields this way.

Comment author: Vladimir_Nesov 18 November 2010 10:09:41PM *  5 points [-]

No one looks at open problems in other fields this way.

Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem and have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.

Comment author: Jack 18 November 2010 10:15:17PM 0 points [-]

I may have an incorrect impression, but SIAI or at least Eliezer's department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.

Comment author: multifoliaterose 18 November 2010 11:26:50PM *  2 points [-]

Eliezer's past remarks seem to have pointed to a self-image comparable to the Manhattan project. However, according to the new SIAI Overview:

We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling.

Comment author: ata 18 November 2010 10:43:58PM *  1 point [-]

I may have an incorrect impression, but SIAI or at least Eliezer's department seems to have a self-image comparable to the Manhattan project

Eliezer has said: "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me." Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)

That said, "self-image comparable to the Manhattan project" is an unusually generous ascription of humility to SIAI and Eliezer. :P

Comment author: JGWeissman 18 November 2010 10:33:45PM 1 point [-]

They want to become comparable to the Manhattan project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.

Comment author: XiXiDu 19 November 2010 12:57:25PM 1 point [-]

...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.

Comment author: Jack 19 November 2010 01:04:59PM 3 points [-]

Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven't said anything in this comment that I disagree with, so I don't understand what we're disputing.

Comment author: multifoliaterose 18 November 2010 11:27:15PM 0 points [-]

Great comment.

Comment author: XiXiDu 18 November 2010 09:03:27PM *  0 points [-]

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further, it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.

Comment author: jimrandomh 18 November 2010 09:36:41PM *  5 points [-]

The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.

How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

Comment author: XiXiDu 19 November 2010 09:56:32AM *  1 point [-]

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further safety measures.

Would you tell a politician to go and read the Sequences, and if, after reading the publications, they don't see why AGI research is as dangerous as portrayed by the SIAI, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all the experts claim there is no risk?

Writing is influential when many people are influenced by it.

You talked about Yudkowsky's influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don't think they influenced the right people.

Comment author: multifoliaterose 18 November 2010 11:36:41PM *  -1 points [-]

Downvoted for this:

The motivated cognition here is pretty thick

Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition to a greater extent than your own comments.

Moreover, I believe that even when such statements are true, one should avoid making them when possible as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners pushing them into an Arguments as soldiers mode which is detrimental to rational discourse.

Comment author: shokwave 23 November 2010 08:13:06AM 0 points [-]

Moreover, I believe that even when such statements are true, one should avoid making them when possible

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

pushing them into an arguments as soldiers mode which is detrimental to rational discourse.

On this blog, any person should definitely be resisting this push.

Comment author: multifoliaterose 23 November 2010 08:28:08AM 1 point [-]

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

I did not say that one should avoid telling people when and where they're going wrong. I was objecting to the practice of questioning people's motivations. For the most part I don't think that questioning somebody's motivations is helpful to him or her.

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn't mean that the commentators are always above this sort of thing.

I agree with you insofar as I think that one should work to interpret comments charitably.

On this blog, any person should definitely be resisting this push.

I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.

Comment author: WrongBot 18 November 2010 10:58:01PM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Comment author: XiXiDu 19 November 2010 10:04:01AM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

Indeed. I was just trying to figure out what someone with money or power, who wants to know the right thing to do but who does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all the AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them that the AGI experts are intellectually impotent and that the SIAI is humanity's only hope of surviving the AI revolution? What should someone who lacks the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use his power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.

Comment author: XiXiDu 19 November 2010 10:15:20AM *  1 point [-]

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Interesting. When did he come up with the concept of "Seed AI"? It is mentioned in Karl Schroeder's Ventus (Tor Books, 2000), ISBN 978-0312871970.

Comment author: Risto_Saarelma 19 November 2010 12:11:31PM *  1 point [-]

Didn't find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it's something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method of getting to AGI from not having one, not just an AI that grows from a seed-like thing. I don't remember recursive self-improvement being mentioned with the seed in Ventus.

A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing's 1950 paper on machine intelligence.

Comment author: XiXiDu 19 November 2010 12:36:58PM *  1 point [-]

Here is a quote from Ventus:

Look at it this way. Once long ago two kinds of work converged. We'd figured out how to make machines that could make more machines. And we'd figured out how to get machines to... not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn't even recognize.

[...]

And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do--namely, anything.

Comment author: XiXiDu 19 November 2010 12:34:30PM 0 points [-]

They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.

It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.

In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.

Comment author: Jack 19 November 2010 12:18:50PM 0 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

Comment author: wedrifid 19 November 2010 06:06:36PM *  2 points [-]

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position.

Ever is a strong word. If a competent expert in a field who has a known tendency to err slightly on the side of too much openness to the cutting edge fails to be convinced by a new finding within his field that says an awful lot.

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomenon" cannot be represented as "I disagree with person x".

Comment author: Jack 19 November 2010 06:16:05PM *  0 points [-]

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomenon" cannot be represented as "I disagree with person x".

I'm being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot's point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.

Fair point re: "ever".

Comment author: komponisto 19 November 2010 01:05:22PM 2 points [-]

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.

Comment author: Jack 19 November 2010 01:11:13PM *  0 points [-]

Any argument of this nature needs to include some explanation of why someone's ability to think about y is linked to their ability to think about z. But even with that (which wasn't included in the comment) you can only conclude that y and z imply each other. You can't just conclude z.

In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.

Comment author: WrongBot 19 November 2010 05:56:48PM 3 points [-]

If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

Comment author: ata 19 November 2010 06:28:08PM 5 points [-]

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

I'd be interested to hear more about that.

Comment author: Jack 19 November 2010 06:28:38PM 1 point [-]

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

From what I've seen, the people who comment here who have read Broderick's book have come away, if not convinced that psi describes some real physical phenomenon, at least convinced that the case isn't open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, you have to actually resolve the object-level disagreement before you can use someone's beliefs on the issue in a general argument about their rationality. You can't just assume it as you do here.

Comment author: Perplexed 18 November 2010 07:10:36PM *  2 points [-]

the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I have to disagree that this "smugness" even remotely reaches the level that is characteristic of a cult.

As someone who has frequently expressed disagreement with the "doctrine" here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism - any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.

Comment author: David_Gerard 18 November 2010 09:16:46PM *  2 points [-]

Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I'm objecting to. So the moderation system - "vote up things you want more of" - works really well, and I like the comments here.

This has also helped me control my unfortunate case of asshole personality disorder elsewhere when I see someone being wrong on the Internet. It's amazing what you can get away with if you show your references.

Comment author: JGWeissman 17 August 2010 07:34:06PM *  2 points [-]

This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:

1.

A movement that separates itself from society, either geographically or socially;

It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.

Comment author: ciphergoth 17 August 2010 07:34:39PM 2 points [-]

Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I've not seen this happening - examples?

Comment author: JGWeissman 17 August 2010 07:43:08PM 7 points [-]

I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.

With this qualification, it no longer seems like evidence of being a cult.