WrongBot comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose 15 August 2010 07:16AM

Comment author: thomblake 17 August 2010 07:04:24PM *  12 points [-]

I disagree with your assessment. Let's just look at LW for starters.

Eileen Barker:

  1. It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI Visiting Fellows program, and attending meetups) is hardly cutting oneself off from others. However, there is certainly some tendency to cut ourselves off socially - note, for example, the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.
  2. Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer's posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer's conclusions nonetheless.
  3. Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
  4. Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
  5. Nope. Though some would credit Eliezer with trying to become or create God.
  6. Obviously. Less Wrong is quite focused on rationality (though that should not be odd), and Eliezer is rather... driven in pursuit of his own overarching goal.

Based on that, I think Eileen Barker's list would have us believe LW is a likely cult.

Shirley Harrison:

  1. I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
  2. While 'revealed' is not accurate in every sense, the "Sequences" are quite long, and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
  3. Nope
  4. Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
  5. This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
  6. There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
  7. No sign of this
  8. "Exclusivity - 'we are right and everyone else is wrong'". Very yes.

Based on that, I think Shirley Harrison's list would have us believe LW is a likely cult.

Similar analysis using the other lists is left as an exercise for the reader.

Comment author: WrongBot 17 August 2010 08:25:15PM *  14 points [-]

On Eileen Barker:

Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.

I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.

Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.

Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.

Of Barker's six criteria, between two and four apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.


On Shirley Harrison:

I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

While 'revealed' is not accurate in every sense, the "Sequences" are quite long, and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

No. The Sequences are not infallible and have never been claimed to be, and intelligent disagreement is generally well received.

Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.

What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."

There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.

Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (two sigmas?) than the average for researchers at comparable non-profits.

So that's between two and six of Harrison's eight checklist items for LessWrong, none of them particularly strong.

My filters would drop LessWrong into the "probably not a cult" category, based on those two standards.

Comment author: gwern 18 November 2010 06:29:41PM *  6 points [-]

Eliezer was compensated $88,610 in 2008, according to the Form 990 filed with the IRS, which I downloaded from GuideStar.

Wikipedia tells me that the median 2009 income in Redwood City, where Eliezer lives, is $69,000.

(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein, also in Redwood City, was paid 37k. Total employee expenses are 200k, but the three salaries sum to 185k; I don't know what accounts for the difference. The form doesn't seem to say.)
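
A minimal sketch, in Python, of the two-sigma test WrongBot proposed above; the comparison salaries are hypothetical placeholders, not figures from any Form 990:

    # Sketch of WrongBot's "two sigma" salary test: is a given salary more
    # than two standard deviations above the mean salary for researchers at
    # comparable non-profits? The comparison figures below are made up.
    from statistics import mean, stdev

    def exceeds_two_sigma(salary, comparables):
        """Return True if `salary` exceeds the sample mean of `comparables`
        by more than two sample standard deviations."""
        mu = mean(comparables)
        sigma = stdev(comparables)
        return salary > mu + 2 * sigma

    # Hypothetical salaries (USD) for researchers at comparable non-profits:
    comparables = [72000, 85000, 91000, 78000, 95000]
    print(exceeds_two_sigma(88610, comparables))  # False for this sample

On those placeholder numbers the salary sits well within two standard deviations of the mean, so the bet's condition would not be met; settling it would require real Form 990 figures for comparable organizations.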

Comment author: Sniffnoy 18 August 2010 12:04:10AM 3 points [-]

No. The Sequences are not infallible and have never been claimed to be, and intelligent disagreement is generally well received.

In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.

Comment author: Jack 18 November 2010 08:23:06PM 3 points [-]

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

What exactly are Eliezer's qualifications supposed to be?

Comment author: jimrandomh 18 November 2010 08:38:20PM 2 points [-]

What exactly are Eliezer's qualifications supposed to be?

You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

Comment author: Jack 18 November 2010 09:44:05PM *  7 points [-]

I'm definitely not trying to attack anyone (and you're right that my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing, and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but (a) he never did, and (b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.

No one looks at open problems in other fields this way.

Comment author: Vladimir_Nesov 18 November 2010 10:09:41PM *  5 points [-]

No one looks at open problems in other fields this way.

Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem or have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.

Comment author: Jack 18 November 2010 10:15:17PM 0 points [-]

I may have an incorrect impression, but SIAI, or at least Eliezer's department, seems to have a self-image comparable to the Manhattan Project rather than to the early pioneers of a scientific field.

Comment author: multifoliaterose 18 November 2010 11:26:50PM *  2 points [-]

Eliezer's past remarks seem to have pointed to a self-image comparable to the Manhattan Project. However, according to the new SIAI Overview:

We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling.

Comment author: ata 18 November 2010 10:43:58PM *  1 point [-]

I may have an incorrect impression, but SIAI, or at least Eliezer's department, seems to have a self-image comparable to the Manhattan Project

Eliezer has said: "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me." Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)

That said, "self-image comparable to the Manhattan Project" is an unusually generous ascription of humility to SIAI and Eliezer. :P

Comment author: JGWeissman 18 November 2010 10:33:45PM 1 point [-]

They want to become comparable to the Manhattan Project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.

Comment author: XiXiDu 19 November 2010 12:57:25PM 1 point [-]

...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.

Comment author: Jack 19 November 2010 01:04:59PM 3 points [-]

Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven't said anything in this comment that I disagree with, so I don't understand what we're disputing.

Comment author: multifoliaterose 18 November 2010 11:27:15PM 0 points [-]

Great comment.

Comment author: XiXiDu 18 November 2010 09:03:27PM *  0 points [-]

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but then who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work, or there is another problem. How are we, who are far below his level, supposed to evaluate whether we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further, it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.

Comment author: jimrandomh 18 November 2010 09:36:41PM *  5 points [-]

The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.

How are we, who are far below his level, supposed to evaluate whether we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

Comment author: XiXiDu 19 November 2010 09:56:32AM *  1 point [-]

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

But what does this mean for my support of the SIAI? Imagine I were a politician who had no time to level up first, but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.

Would you tell a politician to go and read the sequences, and, if after reading the publications they don't see why AGI research is as dangerous as the SIAI portrays it, to just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all the experts claim there is no risk?

Writing is influential when many people are influenced by it.

You talked about Yudkowsky's influential publications; I thought you meant some academic papers, not the LW sequences. They have indeed influenced some people, yet I don't think they influenced the right people.

Comment author: multifoliaterose 18 November 2010 11:36:41PM *  -1 points [-]

Downvoted for this:

The motivated cognition here is pretty thick

Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition to a greater extent than your own comments.

Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners, pushing them into an arguments as soldiers mode which is detrimental to rational discourse.

Comment author: shokwave 23 November 2010 08:13:06AM 0 points [-]

Moreover, I believe that even when such statements are true, one should avoid making them when possible

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

pushing them into an arguments as soldiers mode which is detrimental to rational discourse.

On this blog, any person should definitely be resisting this push.

Comment author: multifoliaterose 23 November 2010 08:28:08AM 1 point [-]

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

I did not say that one should avoid telling people when and where they're going wrong. I was objecting to the practice of questioning people's motivations. For the most part I don't think that questioning somebody's motivations is helpful to him or her.

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn't mean that the commentators are always above this sort of thing.

I agree with you insofar as I think that one should work to interpret comments charitably.

On this blog, any person should definitely be resisting this push.

I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.

Comment author: shokwave 23 November 2010 09:16:55AM 1 point [-]

I was objecting to the practice of questioning people's motivations.

Not questioning their motivations; you objected to the practice of pointing out motivated cognition:

I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition ... Moreover, I believe that even when such statements are true, one should avoid making them when possible

Pointing out that someone hasn't thought through the issue because they are motivated not to - this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn't let them know that they have something wrong, and they miss a chance to improve it.

Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise.

To paraphrase steven, if you're interested in winning disputes you should dismiss personal attacks, but if you're interested in the truth you should dig through the personal attacks for any actual arguments. Whether or not something is a personal attack, you ought to construe it as if it were not, in order to maximise your chances of finding truth.

this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.

Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you're right, and this whole third part of our discussion is irrelevant.

Comment author: WrongBot 18 November 2010 10:58:01PM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Comment author: XiXiDu 19 November 2010 10:04:01AM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

Indeed. I was just trying to figure out what someone with money or power, who wants to know the right thing to do but does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all the AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use their power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.

Comment author: XiXiDu 19 November 2010 10:15:20AM *  1 point [-]

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Interesting; when did he come up with the concept of "Seed AI"? Because it is mentioned in Karl Schroeder's Ventus (Tor Books, 2000; ISBN 978-0312871970).

Comment author: Risto_Saarelma 19 November 2010 12:11:31PM *  1 point [-]

I didn't find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it's something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don't remember recursive self-improvement being mentioned in connection with the seed in Ventus.

A precursor concept, in which the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing's 1950 paper on machine intelligence.

Comment author: XiXiDu 19 November 2010 12:36:58PM *  1 point [-]

Here is a quote from Ventus:

Look at it this way. Once long ago two kinds of work converged. We'd figured out how to make machines that could make more machines. And we'd figured out how to get machines to... not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn't even recognize.

[...]

And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do--namely, anything.

Comment author: timtyler 20 November 2010 12:58:29PM *  1 point [-]

...and here's a quote from I.J. Good, from 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

He didn't coin the term "Seed AI" either.

Comment author: XiXiDu 19 November 2010 12:34:30PM 0 points [-]

They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.

It is further explained that the Winds were designed to evolve on their own, so they are not mere puppets of human intentions but possess their own intrinsic architecture.

Elsewhere in the book it is explained that humans did not create their AI gods directly; rather, the gods evolved from seeds designed by humans.

Comment author: Jack 19 November 2010 12:18:50PM 0 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

Comment author: wedrifid 19 November 2010 06:06:36PM *  2 points [-]

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position.

"Ever" is a strong word. If a competent expert in a field, one with a known tendency to err slightly on the side of too much openness to the cutting edge, fails to be convinced by a new finding within his field, that says an awful lot.

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomena" cannot be represented as "I disagree with person x".

Comment author: Jack 19 November 2010 06:16:05PM *  0 points [-]

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomena" cannot be represented as "I disagree with person x".

I'm being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot's point is that someone has to have poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.

Fair point re: "ever".

Comment author: WrongBot 19 November 2010 06:32:37PM 4 points [-]

I generally don't try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.

But I'm annoyed enough to lay out a very brief summary of why belief in psi is ludicrous:

  • It isn't permitted by known physics.
  • There are no suggested mechanisms (so far as I'm aware) for psi that do not contradict proven physical laws.
  • The most credible studies which claim to demonstrate psi have tiny effect sizes, and those haven't been replicated with larger sample sizes.
  • Publication bias.
  • Psi researchers often seem to possess motivated cognition.
  • We've analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don't seem to affect how those individual neurons behave.
  • Etc.

Comment author: wedrifid 19 November 2010 06:25:44PM 2 points [-]

As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt.

It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot had actually written 'therefore' or otherwise signaled absolute deductive certainty, then he would be mistaken. As it is, he presents it as evidence, which it in fact is.

WrongBot's point is that someone has to have poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.

There is a clear implied premise: 'psychic phenomena are well known to be bullshit'. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don't think WrongBot loses too much credibility in this case by dismissing psychic phenomena.

Comment author: komponisto 19 November 2010 01:05:22PM 2 points [-]

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.

Comment author: Jack 19 November 2010 01:11:13PM *  0 points [-]

Any argument of this nature needs to include some explanation of why someone's ability to think about y is linked to their ability to think about z. But even with that (which wasn't included in the comment) you can only conclude that y and z imply each other. You can't just conclude z.

In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.

Comment author: komponisto 19 November 2010 01:30:48PM *  1 point [-]

I don't disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person's belief in them raises serious doubts about that person's understanding of science at the very least, if not their general rationality level.

Comment author: WrongBot 19 November 2010 05:56:48PM 3 points [-]

If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

Comment author: ata 19 November 2010 06:28:08PM 5 points [-]

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

I'd be interested to hear more about that.

Comment author: WrongBot 20 November 2010 02:29:33AM 9 points [-]

From Ten Years to a Positive Singularity:

And computer scientists haven’t understood the self – because it isn’t about computer science. It’s about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.

and

The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: To answer peoples’ questions, in a way that’s designed to give them maximum understanding.

From The Singularity Institute's Scary Idea (And Why I Don't Buy It):

It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences.

From Chance and Consciousness:

At the core of this theory are two very simple ideas:

1) that consciousness is absolute freedom, pure spontaneity and lawlessness; and

2) that pure spontaneity, when considered in terms of its effects on structured systems, manifests itself as randomness. (Emphasis his.)

And pretty much all of On the Algebraic Structure of Consciousness and Evolutionary Quantum Computation.

This is all just from fifteen minutes of looking around his website. I'm amazed anyone takes him seriously.

Comment author: Jack 19 November 2010 06:28:38PM 1 point [-]

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

From what I've seen, the people who comment here who have read Broderick's book have come away, if not convinced psi describes some real physical phenomena, at least convinced that the case isn't at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, you have to actually resolve the object-level disagreement before you can use someone's beliefs on the issue in a general argument about their rationality. You can't just assume it as you do here.

Comment author: wedrifid 19 November 2010 06:41:59PM *  2 points [-]

You can't just assume it as you do here.

Yes, here WrongBot is safe to assume basic physics.

Edit for the sake of technical completeness: And biology.