XiXiDu comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM

Comment author: XiXiDu 30 October 2010 05:42:14PM *  3 points [-]

I was too lazy to write this up again; it's copy-and-paste work, so don't mind some inconsistencies. Regarding the quotes, I think that EY seriously believes what he says in them, otherwise I wouldn't have posted them. I'm not even suggesting that it isn't true; I actually allow for the possibility that he is that smart. But I want to know what I should do, and right now I don't see any good arguments.

I'm a supporter and donor, and what I'm trying to do here is come up with the best possible arguments to undermine the credence of the SIAI. Almost nobody else is doing that, so I'm trying my best here. This isn't damaging, this is helpful. Because once you become really popular, people like P.Z. Myers and other much more eloquent and popular people will pull you to pieces if you can't even respond to my poor attempt at being a devil's advocate.

I don't have difficulty comprehending the premises, either the ones you have questioned here or the others required to make an adequate evaluation for the purpose of decision making.

I don't even know where to start here, so I won't. But I haven't come across anything yet that I had trouble understanding.

I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.

See that woman with red hair? Well, the cleric told me that he believes she's a witch. But he'll update on the evidence if the fire doesn't consume her. I said red hair is insufficient data to support that hypothesis, let alone to take such extreme measures to test it. He told me that if he came up with more evidence like sorcery I'd just go ahead and find new rhetorical demands.

You appear to be suggesting that Eliezer should censor presentation of his thoughts on the subject so as to prevent people from having nightmares. Spot the irony! ;)

I'm not against free speech and religious freedom, but that also applies to my own thoughts on the subject. I believe he could do much more than censor certain ideas, namely show that they are bogus.

Comment author: wedrifid 30 October 2010 06:39:44PM 4 points [-]

He told me that if he came up with more evidence like sorcery I'd just go ahead and find new rhetorical demands.

[See context for implied meaning if the excerpt isn't clear]. I claimed approximately the same thing that you say yourself below.

I'm a supporter and donor, and what I'm trying to do here is come up with the best possible arguments to undermine the credence of the SIAI. Almost nobody else is doing that, so I'm trying my best here.

I've got nothing against the Devil, it's the Advocacy that is mostly bullshit. Saying you are 'Devil's Advocate' isn't an excuse to use bad arguments. That would be an insult to the Devil!

I don't even know where to start here, so I won't. But I haven't come across anything yet that I had trouble understanding.

You conveyed most of your argument via rhetorical questions. To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence) some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer). I believe I quoted an example in the context.

Making an assertion into a question does not give a license to say whatever you want with no risk of direct contradiction. (Even though that is how the tactic is used in practice.)

More concise answer: Then don't ask stupid questions!

Comment author: XiXiDu 30 October 2010 07:02:08PM *  2 points [-]

To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence) some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer).

I'm probably too tired to parse this right now. I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. In all those writings on rationality, there is nothing I disagree with. Many people know about all this even outside of the LW community. But what is it that they don't know that EY and the SIAI know? What I was trying to say is that if I have come across it, then it was not convincing enough to take it as seriously as some people here obviously do.

It looks like I'm not alone. Goertzel, Hanson, Egan and lots of other people don't see it either. So what are we missing? What is it that we haven't read or understood?

Comment author: hairyfigment 01 November 2010 12:11:33AM *  2 points [-]

Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I'll just say his response to claim #2 seems to conflate humans and AIs. But unless I've missed something big, which certainly seems possible, he didn't make his decision based on those arguments. They don't seem good enough on their face to convince anyone. For example, I don't think he could really believe that he and other researchers would unconsciously restrict the AI's movement in the space of possible minds to the safe area(s), but if we reject that possibility some version of #4 seems to follow logically from 1 and 2.

Egan: don't know. What I've seen looks unimpressive, though certainly he has reason to doubt 'transhumanist' predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we'll do so eventually or we'll die out first. Also, that we could produce artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?

Hanson: maybe I shouldn't point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing with him that EY should act on those views. Said friend also posted a short essay about "heritage" that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.

Comment deleted 01 November 2010 09:21:28AM [-]
Comment author: Perplexed 01 November 2010 04:35:15PM 0 points [-]

He wasn't quoting Goertzel, Egan, and Hanson - though his formatting made it look like he was. He was commenting on your claim that these three "don't see it".

Comment author: hairyfigment 01 November 2010 04:12:24PM 0 points [-]

Sorry, I don't know what quotes you mean. You can find a link to the "heritage" post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?

Comment author: XiXiDu 30 October 2010 06:52:29PM *  0 points [-]

Saying you are 'Devil's Advocate' isn't an excuse to use bad arguments.

I don't think I used a bad argument; otherwise I wouldn't have made it.

You conveyed most of your argument via rhetorical questions.

Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven't taken a rhetoric course or anything. I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI. That is, data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.

Comment author: wedrifid 30 October 2010 07:05:22PM 5 points [-]

Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven't taken a rhetoric course or anything.

"Rhetorical question" is just the name. Asking questions to try to convince people rather than telling them outright is something most people pick up by the time they are 8.

I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI

I think this is true.

. That is

This isn't. That is, the 'that is' doesn't fit. What educated people will think really isn't determined by things like the below. (People are stupid, the world is mad, etc.)

data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.

I agree with this. Well, not the 'empirical' part (that's hard to do without destroying the universe.)

Comment author: Vladimir_Nesov 30 October 2010 07:19:31PM 4 points [-]

You conveyed most of your argument via rhetorical questions.

Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven't taken a rhetoric course or anything.

Indeed, what an irony...

Comment author: XiXiDu 31 October 2010 11:24:05AM 12 points [-]

I'm fighting against giants here, as someone who only finished elementary school. I believe it should be easy to refute my arguments or show me where I am wrong, to point me to some documents I should read up on. But I just don't see that happening. I talk to other smart people online as well; that way I was actually able to overcome religion. But there have seldom been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity. Yes, maybe I'm unable to comprehend it right now, I grant you that. Whatever the reason, I'm not convinced and will say so as long as it takes. Of course you don't need to convince me, but I don't need to stop questioning either.

Here is a very good comment by Ben Goertzel that pinpoints it:

This is what discussions with SIAI people on the Scary Idea almost always come down to!

The prototypical dialogue goes like this.

SIAI Guy: If you make a human-level AGI using OpenCog, without a provably Friendly design, it will almost surely kill us all.

Ben: Why?

SIAI Guy: The argument is really complex, but if you read Less Wrong you should understand it.

Ben: I read the Less Wrong blog posts. Isn't there somewhere that the argument is presented formally and systematically?

SIAI Guy: No. It's really complex, and nobody in-the-know had time to really spell it out like that.

Comment author: shokwave 01 November 2010 07:58:34AM 3 points [-]

But there have seldom been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity.

I don't know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven't made a mistake, when you're free from motivated cognition - when you can look where the evidence points instead of finding evidence that points where you're looking! - then you can reason out the risks involved in recursively self-improving self-modifying goal-oriented optimizing processes.

Comment author: Eneasz 01 November 2010 10:59:25PM 3 points [-]

My argument is fairly simple -

If humans found it sufficiently useful to wipe chimpanzees off the face of the earth, we could and would do so.

The level of AI I'm discussing is at least as much smarter than us as we are smarter than chimpanzees.

Comment author: Perplexed 30 October 2010 06:09:15PM 6 points [-]

I believe he could do much more than censor certain ideas, namely show that they are bogus.

I'm not a big fan of Eliezer, but that complaint strikes me as completely unfair. There is far less censorship here than at a typical moderated blog. And EY does expend some effort showing that various ideas are bogus.

I'm not an insider, or even an old-timer, but I have reason to believe that the one single forbidden subject here is censored not because it is believed to be valid or bogus, nor because it casts a bad light on EY and the SIAI, but rather because discussing it does no good and may do some harm - something a bit like a ban on certain kinds of racist or offensive speech, but different.

And in any case, the "forbidden idea" can always be discussed elsewhere, assuming you can even find anyone elsewhere who can become interested in it. The reach of EY's "censorship" is very limited.

Comment deleted 30 October 2010 06:35:20PM *  [-]
Comment author: Perplexed 30 October 2010 07:01:28PM 1 point [-]

Have you read it?

I've looked at it.

I believe it is utter nonsense.

That is my impression too. Which is why I don't understand why you are complaining about censorship of ideas and wondering why EY doesn't spend more time refuting ideas.

As I understand it, we are talking about actions that might be undertaken by an AI that you and I would call insane. The "censorship" is intended to mitigate the harm that might be done by such an AI. Since I think it possible that a future AI (particularly one built by certain people) might actually be insane, I have no problem with preemptive mitigation activities, even if the risk seems minuscule.

In other words, why make such a big deal out of it?

Comment author: timtyler 30 October 2010 09:28:24PM 8 points [-]

Having people delete your comments often rubs people up the wrong way, I find.

Comment author: XiXiDu 30 October 2010 07:05:24PM 0 points [-]

Hmm, I haven't. It was meant to explain where that sentence came from in my copy-and-paste comment above. The gist of the comment concerned the foundational evidence supporting the premise of risks from AI going FOOM.

Comment deleted 30 October 2010 09:37:33PM *  [-]
Comment author: XiXiDu 31 October 2010 10:30:46AM 4 points [-]

Does astronomical value outweigh astronomically low probability? You can come up with all kinds of scenarios that bear astronomical value - an astronomical number of scenarios, if you allow for astronomically low probability. Isn't this betting on infinity?
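The "betting on infinity" worry can be made concrete with a toy expected-value sketch. All probabilities and payoffs below are invented purely for illustration; nothing in this thread specifies any such numbers:

```python
# Toy sketch of "weak belief multiplied by astronomical value".
# Every number here is a made-up assumption, chosen only to show
# how the arithmetic behaves, not a claim about actual AI risk.

p_exotic = 1e-20        # astronomically low probability of an exotic scenario
v_exotic = 1e30         # astronomically large value said to be at stake

p_mundane = 0.5         # an ordinary bet, for comparison
v_mundane = 100.0

ev_exotic = p_exotic * v_exotic     # expected value of the exotic scenario
ev_mundane = p_mundane * v_mundane  # expected value of the ordinary bet

# The exotic scenario dominates the comparison, and one can invent
# arbitrarily many such scenarios, which is exactly the objection above.
assert ev_exotic > ev_mundane
print(ev_exotic, ev_mundane)
```

The point of the sketch is only that, once arbitrarily large values are admitted, the product can swamp any ordinary consideration no matter how small the probability becomes.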

Comment deleted 31 October 2010 12:42:34PM *  [-]
Comment deleted 31 October 2010 01:21:12PM [-]
Comment deleted 31 October 2010 01:25:47PM [-]
Comment deleted 31 October 2010 01:37:29PM *  [-]
Comment author: Vladimir_Nesov 31 October 2010 01:53:35PM *  0 points [-]

As I said, explanations exist. Don't confuse them with actual good understanding, which as far as I know nobody has managed to attain yet.

Comment deleted 31 October 2010 05:42:54PM *  [-]
Comment deleted 31 October 2010 06:02:34PM [-]
Comment deleted 31 October 2010 06:29:42PM [-]
Comment deleted 31 October 2010 06:45:15PM [-]
Comment deleted 31 October 2010 01:24:53PM [-]
Comment deleted 31 October 2010 01:29:56PM *  [-]
Comment deleted 31 October 2010 03:20:48PM [-]
Comment deleted 31 October 2010 06:20:36PM [-]
Comment author: timtyler 31 October 2010 12:18:46PM *  4 points [-]

Having such beliefs with absolute certainty is incorrect; we don't have sufficient understanding for that. But weak beliefs multiplied by astronomical value lead to the same drastic actions, whose cost-benefit analysis takes no notice of small inconveniences such as being perceived to be crazy.

The Unabomber performed some "drastic actions". I expect he didn't mind being "perceived to be crazy" by others - although he didn't want to plead insanity.

Comment deleted 01 November 2010 03:04:30AM [-]
Comment author: Perplexed 01 November 2010 03:10:41AM 0 points [-]

The motivation for the censorship is not to keep the idea from the AGI. It is to keep the idea from you. For your own good.

Seriously. And don't ask me to explain.

Comment author: Eneasz 01 November 2010 10:51:30PM *  4 points [-]

Here's the problem: I have read it. And I may even agree that this is a serious issue. I don't trust myself to be intelligent enough to decide one way or the other, so I'll defer to Yudkowsky in this case.

But I have already read it. And it is extremely unlikely that I ever would have read it if it wasn't for the fact that it was banned, there was a huge kerfuffle, and we lost a good community member. The censorship itself probably caused this idea to propagate more than it ever could have if simply left alone. The Streisand Effect again.

The only thing that mentioning it can do is to spread it further. People who don't care will continue to mention it, but people who do shouldn't say anything about it at all. Not even to justify it, not even to warn away from it. That only builds the allure of the mysterious. That's what got me searching for it in the first place.

You don't hide the Necronomicon by constantly telling everyone to stay away from it, and assuring them you can't explain why for their own good. You hide it by never mentioning it at all.

Comment author: Perplexed 01 November 2010 11:28:58PM 1 point [-]

Good idea. Lots of luck enforcing that.

Comment author: Eneasz 02 November 2010 03:02:33AM 0 points [-]

Enforcing? 'Twas just a suggestion. But if you really think it's a good idea, please down-vote my comment so it'll fall below the cut-off and casual browsers won't see it. :) That doesn't give it the aura of censored Forbidden Fruit, but it will cause a Trivial Inconvenience.