XiXiDu comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM

Comment author: wedrifid 30 October 2010 04:46:05PM 5 points

The current state of evidence IS NOT sufficient to scare people to the point of having nightmares

You appear to be suggesting that Eliezer should censor the presentation of his thoughts on the subject so as to prevent people from having nightmares. Spot the irony! ;)

and ask them for most of their money.

Eliezer asks people for money. That hardly makes him unique. Neither he nor anyone else is obliged to get your permission before they ask for donations in support of their cause. It seems to me that you expect more from the SIAI than you do from other well-meaning organisations simply because there is actually a chance that the cause may make a significant long-term difference. As opposed to virtually all the rest - those we know are pointless!

What if someone came along making coherent arguments about some existential risk, say that some sort of particle collider might destroy the universe? I would ask what the experts who are not associated with the person making the claims think. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"? If you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than internally consistent logic: some evidence-based prior.

I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.

So take my word for it, I know more than you do, no really I do, and SHUT UP. -- Eliezer Yudkowsky (Reference)

You have to list the primary propositions on which you base your further argumentation, from which you draw conclusions, and which you use to come up with probability estimates of the risks associated with those premises. You have to list these main principles so that anyone who comes across claims of existential risk and a plea for donations can get an overview. Then you have to provide references, if you believe they give credence to the ideas, so that people see that what you say isn't made up but is based on previous work and evidence by people who are not associated with your organisation.

That quote is out of context. While I do happen to hold Eliezer's behavior in that context in contempt, the way the quote is presented here is misleading. It is not relevant to your replies and only relevant to the topic here by virtue of Eliezer's character.

Is smarter-than-human intelligence possible, in a sense comparable to the difference between chimps and humans?

This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?

Speak for yourself. I don't have difficulty comprehending the premises, either the ones you question here or the others required to make an adequate evaluation for the purpose of decision making.

Neither I nor Eliezer nor the SIAI needs to force understanding of the Scary Idea upon you for it to be rational for us to place credence in it. The same applies to other readers here. That is not to say that more work producing documentation of the kind you describe would not be desirable.

Comment author: XiXiDu 31 October 2010 10:59:34AM 5 points

This comment will be downvoted, but I hope you people will actually explain yourselves and not just click 'Vote down'; any bot can do that.

Now that I've slept I have read your comment again, and I don't see any justification for why it got upvoted even once. I never claimed that EY can't ask for money; you are creating a straw man there. You also do not know what I expect from other organisations. Further, it is not fallacious to suspect that Yudkowsky bears some responsibility if people get nightmares from ideas that he would be able to resolve. If he really believes those things, it is of course his right to proclaim them. But the gist of my comment was meant to inquire about the foundations of those beliefs, and to state that they do not appear to me to be based on evidence, which makes it legally permissible but ethically irresponsible to tell people to worry to such an extent, or even to fail to tell them not to worry.

I rather suspect that if all those demands were met you would go ahead and find new rhetorical demands to make.

I just don't know how to parse this. I meant what I asked for, and I do not ask for certainty here. I'm not doubting evolution or climate change. The problem is that even a randomly picked research paper likely bears more analysis, evidence, and references than all of LW's and the SIAI's documents together regarding the risks posed by recursive self-improvement in artificial general intelligence.

That quote is out of context.

The quotes were relevant because they showed that Yudkowsky clearly believes in his intellectual and epistemic superiority, yet any corroborative evidence seems to be missing. Yes, there is a huge amount of writing on rationality and some miscellaneous musings on artificial intelligence. But given the weight he places on the idea of risks from AGI, that material is just the cherry on top of marginal issues that do not support the conclusions.

Speak for yourself. I don't have difficulty comprehending the premises, either the ones you question here or the others required to make an adequate evaluation for the purpose of decision making.

I don't have any difficulty comprehending them either. I'm questioning the propositions, the conclusions drawn, and the further speculations based on those premises.

Neither I nor Eliezer nor the SIAI needs to force understanding of the Scary Idea upon you for it to be rational for us to place credence in it.

This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself if you want people like me to take you seriously.

Comment author: timtyler 31 October 2010 04:31:54PM 2 points

The quotes were relevant because they showed that Yudkowsky clearly believes in his intellectual and epistemic superiority, yet any corroborative evidence seems to be missing. Yes, there is a huge amount of writing on rationality and some miscellaneous musings on artificial intelligence. [...]

Yudkowsky is definitely a clever fellow. He may not have fancy qualifications - and he is far from infallible - but he is pretty smart.

In the particular post in question, I am pretty sure he was being silly - which is a rather unfortunate time to be claiming superiority.

However, I don't really know. The stunt created intrigue, mystery, and a sense of the forbidden, and added to the controversy. Overall, Yudkowsky is pretty good at marketing - and maybe this was a taste of it.

I wonder whether his Harry Potter fan-fic is marketing - or, if not, how he justifies it.

Comment author: wedrifid 31 October 2010 02:53:54PM 0 points

This is ridiculous. I never said you are forced to explain yourself. You are forced to explain yourself if you want people like me to take you seriously.

If you had restricted your claim in that way (i.e., not made the claim that I quoted in the context above), then I would have agreed with you.

Comment author: XiXiDu 31 October 2010 03:12:04PM 2 points

I cannot account for every possible interpretation of what I write in a comment. It is reasonable not to infer oughts from questions. I said:

This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?

That is, if you can't explain why you hold certain extreme beliefs, then how is it rational for me to believe that the credence you place on them is justified? The best response you came up with was telling me that you are able to understand the premises and that you don't have to force this understanding onto me in order to believe in them yourself. That is a very poor argument, and that is what I called ridiculous. Even more so as people voted it up, which is just sad.

I thought this was sufficiently clear from what I wrote before.

Comment author: Perplexed 31 October 2010 03:56:01PM 5 points

That is a very poor argument, and that is what I called ridiculous. Even more so as people voted it up, which is just sad.

And it is at this point in the process that an accomplished rationalist says to himself, "I am confused", and begins to learn.

My impression is that you and Wedrifid are talking past each other. You think that you both are arguing about whether uFAI is a serious existential risk. Wedrifid isn't even concerned with that. He is concerned with "process questions" - with the analysis of the dialogue that you two are conducting, rather than with the issue of uFAI risk. And the reason he is being upvoted is that this forum, believe it or not, is a process-question forum. It is about rationality, not about AI. Many people here really aren't that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks. They just have a visceral dislike of rhetorical questions.

If you want to see the standard arguments in favor of the Scary Idea, follow Louie's advice and read the papers at the SIAI web site. But if you find those arguments unsatisfactory (and I suspect you will), exercise some care if you come looking for a debate on the question here on Less Wrong, because not everyone who engages with you here will be engaging you on the issue that you want to talk about.

Comment author: wedrifid 31 October 2010 08:10:07PM 5 points

Many people here really aren't that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks.

I am somewhat more interested in understanding why Goertzel would say what he says about AI. Just saying "Goertzel's brain doesn't appear to work right" isn't interesting. But the Hansonian signalling motivations behind academic posturing are more so.

Comment author: wedrifid 31 October 2010 08:00:54PM 1 point

Well said.

(Although to be more precise I don't have a visceral dislike of rhetorical questions per se. It is the use of rhetoric to subvert reason that produces the visceral reaction, not the rhetoric(al question) itself.)