Perplexed comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth, 30 October 2010 09:31AM




Comment author: Perplexed 30 October 2010 07:01:28PM 1 point

Have you read it?

I've looked at it.

I believe it is utter nonsense.

That is my impression too, which is why I don't understand why you are complaining about censorship of ideas and wondering why EY doesn't spend more time refuting them.

As I understand it, we are talking about actions that might be undertaken by an AI that you and I would call insane. The "censorship" is intended to mitigate the harm that might be done by such an AI. Since I think it possible that a future AI (particularly one built by certain people) might actually be insane, I have no problem with preemptive mitigation activities, even if the risk seems minuscule.

In other words, why make such a big deal out of it?

Comment author: timtyler 30 October 2010 09:28:24PM 8 points

Having your comments deleted often rubs people up the wrong way, I find.

Comment author: XiXiDu 30 October 2010 07:05:24PM 0 points

Hmm, I haven't. It was meant to explain where that sentence came from in my copy-and-paste comment above. The gist of the comment concerned the foundational evidence supporting the premise of risks from AI going FOOM.