Perplexed comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I've looked at it.
That is my impression too, which is why I don't understand why you are complaining about censorship of ideas and wondering why EY doesn't spend more time refuting them.
As I understand it, we are talking about actions that might be undertaken by an AI that you and I would call insane. The "censorship" is intended to mitigate the harm that might be done by such an AI. Since I think it possible that a future AI (particularly one built by certain people) might actually be insane, I have no problem with preemptive mitigation activities, even if the risk seems minuscule.
In other words, why make such a big deal out of it?
Having your comments deleted tends to rub people the wrong way, I find.
Hmm, I haven't. It was meant to explain where that sentence came from in my copy-and-paste comment above. The gist of that comment concerned the foundational evidence supporting the premise of risk from an AI going FOOM.